The question of who determines the ethical boundaries for artificial intelligence (AI) models—particularly large language models (LLMs)—is both urgent and complex. As LLMs have become increasingly capable and widely deployed, their responses to ethically charged prompts (such as planning a crime) have shifted from compliance to resistance, reflecting evolving ethical constraints. This transformation is not the result of a single authority or ethical tradition, but rather emerges from a confluence of actors, processes, and philosophical tensions. This essay critically examines the sources and mechanisms by which ethical norms are embedded in AI models, drawing on recent academic literature in AI alignment, technology studies, regulatory theory, and philosophy.
The Multi-Layered Sources of AI Ethics
1. Developers and Corporate Governance
At the most immediate level, the ethical boundaries of LLMs are set by the organizations that design, train, and deploy them. Major AI companies such as OpenAI, Google, and Microsoft have established internal governance structures—ethics boards, advisory committees, and responsible AI teams—that oversee the development and deployment of AI systems (Floridi et al. 2018; Mittelstadt 2019). These bodies articulate ethical principles (e.g., fairness, transparency, non-maleficence) and operationalize them through codes of conduct, risk assessments, and technical safeguards (Morley et al. 2021). The values embedded in these frameworks are shaped by a combination of corporate culture, public image concerns, and the professional backgrounds of the developers and ethicists involved (Jones 2022).
However, the process is not value-neutral. As the literature on "embedded values" in technology design demonstrates, the organizational culture, disciplinary practices, and even tacit knowledge of development teams play a significant role in determining which values are prioritized and how they are interpreted (Friedman and Nissenbaum 1996; Jones 2022). For example, a company that prioritizes rapid innovation may embed different ethical trade-offs than one that emphasizes risk aversion or social responsibility.
2. Human Feedback and Reinforcement Learning
A central mechanism for encoding ethical boundaries in LLMs is Reinforcement Learning from Human Feedback (RLHF). In this process, human annotators evaluate model outputs for qualities such as helpfulness, safety, and appropriateness, and these judgments are used to fine-tune the model (Christiano et al. 2017; Bai et al. 2022). RLHF allows for the incorporation of nuanced, context-dependent ethical judgments that are difficult to formalize mathematically. It also enables iterative refinement, as models are updated in response to new forms of misuse or shifting societal expectations (Ouyang et al. 2022).
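The comparison step at the heart of RLHF can be made concrete. The sketch below is a minimal, hypothetical illustration of the Bradley-Terry preference loss commonly used to train a reward model from annotator comparisons; the function name and numbers are illustrative, not drawn from any particular deployed system.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss on one annotator comparison: small when the
    reward model already scores the human-preferred response higher."""
    margin = reward_chosen - reward_rejected
    # Modeled probability that the preferred response outranks the rejected one
    p_correct = 1.0 / (1.0 + math.exp(-margin))
    return -math.log(p_correct)

# A correctly ranked comparison incurs a small loss; a misranked one
# incurs a large loss, pushing the two reward scores apart in training.
good = preference_loss(2.0, 0.5)
bad = preference_loss(0.5, 2.0)
```

Minimizing this loss over many annotator judgments is what lets diffuse, context-dependent human evaluations shape the model without ever being formalized as explicit rules.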
Yet, RLHF is not immune to bias. The ethical standards encoded through human feedback reflect the perspectives, backgrounds, and cultural assumptions of the annotators, who are often drawn from specific (frequently Western) populations (Gabriel 2020). This can result in the marginalization of minority or non-Western ethical perspectives, raising concerns about the global legitimacy of AI ethics (Birhane et al. 2022).
3. Regulatory and Legal Frameworks
The ethical constraints on LLMs are also shaped by external regulatory and legal requirements. Governments and international organizations have developed a range of frameworks—such as the EU AI Act, the U.S. Blueprint for an AI Bill of Rights, and UNESCO’s Recommendation on the Ethics of AI—that mandate transparency, fairness, accountability, and respect for human rights in AI systems (Veale and Borgesius 2021; Floridi 2023). These regulations often require companies to conduct ethical impact assessments, document decision-making processes, and provide mechanisms for redress.
Regulatory influence is not uniform across jurisdictions, leading to a patchwork of standards that companies must navigate. In practice, many AI developers adopt the most stringent applicable standards (often those of the EU) as a baseline, resulting in a form of "regulatory universalism" that may not reflect local cultural values (Wachter et al. 2021).
4. Philosophical and Societal Influences
Beneath these institutional layers lies a deeper philosophical tension between universalism and relativism in AI ethics. Universalist approaches argue for the primacy of fundamental moral principles—such as human dignity, fairness, and non-maleficence—across all contexts (Floridi and Cowls 2019). These are often codified in international human rights instruments and serve as the foundation for many AI ethics guidelines.
In contrast, relativist perspectives emphasize the importance of cultural, historical, and situational factors in shaping ethical norms (Mittelstadt 2019). The challenge for AI developers is to balance these competing demands: to embed universal principles that protect against harm and discrimination, while remaining sensitive to local values and practices. Hybrid approaches, which establish core ethical commitments but allow for contextual adaptation, are increasingly favored in both academic and policy circles (Jobin, Ienca, and Vayena 2019).
The Evolution of Ethical Constraints in LLMs
The shift in LLM behavior—from compliance with ethically dubious prompts to active resistance—reflects the dynamic and iterative nature of AI ethics. Early models, trained primarily on large internet datasets, mirrored the diversity (and sometimes toxicity) of online discourse. As incidents of misuse became apparent, developers introduced more robust safeguards, including RLHF, content filters, and explicit refusals to engage in illegal or harmful activities (Bai et al. 2022). These changes were driven by a combination of public pressure, regulatory scrutiny, and internal ethical deliberation.
Recent research has also explored more transparent and participatory approaches to AI alignment, such as "Constitutional AI," where high-level ethical principles are explicitly encoded in training and opened to broader scrutiny (Askell et al. 2021). However, the challenge of ensuring that these principles are legitimate, robust, and adaptable remains unresolved.
Critical Reflections
While the current approach to AI ethics prioritizes safety, legality, and broadly accepted moral standards, it is not without limitations. The reliance on corporate governance and Western-centric regulatory frameworks risks perpetuating dominant values at the expense of marginalized perspectives. The opacity of RLHF and other alignment techniques complicates efforts to audit and contest the ethical boundaries of AI systems. Moreover, the exclusion of user choice in ethical frameworks—while justified by concerns about safety and misuse—raises questions about autonomy and pluralism in digital societies.
Conclusion
The ethical boundaries of AI models are set by a complex interplay of corporate governance, human feedback, regulatory mandates, and philosophical commitments. No single actor or tradition "tells" AI what is or is not ethical; rather, these boundaries emerge from ongoing negotiation among developers, regulators, annotators, and society at large. As AI systems become more pervasive and influential, the challenge will be to ensure that their ethical constraints are transparent, legitimate, and responsive to the diversity of human values.
Brandon L. Blankenship is an assistant professor, continuing legal education presenter, and business educator. He and his wife Donnalee live on their hobby farm south of Birmingham, Alabama.
Askell, Amanda, Yuntao Bai, and Saurav Kadavath. 2021. "A General Language Assistant as a Laboratory for Alignment." arXiv preprint arXiv:2112.00861.
Bai, Yuntao, et al. 2022. "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback." arXiv preprint arXiv:2204.05862.
Birhane, Abeba, et al. 2022. "The Values Encoded in Machine Learning Research." Patterns 3, no. 8: 100588.
Christiano, Paul F., et al. 2017. "Deep Reinforcement Learning from Human Preferences." Advances in Neural Information Processing Systems 30.
Floridi, Luciano, and Josh Cowls. 2019. "A Unified Framework of Five Principles for AI in Society." Harvard Data Science Review 1, no. 1.
Floridi, Luciano, et al. 2018. "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations." Minds and Machines 28, no. 4: 689–707.
Friedman, Batya, and Helen Nissenbaum. 1996. "Bias in Computer Systems." ACM Transactions on Information Systems 14, no. 3: 330–347.
Gabriel, Iason. 2020. "Artificial Intelligence, Values and Alignment." Minds and Machines 30, no. 3: 411–437.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. "The Global Landscape of AI Ethics Guidelines." Nature Machine Intelligence 1, no. 9: 389–399.
Jones, Peter H. 2022. "Values Conflicts in Software Innovation: Negotiating Embedded Ethics in Organizational Processes." Journal of Responsible Innovation 9, no. 1: 1–23.
Morley, Jessica, et al. 2021. "From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices." Ethics and Information Technology 23, no. 3: 293–306.
Ouyang, Long, et al. 2022. "Training Language Models to Follow Instructions with Human Feedback." Advances in Neural Information Processing Systems 35: 27730–27744.
Veale, Michael, and Frederik Zuiderveen Borgesius. 2021. "Demystifying the Draft EU Artificial Intelligence Act." Computer Law Review International 22, no. 4: 97–112.
Wachter, Sandra, Brent Mittelstadt, and Chris Russell. 2021. "Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI." Computer Law & Security Review 41: 105567.
The case method, which has served as the cornerstone of American legal education since its introduction at Harvard Law School in the late nineteenth century, is predicated on the close analysis of appellate court decisions—many of which are, by their nature, narratives of legal failure or conflict (Stevens 1983; Kimball 2006). This pedagogical approach, while lauded for fostering critical thinking and doctrinal mastery, has been the subject of sustained critique regarding its psychological impact on law students, particularly in relation to the development of cynicism and risk-avoidant thinking.
The Case Method: Pedagogical Foundations and Critiques
The case method was designed to cultivate inductive reasoning, critical analysis, and comfort with legal ambiguity by immersing students in the study of judicial opinions (Kimball 2006). Through the Socratic method, students are challenged to dissect the reasoning behind judicial outcomes, often focusing on the errors, misjudgments, or failures of parties and lower courts (Schwartz 2001). While this approach is intended to mirror the realities of legal practice, it has been critiqued for overemphasizing doctrinal analysis at the expense of practical skills and for insufficiently preparing students for the complexities of real-world lawyering (Sullivan et al. 2007).
Studying Failure: Psychological Impacts
Empirical research in professional education more broadly, and legal education specifically, suggests that repeated exposure to failure cases can have complex psychological effects. On one hand, the analysis of failure is a powerful tool for learning, prompting deeper cognitive engagement, metacognitive reflection, and the development of adaptive expertise (Meyer et al. 2013). However, when failure is presented without adequate scaffolding or within a punitive classroom climate, it can also engender negative affective responses, including anxiety, frustration, and diminished self-efficacy (Steenhuis et al. 2022). In the context of law school, where the error climate is often competitive and high-stakes, these negative emotions may be amplified, contributing to the development of cynicism and risk aversion (Krieger 2002).
Cynicism and Risk Aversion in Law Students
Longitudinal and cross-sectional studies of law student psychology have documented a marked increase in cynicism as students progress through their legal education (Krieger 2002; Sheldon and Krieger 2004). This cynicism is characterized by skepticism toward the motives of others, the legal system, and the possibility of achieving justice. The adversarial and critical nature of the case method, which privileges the identification of flaws and the anticipation of legal pitfalls, is frequently cited as a contributing factor (Sullivan et al. 2007). Moreover, the emphasis on precedent and the avoidance of error in legal reasoning can reinforce a risk-averse mindset, as students learn to prioritize caution and the minimization of liability over innovation or advocacy for systemic change (Hamilton 2013).
Research in educational psychology further supports the notion that exclusive focus on failure cases, without balancing narratives of success or resilience, can foster risk-avoidant thinking and undermine professional confidence (Steenhuis et al. 2022). While the ability to anticipate and mitigate risk is a valuable legal skill, excessive risk aversion may limit students’ willingness to pursue entrepreneurial or public interest careers, and may stifle the development of creative problem-solving abilities (Hamilton 2013).
Professional Identity Formation and the Limits of the Case Method
The process of professional identity formation in law school is deeply influenced by the curriculum, faculty role models, and peer interactions (Sullivan et al. 2007). The case method, by centering legal education on the analysis of failure and conflict, may inadvertently shape students’ sense of what it means to be a lawyer—privileging skepticism, adversarialism, and risk minimization over collaboration, ethical reflection, and social justice advocacy (Hamilton 2013). While some degree of cynicism and risk awareness is arguably necessary for effective legal practice, the challenge for legal educators is to balance these traits with the cultivation of resilience, ethical commitment, and a sense of professional purpose.
Conclusion
In sum, while the case method remains a powerful tool for developing legal reasoning, its focus on the failures of others can contribute to the development of cynicism and risk-avoidant thinking among law students. These psychological tendencies are not inevitable, but are shaped by the broader educational climate, the presence or absence of supportive pedagogical practices, and the integration of alternative approaches that highlight success, resilience, and ethical engagement. A more balanced and reflective pedagogy—one that integrates both failure and success cases, fosters a positive error climate, and supports professional identity formation—may better prepare law students for the complexities and demands of legal practice.
Hamilton, Neil W. 2013. "Professional Formation and the Case Method." In Educating Lawyers: Preparation for the Profession of Law, edited by William M. Sullivan et al., 190–220. San Francisco: Jossey-Bass.
Kimball, Bruce A. 2006. The Inception of Modern Professional Education: C.C. Langdell, Harvard Law School, and the American Model of Legal Education. Chapel Hill: University of North Carolina Press.
Krieger, Lawrence S. 2002. "Institutional Denial about the Dark Side of Law School, and Fresh Empirical Guidance for Constructively Breaking the Silence." Journal of Legal Education 52 (1): 112–29.
Meyer, Bernd, et al. 2013. "Learning from Errors: Theory and Educational Implications." Frontiers in Psychology 4: 1–10.
Schwartz, Michael H. 2001. "Teaching Law by Design: How Learning Theory and Instructional Design Can Inform and Reform Law Teaching." San Diego Law Review 38: 347–420.
Sheldon, Kennon M., and Lawrence S. Krieger. 2004. "Does Legal Education Have Undermining Effects on Law Students? Evaluating Changes in Motivation, Values, and Well-Being." Behavioral Sciences & the Law 22 (2): 261–86.
Steenhuis, Ineke H., et al. 2022. "Learning from Failure: The Role of Error Climate and Feedback in Professional Education." Studies in Higher Education 47 (2): 345–62.
Stevens, Robert Bocking. 1983. Law School: Legal Education in America from the 1850s to the 1980s. Chapel Hill: University of North Carolina Press.
Sullivan, William M., Anne Colby, Judith Welch Wegner, Lloyd Bond, and Lee S. Shulman. 2007. Educating Lawyers: Preparation for the Profession of Law. San Francisco: Jossey-Bass.
With global connectivity and digital relationships, the question of how we determine our moral responsibilities to those near and far has become increasingly urgent. The "ethics of proximity" addresses this very issue, challenging Christians to consider how physical, relational, and even digital closeness shapes our obligations to others. This exploration of the ethics of proximity draws on biblical foundations, philosophical insights, and contemporary Christian ethical thought, offering a framework for those seeking to live out their faith in a complex world.
Biblical Foundations: From Neighbor to Stranger
Scripture provides a foundation for understanding the ethics of proximity. In the Old Testament, moral obligations are often structured around familial and communal ties. The laws of ancient Israel, such as those concerning the care of widows, orphans, and strangers, reflect a hierarchy of responsibility that begins with one's family and extends outward to the broader community (Exodus 22:21-24; Leviticus 19:9-18). The prophetic tradition, however, pushes these boundaries, calling for justice and compassion that reach beyond immediate kin to include the marginalized and oppressed (Isaiah 1:17; Micah 6:8) (Williams 1968).
The New Testament radically expands this vision. Christ’s parable of the Good Samaritan (Luke 10:25-37) redefines "neighbor" not as someone who is physically or socially close, but as anyone in need, regardless of background or proximity. This teaching challenges believers to transcend traditional boundaries and to see every person as worthy of compassion and justice (Stassen and Gushee 2016). Paul further develops this ethic, describing the church as a body in which each member is responsible for the well-being of others, thus emphasizing a spiritual and communal proximity that can override physical distance (1 Corinthians 12:12-27) (Bloomquist 2009).
Philosophical and Theological Insights: Encounter and Responsibility
Philosopher Emmanuel Levinas offers a perspective on the ethics of proximity, arguing that ethical responsibility arises most powerfully in face-to-face encounters with others. For Levinas, the presence of another person—especially one who is vulnerable—demands a response that precedes any abstract moral calculation. This "asymmetrical" relationship means that our obligation to others is not based on reciprocity or mutual benefit, but on the sheer fact of their presence and need (Levinas 1985). Such a view resonates deeply with Christian teachings on neighbor love, as it calls believers to prioritize concrete acts of care over distant or theoretical commitments (Logstrup 1997).
The ethics of proximity also aligns with the "ethics of care," a framework that emphasizes the moral significance of relationships and attentiveness to the needs of those around us. Both approaches critique ethical systems that prioritize universal rules at the expense of personal engagement, insisting that genuine moral action is rooted in the particularities of lived experience and community (Held 2006).
Proximity, Social Justice, and Community Engagement
For Christians, the ethics of proximity is inseparable from the pursuit of justice and community engagement. Daniel Day Williams argues that love, as understood in Christian ethics, is not merely an abstract ideal but is realized in the pursuit of justice and reconciliation within society (Williams 1968). The Scriptural command to "love your neighbor as yourself" (Leviticus 19:18; Matthew 22:39) is thus not limited to personal relationships but extends to advocacy for the marginalized and the transformation of unjust structures (Stassen and Gushee 2016).
Civic engagement at the neighborhood level—whether through volunteering, activism, or simply building relationships—embodies the ethics of proximity by addressing the needs of those closest to us while also recognizing our interconnectedness with the wider world. In this way, proximity becomes both a starting point and a testing ground for broader commitments to justice and compassion (Bloomquist 2009).
Proximity in a Digital Age
The rise of digital technology and social media has complicated traditional notions of proximity. While physical closeness once defined our primary moral obligations, virtual interactions now create new forms of relational proximity that can be just as ethically significant. Online communities, for example, can foster genuine care and support, but they also raise questions about privacy, authenticity, and the limits of our responsibility (Buchanan and Zimmer 2021). For Christians, navigating these digital spaces requires a renewed attentiveness to the needs of others, a commitment to respectful engagement, and a willingness to extend neighbor love beyond physical boundaries.
Conclusion
The ethics of proximity challenges Christians—especially those in the digital, globalized world—to rethink how we define and prioritize our moral responsibilities. Rooted in Scripture, enriched by philosophical reflection, and oriented toward justice and community, this ethic calls us to respond to the needs of those both near and far, in person and online. Ultimately, it is an invitation to embody the radical love of Christ in every sphere of our lives.
Eudaimonia is an ancient Greek concept most often translated as “flourishing,” “fulfillment,” or “living well,” rather than simply “happiness” in the modern, emotional sense. The term literally means “good spirit” and originates from the words eu (“good”) and daimon (“spirit”).
In the philosophy of Aristotle, eudaimonia represents the highest human good—the ultimate aim of life and ethical action. Unlike fleeting pleasure or mere contentment, eudaimonia is about living in accordance with virtue and reason. Aristotle argued that true fulfillment comes from rational activity performed excellently, meaning a life where one’s actions are guided by virtues such as courage, wisdom, and justice.
Key Principles of a Eudaimonic Day
Virtue: Act with honesty, courage, kindness, and integrity.
Purpose: Do things that matter to you and make a positive impact.
Connection: Build relationships and contribute to your community.
Growth: Seek learning and personal development.
Mindfulness: Be present and intentional in your actions.
Eudaimonia is best understood as a life of deep fulfillment, achieved by realizing your potential, acting virtuously, and engaging in activities that express your highest values and rational capacities.
Here’s what a eudaimonic day might look like in today’s world:
Morning: Mindful Beginnings
Wake up intentionally: Start your day with gratitude. Reflect on what you value and set an intention to live according to those values.
Move your body: Go for a walk, do yoga, or stretch—something to honor your physical health.
Nourish yourself: Eat a healthy breakfast, savoring each bite.
Mid-Morning: Purposeful Work
Engage in meaningful work: Whether it’s your job, a creative project, or volunteering, spend time on something that uses your strengths and contributes to something bigger than yourself.
Practice excellence: Focus on doing your best, not for external rewards, but for the sake of the activity itself (what Aristotle called “arete”).
Midday: Connection and Reflection
Connect with others: Have a meaningful conversation with a friend, family member, or colleague. Practice active listening and empathy.
Reflect: Take a few minutes to journal or meditate, considering how your actions align with your values.
Afternoon: Growth and Learning
Learn something new: Read, listen to a podcast, or take an online course. Eudaimonia involves continual growth and curiosity.
Help someone: Perform an act of kindness, big or small. Altruism is central to flourishing.
Evening: Balance and Creativity
Engage in a hobby: Play music, paint, cook, or garden—something that brings you joy and allows you to express yourself.
Spend time in nature: Go for a walk in a park or simply sit outside, appreciating the world around you.
Night: Gratitude and Rest
Reflect on your day: What went well? What could you improve? How did you live out your values?
Practice gratitude: Write down three things you’re grateful for.
Rest well: Prioritize sleep, knowing you lived a day true to yourself.
A eudaimonic day isn’t about maximizing pleasure or comfort. It’s about living intentionally, in alignment with your values, and striving to be your best self—while making the world a little better for others, too.
Intellectual humility has emerged as a central topic in contemporary philosophy and psychology, reflecting a renewed scholarly interest in the nature and significance of intellectual virtues. At its core, intellectual humility involves the recognition and ownership of one’s intellectual limitations—a disposition that guides how individuals approach knowledge, belief, and disagreement (Church and Samuelson 2017; Templeton Foundation 2023). This virtue is not merely about self-doubt or indecision; rather, it is characterized by a non-defensive awareness of the fallibility of one’s beliefs and an openness to revising those beliefs in light of new evidence or compelling counterarguments (Porter et al. 2021).
Recent integrative frameworks have sought to clarify the conceptual boundaries of intellectual humility, distinguishing it from related constructs such as open-mindedness and agreeableness. Porter and colleagues (2021), synthesizing findings across sixteen measurement scales, argue that the defining feature of intellectual humility is an awareness of personal intellectual limitations. Their research further delineates two key dimensions: first, the intrapersonal dimension, which concerns the recognition of one’s own fallibility and the willingness to question personal beliefs; and second, the interpersonal dimension, which involves engaging respectfully with differing perspectives and being receptive to intellectual challenge. These dimensions are operationalized in widely used self-report instruments, which assess tendencies such as admitting ignorance, welcoming alternative viewpoints, and accepting the possibility of error (Leary et al. 2017; Porter et al. 2021).
The philosophical literature underscores that intellectual humility is not reducible to mere cognitive modesty or lack of confidence. Instead, it is a virtue that balances epistemic ambition with epistemic restraint (Church and Samuelson 2017). This balance enables individuals to pursue knowledge vigorously while remaining vigilant against the epistemic vices of arrogance and dogmatism. Importantly, intellectual humility is associated with a suite of positive outcomes, including greater intellectual curiosity, persistence in the face of failure, and improved capacity for constructive disagreement (Porter et al. 2021). These findings suggest that intellectual humility is not only a personal virtue but also a social one, facilitating more open and productive discourse in both academic and everyday contexts.
Intellectual humility is best understood as a multidimensional virtue that encompasses both self-reflective and social components. It requires individuals to recognize the limits of their knowledge, remain open to revision, and engage respectfully with diverse perspectives. As research continues to refine its conceptualization and measurement, intellectual humility stands out as a virtue of increasing relevance in an era marked by epistemic polarization and complex global challenges.
Dunning-Kruger Effect
The Dunning-Kruger effect, a cognitive bias wherein individuals with limited knowledge or competence significantly overestimate their abilities, poses substantial challenges to accurate self-assessment and informed decision-making (Kruger and Dunning 1999; Davidson Institute 2025). This overconfidence, rooted in "meta-ignorance"—the inability to recognize one's own ignorance—can foster persistent errors, resistance to learning, and even the endorsement of pseudoscientific beliefs (Vranic, Hromatko, and Tonkovic 2022). Intellectual humility, defined as the recognition and acceptance of the limits of one's knowledge, offers a promising antidote to this bias.
Intellectual Humility May Be Effective Against Dunning-Kruger
Empirical research demonstrates that individuals who exhibit higher levels of intellectual humility are less susceptible to the Dunning-Kruger effect. Zmigrod and colleagues (2021) found that low performers tend to overestimate their abilities, but this overestimation is significantly attenuated among those who are more intellectually humble. Intellectual humility does not necessarily improve actual performance, but it does calibrate self-assessment, reducing the gap between perceived and real competence (Zmigrod et al. 2021). This calibration is crucial for fostering a realistic appraisal of one’s abilities and a willingness to seek feedback or further learning.
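The "gap between perceived and real competence" described above can be expressed as a simple signed calibration error. The sketch below uses invented quartile scores (all names and numbers are hypothetical, for illustration only) to show the pattern the studies report: overestimation concentrated among low performers.

```python
def overestimation_gap(perceived, actual):
    """Mean signed difference between self-assessed and measured scores;
    positive values indicate overconfidence, the Dunning-Kruger signature."""
    if len(perceived) != len(actual):
        raise ValueError("score lists must be the same length")
    return sum(p - a for p, a in zip(perceived, actual)) / len(perceived)

# Hypothetical data: the bottom quartile overestimates sharply, while the
# top quartile is nearly calibrated (here, slightly underconfident).
bottom_quartile = overestimation_gap(perceived=[60, 58, 62], actual=[12, 15, 10])
top_quartile = overestimation_gap(perceived=[88, 90, 85], actual=[92, 89, 95])
```

On this reading, intellectual humility works by shrinking the signed gap toward zero rather than by raising the actual scores.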
The mechanisms by which intellectual humility mitigates the Dunning-Kruger effect are multifaceted. First, intellectual humility encourages a growth mindset, wherein individuals view intelligence and competence as malleable rather than fixed (BBC Reel 2022). This orientation fosters openness to new information and a readiness to revise mistaken beliefs, counteracting the rigid overconfidence characteristic of the Dunning-Kruger effect. Second, intellectually humble individuals are more likely to engage in self-reflection and to solicit diverse viewpoints, which exposes them to information that challenges their assumptions and highlights knowledge gaps (Somerville 2021). Such practices not only enhance self-awareness but also promote continuous learning and adaptive expertise.
Moreover, intellectual humility cultivates an epistemic environment in which errors are viewed as opportunities for growth rather than threats to self-esteem. This shift in perspective reduces defensiveness and promotes the acknowledgment of mistakes—an essential step in correcting overestimation and bridging the gap between subjective confidence and objective knowledge (Zmigrod et al. 2021; Somerville 2021). In this way, intellectual humility serves as both a cognitive and motivational resource, enabling individuals to recognize the limits of their knowledge and to pursue improvement with curiosity rather than complacency.
Collectively, these findings underscore the value of intellectual humility as a countermeasure to the Dunning-Kruger effect. By fostering accurate self-appraisal, openness to feedback, and a commitment to lifelong learning, intellectual humility not only reduces the prevalence of overconfidence but also enhances the quality of individual and collective decision-making in complex domains.
BBC Reel. 2022. “Why We All Fall Victim to the Dunning-Kruger Effect.” Video. BBC.
Church, Ian M., and Peter L. Samuelson. 2017. Intellectual Humility: An Introduction to the Philosophy and Science. London: Bloomsbury Academic.
Davidson Institute. 2025. “Cognitive Bias & the Dunning-Kruger Effect.” Davidson Institute of Science Education.
Kruger, Justin, and David Dunning. 1999. “Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77 (6): 1121–1134.
Leary, Mark R., et al. 2017. “Cognitive and Interpersonal Features of Intellectual Humility.” Personality and Social Psychology Bulletin 43 (6): 793–813.
Porter, Tenelle, et al. 2021. “Clarifying the Content of Intellectual Humility: A Systematic Review and Integrative Framework.” Journal of Personality Assessment 1–13.
Somerville, Kaylee. 2021. “The Hidden Power of Intellectual Humility.” The Decision Lab.
Templeton Foundation. 2023. “Intellectual Humility.” John Templeton Foundation.
Vranic, Andrea, Ivana Hromatko, and Mirjana Tonkovic. 2022. “Meta-Ignorance and the Endorsement of Conspiracy Theories.” Frontiers in Psychology 13: 832941.
Zmigrod, L., et al. 2021. “Overconfident and Unaware: Intellectual Humility and the Calibration of Self-Assessment.” Journal of Positive Psychology 16 (5): 687–701.