The Best Available Human

A Pragmatic Framework for AI Decision-Making

The "Best Available Human" (BAH) standard represents a pragmatic framework for determining when artificial intelligence should be employed over human expertise, proposed by Ethan Mollick of the University of Pennsylvania's Wharton School. This concept fundamentally shifts the evaluation criteria from comparing AI to perfect or ideal human performance to assessing AI against the actual human resources available in a given situation. Rather than asking whether AI can outperform the world's leading experts, the BAH standard poses a more practical question: "Is the AI better or worse than the best human you have access to at that moment?"14 This approach acknowledges the reality that most people and organizations do not have access to top-tier experts for every task, making AI a potentially valuable alternative when human expertise is limited, unavailable, or insufficient.

Origins and Theoretical Foundation

The Best Available Human standard was articulated by Ethan Mollick, an associate professor of management at the University of Pennsylvania's Wharton School and author of "Co-Intelligence: Living and Working With AI."1 Mollick developed this framework as part of his broader philosophy of AI pragmatism, positioning himself not as an "AI optimist" but as an "AI pragmatist" who recognizes that artificial intelligence technologies are already here and require practical approaches for implementation.4 The standard emerged from Mollick's observation that discussions about AI's benefits and harms were often theoretical, despite AI being readily available for actual use.4

The conceptual foundation of the BAH standard rests on three fundamental truths about contemporary AI that Mollick identified. First, AI is ubiquitous, with the same advanced language models available to individuals and organizations regardless of their size or resources.4 Second, AI capabilities are not immediately clear to users, requiring hands-on experimentation to understand potential applications.4 Third, the comparison point for AI effectiveness should be realistic rather than idealistic, focusing on available human resources rather than theoretical perfection.4

Mollick's framework specifically addresses the tendency to evaluate AI against the highest possible human performance, which he argues creates an unfair and impractical comparison.5 Instead, the BAH standard recognizes that "AI does not have to beat all human experts at a task to be useful to many people," particularly since "most of humanity is pretty bad" at many skilled tasks that require judgment.5 The pragmatic arithmetic follows directly: if only the top 1% of people in a given field can outperform an AI system that is correct 80% of the time, that system is still at least as good as their own judgment for the remaining 99% of people.5
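
This arithmetic can be made concrete with a stylized simulation. The skill distribution below is purely an assumption chosen for illustration, not data from Mollick or any cited source; it simply shows that when only about the top 1% of people exceed an 80%-accurate AI on a task, that AI matches or beats the individual's own accuracy for roughly 99% of the population.

```python
import random

# Stylized illustration of the BAH arithmetic (assumed numbers, not data from the article):
# per-person accuracy on some judgment task is spread between 20% and 85%, so that only
# roughly the top 1% of people exceed an AI that is right 80% of the time.
random.seed(42)
AI_ACCURACY = 0.80

def sample_human_accuracy() -> float:
    """Draw one hypothetical person's accuracy; the distribution itself is an assumption."""
    return random.triangular(0.20, 0.85, 0.55)  # low, high, mode

population = [sample_human_accuracy() for _ in range(100_000)]
helped = sum(acc <= AI_ACCURACY for acc in population)

print(f"AI accuracy: {AI_ACCURACY:.0%}")
print(f"Share of people the AI matches or exceeds: {helped / len(population):.1%}")
```

Under these assumptions the simulation reports a share close to 99%, which is the intuition behind Mollick's claim: the comparison that matters is the individual's own (or locally available) accuracy, not the top expert's.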

Practical Applications and Implementation Strategies

The implementation of the Best Available Human standard involves a contextual assessment that considers both the AI's capabilities and the human resources actually available in a specific situation. This approach has found particular relevance in scenarios where human expertise is scarce, expensive, or geographically inaccessible.3 The standard asks practitioners to evaluate whether "the best available AI in a particular moment, in a particular place, would do a better job solving a problem than the best available human that is actually able to help in a particular situation."3

One innovative approach uses multiple AI models to simulate expert consensus, much as humans seek several expert opinions before making a decision.6 The method employs multiple specialized small language models to generate draft opinions in specific areas of reasoning, which a larger language model then evaluates, selecting or synthesizing the best response (a minimal sketch of this pattern appears below).2 This mimics the human practice of consulting multiple experts and choosing the most informed opinion, potentially narrowing the gap between AI capabilities and human expertise in complex decision-making scenarios.2
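
The draft-and-judge pipeline can be sketched in a few lines. The code below is a minimal illustration of the pattern described above, not an implementation from the cited posts; the specialist and judge models are stubbed out with placeholder functions so the control flow is visible and the sketch runs without any external API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Draft:
    specialist: str
    opinion: str

def draft_opinions(question: str, specialists: Dict[str, Callable[[str], str]]) -> List[Draft]:
    """Each specialized small model drafts an opinion in its own area of reasoning."""
    return [Draft(name, model(question)) for name, model in specialists.items()]

def judge(question: str, drafts: List[Draft], large_model: Callable[[str], str]) -> str:
    """A larger model reviews the drafts and selects or synthesizes a final answer."""
    prompt = f"Question: {question}\n\nDraft opinions:\n"
    prompt += "\n".join(f"- [{d.specialist}] {d.opinion}" for d in drafts)
    prompt += "\n\nSelect or combine the best-supported advice."
    return large_model(prompt)

if __name__ == "__main__":
    # Stub models stand in for real small/large language model calls.
    specialists = {
        "finance": lambda q: "Build a three-month cash buffer before expanding.",
        "operations": lambda q: "Track inventory weekly to catch shrinkage early.",
    }
    large_model = lambda p: "Prioritize the cash buffer, then add weekly inventory checks."
    question = "How should a small shop prepare to grow?"
    print(judge(question, draft_opinions(question, specialists), large_model))
```

In practice each lambda would wrap a call to an actual model; the structural point is that the expensive large model is reserved for adjudication while cheaper specialized models do the drafting.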

The practical value of the BAH standard becomes particularly evident in resource-constrained environments. In many developing country contexts, the standard highlights how AI can fill critical skill gaps where qualified human experts are simply unavailable.3 For example, in Kenya, there were only 25,589 registered accountants in 2021 to serve 144,000 registered business entities and 1.5 million formally registered micro and small businesses, plus over 5 million informal businesses.3 In such contexts, the "best available human" may not actually be capable of providing the needed expertise, making AI a valuable alternative despite its limitations.

Workplace Implications and Learning Development Concerns

The application of the Best Available Human standard raises significant concerns about workplace learning and professional development pathways. Mollick has expressed particular worry about how AI adoption under the BAH framework might disrupt traditional apprenticeship systems and entry-level learning opportunities.1 His concern centers on the observation that AI often performs better than interns and may be displacing the low-level but vital tasks through which people learn as they advance in their careers.1

The fundamental challenge lies in what Mollick describes as the broken deal of traditional workplace learning: "I help train you, you do work."1 When AI can perform the work component more effectively than entry-level employees, organizations may question whether they should continue investing in human training and development.1 This creates a paradox where the efficiency gains from AI implementation could undermine the very learning systems that develop future human expertise.1

Mollick's concern extends beyond immediate productivity gains to long-term workforce development. He teaches students to be "good generalists" who then learn through apprenticeship systems about how to work effectively within specific companies and industries.1 The work may be "low level and tedious," but it provides essential learning about how particular fields operate.1 When AI removes these apprenticeship opportunities, it potentially creates a gap in professional development that could have lasting implications for human expertise development.1

The implications suggest that organizations implementing the BAH standard need to consider not just immediate task efficiency but also the long-term development of human capabilities. This may require developing new models for professional learning that can coexist with AI systems while still providing meaningful development opportunities for human workers.1

Research Evidence on AI Effectiveness and Human Perception

Scientific research has provided empirical support for some aspects of the Best Available Human standard, particularly in contexts involving emotional support and interpersonal communication. A study published in the Proceedings of the National Academy of Sciences found that AI can indeed make people feel heard, with recipients rating their sense of being heard at 5.41 on a 7-point scale when receiving AI-generated responses.7 This research demonstrated that AI's effectiveness stems from its ability to accurately detect emotions and generate responses that provide effective emotional support.7

The research revealed that AI showed greater empathic accuracy than human responders in detecting four of the six basic emotions studied: happiness, sadness, fear, and disgust.7 AI responses were characterized by higher levels of emotional support than human responses, more frequently acknowledging recipients' feelings, expressing understanding, and offering comfort.7 In contrast, human responders provided more practical support by sharing personal experiences and insights, but this approach proved less effective at making people feel heard.7

However, the research also identified a significant limitation in AI's effectiveness: people feel less heard when they know a response comes from AI rather than a human.7 This "AI label effect" completely offset the positive response effect in the study, suggesting that people's perceptions of AI significantly influence their experience of its support.7 The study found that attitudes toward AI mediated this effect, with individuals holding more positive attitudes toward AI being less influenced by knowing the source of the response.7

The research provides important context for the BAH standard by demonstrating that AI's effectiveness compared to available humans depends on several factors. When comparing AI to anonymous, non-interacting strangers online (similar to responses on platforms like Reddit or Twitter), AI performed comparably.7 However, the researchers noted that AI would likely fall short when compared to close relationships or trained professionals who have greater capability and motivation than average responders.7

Global Development and Emerging Market Applications

The Best Available Human standard has found particularly compelling applications in global development and emerging market contexts, where skill shortages create significant opportunities for AI to provide value.3 Developing country economies are characterized by substantial mismatches between employee skills and actual needs, creating situations where the "best available human" may not be adequately qualified for many tasks.3 According to an International Labour Organization report from 2019, skills mismatch can negatively affect labor market outcomes, worker productivity, competitiveness, and economic growth.3

In these contexts, the line where AI becomes more useful than the best available human is drawn much earlier than in mature markets because human expertise is more limited.3 Management practices tend to be weaker in emerging markets, leading to worse firm performance and creating situations where employees need more supervision and coaching, yet managers themselves may lack necessary competencies.3 This creates a particularly strong case for AI implementation under the BAH standard, as AI can potentially step in to handle not just routine tasks but also those requiring technical skills, writing ability, understanding of complex subjects, and business acumen.3

Mollick has specifically advocated for urgent development work to identify circumstances and topics under which AI offers good-enough advice versus when AI advice might be harmful in development and education contexts.2 He suggests creating validated prompts and AI systems that people can use worldwide, with the explicit understanding that "your comparison is not perfection, it is the Best Available Human."2 This approach aims to leverage AI's accessibility to provide support in situations where qualified human experts are simply not available.2

The global development application of the BAH standard also highlights the importance of cultural context and local validation. While AI may outperform available human resources in technical tasks, the implementation requires careful consideration of cultural values and local expertise.2 Some practitioners suggest that human oversight should remain as the "ultimate arbiter providing cultural values-based feedback," ensuring that AI implementation enhances rather than replaces appropriate human judgment.2

Limitations and Critical Considerations

Despite its pragmatic appeal, the Best Available Human standard faces several significant limitations and criticisms that practitioners must carefully consider. One fundamental challenge involves the difficulty of validating AI responses, particularly given the probabilistic nature of large language models.2 Critics question how to validate queries to what they describe as "probabilistic word generators," especially when AI systems fail in distinctly non-human ways that make human comparisons problematic.2

The standard also confronts what some observers call the "First Available Agent" problem, where the convenience of AI systems leads people to turn to AI before seeking available human help.2 This convenience factor, combined with the perceived value neutrality of AI systems, often causes people to trust AI over available human experts, even when qualified humans are accessible.2 This behavioral tendency may lead to suboptimal outcomes when human expertise would actually be superior but less immediately available.2

Another significant limitation emerges from the gap between AI hype and actual performance capabilities. Some practitioners report disappointment with current AI tools approximately two years after the initial excitement around generative AI.2 Microsoft executives have suggested that truly agentic AI capabilities might not be possible until GPT-6, leading to questions about the long-term viability of current AI implementations.2 This reality check suggests that the BAH standard may need to be applied more conservatively than initial enthusiasm suggested.2

The temporal and contextual nature of the "best available human" also creates implementation challenges. The standard requires ongoing assessment of both AI capabilities and available human resources, as both factors can change rapidly.4 What constitutes the best available human in a particular moment and place is highly dynamic, requiring sophisticated judgment about resource availability, urgency, and task requirements.3

Conclusion

The Best Available Human standard represents a significant contribution to practical AI implementation frameworks, offering a pragmatic alternative to perfectionist approaches that compare AI to idealized human performance. Ethan Mollick's framework acknowledges the reality that most individuals and organizations do not have access to world-class experts for every task, making AI a potentially valuable resource when human expertise is limited or unavailable. The standard has demonstrated particular value in emerging market contexts and global development applications, where skill shortages create clear opportunities for AI to provide meaningful support.

However, the implementation of the BAH standard requires careful consideration of multiple factors, including the disruption of traditional learning pathways, the influence of human perceptions and biases toward AI, and the dynamic nature of both AI capabilities and available human resources. The research evidence suggests that while AI can be effective in specific contexts, particularly those involving emotional support and technical tasks, its effectiveness is significantly influenced by user perceptions and the specific nature of available human alternatives.

Moving forward, successful application of the Best Available Human standard will likely require nuanced implementation that considers not only immediate task performance but also long-term implications for human skill development, organizational learning, and societal outcomes. As AI capabilities continue to evolve and human attitudes toward AI technology shift, the practical application of this standard will need to adapt accordingly, maintaining its pragmatic focus while addressing emerging challenges and opportunities.

  1. https://www.forbes.com/sites/joemckendrick/2024/07/12/how-generative-ai-rattles-the-workplace/
  2. https://www.linkedin.com/posts/emollick_the-best-available-human-standard-activity-7219350124689293314-6UDT
  3. https://bfaglobal.com/insights/when-the-best-available-human-is-an-ai/
  4. https://www.oneusefulthing.org/p/the-best-available-human-standard
  5. https://www.linkedin.com/posts/emollick_the-best-available-human-standard-activity-7153777419978489856-w6qZ
  6. https://www.linkedin.com/posts/nicholasxthompson_the-most-interesting-thing-in-tech-the-best-activity-7259676528144191488-r9C4
  7. https://www.pnas.org/doi/10.1073/pnas.2319112121
  8. https://www.oneusefulthing.org/p/15-times-to-use-ai-and-5-not-to
  9. https://twitter.com/emollick/status/1738033016826425462
  10. https://mitsloan.mit.edu/ideas-made-to-matter/how-to-tap-ais-potential-while-avoiding-its-pitfalls-workplace