Tyler Cowen, everyone's favorite public intellectual, has this great catchphrase: "Context is that which is scarce." It's a wise observation, and it holds in a remarkable range of contexts. It also captures a fundamental problem in how we use artificial intelligence today. The best AI systems, though incredibly powerful, struggle to take on complex creative work and achieve human-level results because they're missing context. These models carry a wealth of general knowledge from their training, but when asked to complete a specific task, they have surprisingly little context to rely on. Humans, by comparison, excel at context. We bring to every task years of accumulated experience, informal conversations, and implicit knowledge, which makes our contributions nuanced and deeply relevant, even if not always as precise or thorough as a machine's.
Take software development as an example. AI agents today show impressive coding ability, completing well-specified tasks quickly and competently. Yet they lack the rich contextual understanding that human developers naturally possess. Human developers understand their company's mission, grasp the project's overarching goals, know who the product's beneficiaries are, and recognize the preferences and styles of their teammates. They recall casual discussions, previous code reviews, and subtle hints from colleagues. An AI agent, handed just a GitHub issue or a bug report, has access to none of this. As a result, even technically correct code from AI can feel generic or disconnected from the project's real-world needs.
The same issue arises when AI generates research reports. Agents like Deep Research can quickly compile and summarize vast amounts of information, yet the reports they produce often lack style, nuance, and tailored relevance, because the AI neither knows who its readers are nor shares the implicit knowledge a human analyst draws on. Human researchers instinctively shape a report to its audience, folding in industry nuances and reader preferences along the way.
The solution to context scarcity lies in actively collecting, curating, and providing explicit context for AI systems. This means turning implicit human knowledge, previously scattered across informal conversations, individual memories, and casual notes, into explicit, structured context that AI can easily access and understand. For software teams, it means meticulously documenting everything from project goals and company mission to team dynamics, coding standards, and individual preferences. For general knowledge workers, it means compiling assets like clear audience profiles, detailed style guides, and comprehensive industry insights. Most AI users today severely underinvest in context. We need to change that: collecting and curating context for AIs is now Job #1 for most practitioners.
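To make "explicit, structured context" concrete, here is a minimal Python sketch of what a software team's context file might look like. The `ProjectContext` dataclass and every field in it are hypothetical illustrations, not a standard schema; the point is simply that the knowledge lives in one durable, machine-readable place that can be attached to any prompt.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ProjectContext:
    """Hypothetical container for the implicit knowledge a team
    usually keeps in heads, chat threads, and code reviews."""
    mission: str                            # why the product exists
    project_goals: list[str]                # what this project must achieve
    audience: str                           # who the beneficiaries are
    coding_standards: list[str]             # conventions the team follows
    reviewer_preferences: dict[str, str] = field(default_factory=dict)

context = ProjectContext(
    mission="Help small retailers forecast inventory.",
    project_goals=["Reduce stockouts", "Keep the public API backward compatible"],
    audience="Non-technical store owners, mostly on mobile devices",
    coding_standards=[
        "Prefer the standard library over new dependencies",
        "Every public function needs a docstring",
    ],
    reviewer_preferences={"alice": "small, focused pull requests"},
)

# Serialize the context so it can be prepended to every AI request.
print(json.dumps(asdict(context), indent=2))
```

Whether you use a dataclass, a YAML file, or a plain document matters far less than the habit: write the implicit stuff down once, keep it current, and feed it to the model every time.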
However, simply gathering context isn't enough. We also need to continuously test and validate the AI's understanding of the context it has been given. Regularly asking context-specific questions, about appropriate library choices, file structures, formatting and styling decisions, and even semantic, philosophical, and ideological tendencies, helps confirm the AI truly understands and effectively uses that context.
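A lightweight way to run such checks is sketched below: a tiny Python harness that poses context-specific probe questions and flags answers that diverge from what the team knows to be true. The probe list and the `ask_model` function are hypothetical placeholders; `ask_model` would wrap whatever model API you actually use.

```python
# Hypothetical probes: questions whose correct answers the team already knows.
CONTEXT_PROBES = [
    ("Which HTTP library should new code in this repo use?", "requests"),
    ("Where do database migration files live?", "db/migrations"),
    ("What tone should user-facing error messages take?", "plain, non-technical"),
]

def ask_model(question: str, context: str) -> str:
    """Placeholder for a real model call; swap in your provider's SDK."""
    raise NotImplementedError

def validate_context(context: str) -> list[str]:
    """Ask every probe question and return the ones the model gets wrong."""
    failures = []
    for question, expected in CONTEXT_PROBES:
        answer = ask_model(question, context)
        if expected.lower() not in answer.lower():  # crude containment check
            failures.append(question)
    return failures
```

Run periodically, a harness like this turns "does the AI actually understand our context?" from a vague worry into a checkable regression test.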
By carefully investing in context, we can significantly improve AI-generated results, moving beyond efficiency to achieve outputs that match the nuance, subtlety, and deep relevance humans naturally provide. Context isn't just scarce—it's the only thing that matters.