

Bottom Line Up Front: AI breakthroughs like OpenAI’s o1 and o3 models introduce dynamic, inference-time intelligence—redefining how compute can boost output quality. But unlocking their value depends on empowering the right people—“barrels”—who can drive AI initiatives forward. As super-capable AI becomes widely available, organizations that fail to identify and equip their key talent risk falling behind. Success won’t come from having the best tools alone—it will come from knowing how to aim them.
Executive Summary:
Recent advancements in AI, particularly OpenAI’s o1 and o3 models, are redefining the way organizations can enhance productivity and adaptability. The “barrels and ammunition” metaphor illustrates a critical lesson: true organizational success relies on identifying and empowering key individuals (“barrels”) who can lead initiatives from concept to completion, leveraging tools (“ammunition”) like AI to achieve outcomes.
Blog Post
A decade ago, Stanford University and Y Combinator created a course offered at Stanford called “How to Start a Startup.” Sam Altman, who now heads OpenAI, brought together a group of speakers to discuss how to run a startup. This write-up focuses on Lecture 14, “How to Operate,” and specifically on an idea in it called “Barrels and Ammunition.”
Watch now: Barrels and Ammunition
The o1 Breakthrough
Traditionally, LLMs improved their performance through scale during training: larger models required more resources but delivered greater accuracy by processing more information. Once trained, the accuracy of these models was fixed, and additional compute during inference only made generating answers faster, not better.
With “o1,” there’s a game-changing approach. Accuracy can improve during inference/run time by allocating more compute time. This allows “o1” to refine its responses by iteratively exploring multiple options and evaluating different solutions. This dynamic process contrasts with traditional models, where inference primarily focused on speed.
In essence, “o1” introduces a new way of thinking about compute resources: not just as a means to support the model but as a way to actively enhance the quality of outputs. This makes inference compute more like a dynamic memory system, enabling deeper exploration and refinement. For organizations, this represents a significant opportunity to leverage compute power more strategically for higher-quality AI-driven insights.
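One way to picture inference-time scaling is a best-of-n search: the model samples several candidate answers, a scoring step picks the strongest, and extra compute buys more candidates to choose from. The sketch below is purely illustrative; `generate` and `score` are hypothetical stand-ins for a model call and a verifier, not OpenAI APIs, and this is not how o1 actually works internally.

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a model call; returns one candidate answer.
    random.seed(seed)
    return f"answer-{random.randint(0, 9)}"

def score(candidate: str) -> float:
    # Hypothetical verifier/reward model; higher is better.
    return float(candidate.rsplit("-", 1)[1])

def best_of_n(prompt: str, n: int) -> str:
    """More inference compute (larger n) means more candidates explored."""
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)

# A larger compute budget can only match or improve the best score found,
# since the n=16 candidate pool is a superset of the n=2 pool.
low = best_of_n("solve the puzzle", n=2)
high = best_of_n("solve the puzzle", n=16)
assert score(high) >= score(low)
```

The key property is in the final assertion: spending more at inference time never hurts and usually helps, which is the behavior the o1 results demonstrate.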
o3 significantly outperforms o1
The o3 model demonstrates significant performance improvements over o1, with the degree of outperformance directly tied to the amount of compute allocated to the task. A striking example of this is o3’s exceptional results on the ARC prize, a visual puzzle test specifically designed to be straightforward for humans but challenging for LLMs:
OpenAI’s new o3 system has achieved a groundbreaking milestone, scoring 75.7% on the Semi-Private Evaluation set within the $10k compute limit of the public leaderboard. A high-compute configuration (172x more expensive) pushed this even further to an impressive 87.5%. This represents a significant leap in AI capabilities, demonstrating a level of task adaptability not seen before in the GPT family of models.
For perspective, it took four years for ARC-AGI-1 performance to move from 0% with GPT-3 in 2020 to just 5% with GPT-4o in 2024. With o3, this trajectory has shifted dramatically, signaling a fundamental change in AI’s ability to tackle new and complex challenges. This isn’t just an incremental improvement; it’s a genuine breakthrough that approaches human-level performance in the ARC-AGI domain.
That said, this capability currently comes at a premium. For now, solving ARC-AGI tasks using o3 in its low-compute mode costs $17–$20 per task, compared to approximately $5 per task using human labor. However, as with many emerging technologies, cost-performance is expected to improve significantly in the coming months and years. We should anticipate these advancements, as o3’s capabilities will likely become competitive with human work in a relatively short timeframe.
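The raw cost gap from those figures is easy to quantify. A back-of-the-envelope sketch, where the per-task prices come from the numbers above and the cost-halving rate is purely an illustrative assumption, not a claim from this post:

```python
# Figures from above: o3 low-compute ~$17-20/task, human ~$5/task,
# and the high-compute configuration is ~172x more expensive.
O3_LOW = 20.0           # upper bound of the $17-20 range
HUMAN = 5.0
O3_HIGH = O3_LOW * 172  # roughly $3,440 per task at high compute

ratio = O3_LOW / HUMAN  # o3 low-compute is currently ~4x human cost
print(f"o3 low-compute vs human: {ratio:.1f}x")

# Illustrative only: if per-task cost halved each year (an assumed
# rate), how many halvings until o3 matches human labor on price?
cost, halvings = O3_LOW, 0
while cost > HUMAN:
    cost /= 2
    halvings += 1
print(f"breakeven after ~{halvings} halvings (${cost:.2f}/task)")
```

Under that assumed halving rate, only two halvings separate o3’s low-compute price from human parity, which is why a “relatively short timeframe” is plausible even if the exact decline rate is unknown.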
The potential of AI, especially inference-time scaling models, lies in their ability to act as powerful tools. Over time, the costs associated with these models — even marginal ones — are likely to become negligible compared to the expenses of human labor, particularly when considering additional factors like coordination and motivation beyond salaries.
While significant technical advancements are still needed to fully realize this vision, the breakthroughs seen with o1 and now o3 suggest that this future is approaching faster than many anticipated. OpenAI CEO Sam Altman captured this sentiment on his blog:
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
We all can appreciate the technical optimism surrounding AI advancements. My working definition of AGI (Artificial General Intelligence) is a system that can be assigned tasks and trusted to complete them with a good-enough success rate. By contrast, ASI (Artificial Super Intelligence) would be characterized by its ability to define those tasks independently. To put that in barrels-and-ammunition terms, AGI is ammunition, while ASI has the potential to be a barrel.
When AI “ammunition” becomes widely available, many organizations will likely discover they are ill-prepared to fully leverage high-precision tools, much as P&G was not initially equipped to capitalize on highly targeted advertising outcomes from YouTube and Facebook. Even with well-documented processes, the gaps filled by human expertise, tacit knowledge, and experiential judgment will become glaringly apparent.
On the other hand, some companies will be able to make the transition, thanks to the quality of their data, their documented processes, and the fact that their people’s adaptability and growth curves exceed their organization’s. Unfortunately, a given company’s adaptability and growth curve is set by its industry and peers, and ultimately by its customers’ adaptability and growth curves as they find new ways to get their Jobs to be Done.
Historical precedents suggest that transformative capabilities are rarely adopted first by incumbents. Facebook, for instance, built its success by creating its own ecosystem of advertisers rather than becoming a tool for legacy players like P&G. Similarly, the old television-centric advertising model remained relevant far longer than expected. Over time, both traditional and digital paradigms evolved to coexist. If AI follows a similar trajectory, it is likely that the most significant beneficiaries of AI will be new companies, particularly those capable of leveraging AI agents to their fullest potential.
However, if a given industry remains immune to AI transformation, venture capitalists will see it as ripe for disruption, backing startups that provide customers with alternative ways to get their jobs done. These new companies, many of which may be niche or “long tail” players, will capitalize from the outset on the flexibility and precision AI enables, leveraging the reduced operational overhead their massive stockpile of ammunition affords them.
Meanwhile, traditional enterprises will struggle to integrate AI meaningfully in the near term, except in scenarios where job replacement at scale (similar to the mainframe era) is achievable. For enterprises with deep real-world differentiation, true AI adoption will likely be a long-term process. This perspective is not meant to downplay the significance of AI. Rather, as the saying goes, “the future is already here; it’s just not evenly distributed.” Ironically, the larger and more established a company is, the less it may initially benefit from AI.
What Barrels, Ammunition, & the Future of AI in the Electric Utility Industry Means for SCE
SCE faces unique challenges: managing aging infrastructure, meeting sustainability goals, and navigating regulatory complexities. AI advancements, particularly those that improve task adaptability and decision-making precision, can significantly enhance grid reliability, optimize energy distribution, and accelerate renewable energy integration.
However, the successful adoption of AI will depend on identifying and nurturing barrels within the organization. These individuals can:
- Lead cross-functional teams to implement AI-driven systems.
- Drive the adoption of real-time AI insights.
- Pioneer initiatives that leverage AI to optimize forecasting.
Utilities that recognize and invest in their barrels will be better positioned to harness AI’s transformative potential.
What This Means for IT
IT organizations within the electric utility sector must evolve to support these changes. The rapid development of AI models like o1 and o3 requires:
- Infrastructure Modernization: Upgrading computational and network capacity to accommodate inference-time scaling models.
- Data Strategy Overhauls: Ensuring data quality, accessibility, and integration across legacy and modern systems.
- Skill Development: Training IT teams to recognize and support barrels who can bridge the gap between technical and operational domains.
- Collaboration Models: Fostering closer collaboration between IT and operational units to align AI initiatives with business objectives.
By becoming strategic partners in AI adoption, IT organizations can drive meaningful outcomes and position themselves as indispensable enablers of innovation.
What I Should Do to Get Ready for This Change
Here are actionable steps:
- Identify and Cultivate Your Inner Barrel: Reflect on your own strengths and how you naturally attract collaboration or achieve results. Challenge yourself with progressively complex tasks to expand your skills and discover your limits within that one area.
- Invest in Your AI Readiness: Learn about advanced AI models and their infrastructure requirements through online courses or industry resources. Improve your data literacy by understanding how to collect, clean, and analyze data effectively.
- Build Your Cross-Functional Expertise: Seek opportunities to learn about adjacent fields, such as IT, operations, or business strategy. Commit to continuous learning through workshops, webinars, or hands-on projects related to AI and its applications.
- Evaluate Your Growth Curve: Align your personal career goals with the growth trajectory of your organization and the electric utility industry. Identify areas where you can grow faster by seeking mentorship, taking on new challenges, or pursuing advanced training.
- Engage in Strategic Planning for Your Future: Stay informed about how AI is reshaping industries, and think about how these changes could affect your role or career path. Develop your own contingency plans by acquiring skills that prepare you for potential disruptions, ensuring you remain adaptable and valuable.