The conversation around AI governance is moving faster than most people can track. OpenAI's new policy paper makes that clear. It offers bold ideas for how society might adapt, but it also exposes a deeper challenge: we still lack a realistic path for how organizations will navigate the transition.

Introduction

OpenAI's April 2026 publication, Industrial Policy for the Intelligence Age: Ideas to Keep People First, attempts to spark a national conversation about how society should prepare for a future shaped by superintelligent AI systems. The document is ambitious, proposing ideas that range from public wealth funds and robot taxes to treating AI as a public utility. It also warns of near-term risks such as AI-enabled cyberattacks and longer-term risks related to economic disruption and concentration of power.

The challenge is that many readers will encounter these ideas and feel an immediate disconnect. The proposals seem far ahead of where most organizations, policymakers, and communities actually are. The document describes a future that feels abstract, while offering few practical steps for how we get from today's reality to the world that would require such policies.

This gap is what I call the "missing middle." It is the set of governance, risk, and compliance foundations that must be built before any industrial policy conversation can be meaningful. Without this middle layer, the public will continue to see proposals like wealth funds or robot taxes as unrealistic or ideological rather than as responses to real risks that are already emerging.

My goal in this commentary is not to endorse or dissect OpenAI's policy proposals, but to:

  • examine the gap between the proposals and today's operational realities,
  • outline the foundational work that must occur before any industrial policy can be meaningfully implemented, and
  • clarify what leaders can do today to prepare their organizations for a rapidly changing environment.

At EMG Advisory, we help leaders tackle these challenges head-on with practical, strategic solutions that propel innovation and momentum rather than restrict them.

I. Where We Are Now: The Gap Between Acceleration and Understanding

OpenAI's document begins with a sweeping narrative about the transition toward superintelligence. It describes a world where AI systems outperform the smartest humans, reshape industries, and accelerate scientific discovery. While this vision may be directionally plausible, it is not the world most organizations are operating in today. Most companies are still experimenting with early-stage AI use cases, determining where AI creates real value, struggling with data quality, and trying to understand how to manage the risks.

This disconnect creates two problems.

First, the public hears terms like superintelligence and assumes the conversation is speculative. Without a clear explanation of the path from current systems to future capabilities, the proposals feel disconnected from reality.

Second, organizations underestimate the risks that already exist. OpenAI warns of AI-enabled cyberattacks, misuse by bad actors, and the possibility of misaligned systems evading human control. These risks are not theoretical. They are emerging now. But because the document jumps quickly to high-level policy ideas, it does not help readers understand the operational challenges that sit between today's systems and tomorrow's policy debates.

This is why many people react to the proposals with skepticism. They see the end state without understanding the transition. They see the policy ideas without appreciating the layered risks that motivate them. And they see the conversation without understanding their own role in shaping it.

II. The Missing Middle: What Must Happen Before Any Industrial Policy Makes Sense

The missing middle is the practical work that organizations and industry leaders must do to make any future industrial policy workable in practice. It is the work of governance, risk management, and operational readiness. It is the work that ensures AI systems are deployed responsibly, monitored effectively, and aligned with organizational values.

A. Governance as the Foundation

AI governance is not the same as AI compliance. Compliance is about meeting minimum requirements. Governance is about building systems that support responsible innovation. It includes, to name just a few elements:

  • Clear accountability structures
  • Documented model inventories
  • Defined risk thresholds
  • Transparent decision-making processes
  • Escalation paths

These are key elements that allow organizations to scale AI safely. They are also the elements that reduce the need for heavy-handed regulation later. When companies demonstrate responsible behavior, policymakers have more confidence in industry-led solutions.
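
To make "documented model inventories" concrete, here is a minimal sketch of what a single inventory entry might capture, written in Python purely for illustration. The fields and values are hypothetical placeholders, not a prescribed schema; each organization should adapt them to its own risk taxonomy and thresholds.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelRecord:
        """One entry in a documented model inventory (illustrative fields only)."""
        name: str                  # internal identifier for the system
        owner: str                 # accountable business owner, not just the builder
        source: str                # "vendor" or "internal"
        use_case: str              # the business decision or workflow it supports
        risk_tier: str             # e.g. "low" / "medium" / "high", per defined thresholds
        data_classification: str   # sensitivity of the data the model touches
        last_reviewed: date        # when governance last re-assessed this entry
        escalation_contact: str    # who is notified when the model misbehaves

    inventory: list[ModelRecord] = [
        ModelRecord(
            name="claims-triage-assistant",
            owner="VP, Claims Operations",
            source="vendor",
            use_case="Prioritize incoming claims for human review",
            risk_tier="medium",
            data_classification="confidential",
            last_reviewed=date(2026, 1, 15),
            escalation_contact="ai-governance@example.com",
        ),
    ]

Even a lightweight record like this gives accountability structures, risk thresholds, and escalation paths something concrete to attach to.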

My focus is helping organizations build governance systems that are credible, practical, and aligned with business strategy and objectives.

Governance is not bureaucracy or more red tape. It is a competitive advantage.

B. Translating Risks Into Operational Realities

OpenAI's document highlights real risks. But it does not explain how those risks translate into day-to-day operations. Organizations need to understand what these risks look like in practice. For example:

  • AI-enabled cyberattacks require new security controls
  • Model hallucinations require validation and monitoring
  • Workforce disruption requires training and adaptation
  • Concentration of power requires transparency and accountability
  • Misuse requires access controls and auditability

These are not abstract concerns. They are operational challenges that companies must address now. The missing middle is the set of practices that turn high-level risks into concrete actions.
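
As one example of turning these risks into controls, the sketch below wraps a model call with a simple audit log so that every invocation records who used which system and when. It is a hypothetical illustration in Python (the model call itself is a placeholder), not a reference implementation; real controls would also cover access rights, retention, and tamper resistance.

    import functools
    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit_log = logging.getLogger("ai_audit")

    def audited(model_name: str):
        """Log every call to a wrapped model function: who, which model, when."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(user_id: str, prompt: str, *args, **kwargs):
                audit_log.info(json.dumps({
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                    "model": model_name,
                    "user": user_id,
                    "prompt_chars": len(prompt),  # log size, not content, if prompts are sensitive
                }))
                return func(user_id, prompt, *args, **kwargs)
            return wrapper
        return decorator

    @audited(model_name="summarization-model")
    def summarize(user_id: str, prompt: str) -> str:
        # Placeholder for the real model call; the point is the audit trail around it.
        return "summary of: " + prompt[:40]

    summarize("jdoe", "Quarterly incident report text ...")

The same pattern extends naturally to access checks before the call and output validation after it.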

C. Industry Leadership Before Government Intervention

If organizations do not build strong governance systems, policymakers will eventually step in with more aggressive measures. This is not a political statement. It is a predictable pattern in every industry where risk outpaces oversight.

The pro-business perspective is simple. Companies that invest in governance today will shape the regulatory environment tomorrow. They will have more flexibility, more credibility, and more influence. They will also be better positioned to innovate responsibly.

III. Preparing for the Future: What Leaders Should Do Now

OpenAI's document raises important questions. It highlights risks that deserve attention. But it does not provide the practical guidance that leaders need today. The missing middle is where organizations can take meaningful action.

A. What OpenAI Gets Right and What It Leaves Unanswered

OpenAI is right to raise concerns about disruption, safety, and concentration of power. These are real issues. But the document leaves several questions unanswered:

  • How do organizations prepare for these risks?
  • What governance structures are needed?
  • How do we transition from current systems to future capabilities?
  • What responsibilities do companies have before policymakers act?

These are the questions that matter most to leaders today.

B. Practical Steps Organizations Can Take Now

While there are many items to weigh, every organization in the AI value chain should address the components listed below. The level of rigor an organization applies to AI risk management and governance should be proportionate to the risks created by its activities and the role it plays in the value chain. What is sufficient for one organization may be inadequate for another. Likewise, not all AI use cases within a single organization require the same level of oversight. Solutions and frameworks must be right-sized based on factors such as the nature of the system, the potential impact, and the sensitivity or classification of the data involved.

  • Develop an AI strategy that supports the organization's overall strategy (just because you can automate doesn't mean you should)
  • Establish cross-functional roles, responsibilities, and oversight
  • Determine the right mix of technology and talent by ensuring the organization retains the skills and capacity needed to stay resilient as AI adoption accelerates
  • Implement AI operational controls and processes that are integrated with and complementary to existing risk, compliance, information security, and privacy functions; examples include an AI-focused policy, asset inventory, risk taxonomy, risk assessment, incident response plan, and monitoring approach
  • Invest in workforce readiness and training on acceptable use
  • Build documentation such as use cases, risk appetite, and key decisions

These steps are not about slowing innovation, but about enabling organizations to move faster without losing control. When implemented well, they position the organization to scale with clarity, stability, and confidence.
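
To illustrate the right-sizing point above, the sketch below shows one hypothetical way to translate impact, data sensitivity, and autonomy into an oversight tier. The factors, weights, and tier definitions are placeholders chosen for illustration; each organization should calibrate its own.

    def oversight_tier(impact: str, data_classification: str, autonomous: bool) -> str:
        """Assign a governance tier to an AI use case (illustrative heuristic only).

        impact: "low", "medium", or "high" consequence if the system is wrong
        data_classification: "public", "internal", or "confidential"
        autonomous: True if the system acts without a human reviewing each output
        """
        score = {"low": 0, "medium": 1, "high": 2}[impact]
        score += {"public": 0, "internal": 1, "confidential": 2}[data_classification]
        score += 1 if autonomous else 0

        if score >= 4:
            return "Tier 1: full risk assessment, human-in-the-loop, ongoing monitoring"
        if score >= 2:
            return "Tier 2: documented review and periodic monitoring"
        return "Tier 3: lightweight registration in the model inventory"

    # A customer-facing system touching confidential data and acting autonomously
    print(oversight_tier("high", "confidential", autonomous=True))   # Tier 1
    print(oversight_tier("low", "public", autonomous=False))         # Tier 3

The point is not the specific formula but the discipline: oversight effort scales with the risk the use case actually creates.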

C. Why This Matters for the Broader Policy Conversation

Strong governance systems create the conditions for reasonable policy debates. When organizations demonstrate responsible behavior, policymakers can focus on targeted interventions rather than sweeping proposals. When companies build the missing middle, the public can better understand why certain policies may be necessary in the future.

Conclusion: Building the Bridge Between Today and Tomorrow

OpenAI's Industrial Policy for the Intelligence Age is an important contribution to the national conversation about AI, but it is not a roadmap. It is a starting point. The real work lies in the missing middle. It lies in the governance systems, risk management practices, and operational foundations that organizations must build today.

Leaders do not need to agree with OpenAI's proposals to recognize the importance of the underlying risks. They do not need to endorse wealth funds or robot taxes to understand that AI will reshape industries and challenge existing systems. What they need is a clear path forward: a way to move from where we are to where we need to be.

That path begins with governance, with responsible scaling, and with organizations taking ownership of their role in shaping the future. This is the work we support at EMG Advisory. It is the work that will determine whether AI becomes a source of opportunity or instability.

The missing middle is not a policy debate. It is a leadership challenge. And it is one that every organization must take seriously.

Andrea Elliott is the Founder & Managing Partner of EMG Advisory.
To discuss your organization's AI governance posture, request a meeting.