Part VI

Governance of Distributed Power

The unresolvable tension between broad access and catastrophic misuse, and the search for managed openness in a world without consensus.

The third essay in this series argued that distributing capability, not just products, is essential to a healthy AI transition. People who have tools feel agency. People who have only services feel dependency. Open access drives innovation, prevents monopolistic control, and ensures that the benefits of AI are not captured by a small number of corporations or governments. This argument is sound. It is also incomplete.

Distributed capability means distributed destructive power. The same open source model that enables a teenager in Lagos to build a startup enables a different actor to design a bioweapon. The same robotics platform that empowers a community workshop can be adapted into a weapons system. The same AI that democratizes education democratizes disinformation. You cannot distribute power selectively. Capability is capability. The tension between these two truths has no clean resolution, and honesty about that is the necessary starting point for any serious discussion of governance.

Why AI Is Qualitatively Different

Previous technologies posed dual-use dilemmas, but AI is qualitatively different in three ways that make governance harder than anything that has come before.

First, the skill barrier is collapsing. Previously, dangerous capabilities required extensive specialized training. Building a functional explosive required chemistry or engineering knowledge. Synthesizing a pathogen required years of laboratory experience. Conducting sophisticated cyber attacks required deep technical expertise. AI lowers these barriers by encoding expert knowledge in accessible tools. The gap between intention and capability is shrinking rapidly. A motivated individual with access to the right model can now accomplish what previously required a team, a laboratory, or a state-level program.

Second, the scale is uncapped. A single person operating a disinformation model can generate more persuasive content than an entire twentieth-century propaganda ministry. A single autonomous weapons system can make targeting decisions faster than any human chain of command. The leverage ratio between individual capability and societal impact has never been this large. Previous destructive technologies had inherent scale limits. Manufacturing a bomb required materials that could be controlled. Distributing propaganda required printing presses or broadcast infrastructure that could be regulated. AI-enabled capabilities are software, replicable at near-zero marginal cost.

Third, the speed of iteration defeats traditional regulatory approaches. By the time a regulatory body identifies a novel AI capability as a threat, drafts rules, solicits public comment, revises, and implements policy, the technology has advanced several generations beyond what the regulation addresses. This is not a complaint about bureaucratic inefficiency. It is a structural mismatch between the temporal rhythms of governance and the temporal rhythms of technological development. Governance processes that require years cannot effectively regulate technologies that transform in months.
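
To make the mismatch concrete, here is a back-of-envelope calculation with assumed numbers: a multi-year rulemaking cycle set against a capability cycle measured in months. Both figures are illustrative choices, not empirical claims.

```python
# Back-of-envelope sketch of the pace mismatch. Both inputs are
# assumptions chosen for illustration, not measured values.
rulemaking_months = 36       # assumed: threat identified -> rule implemented
capability_cycle_months = 9  # assumed: one model generation

generations_behind = rulemaking_months / capability_cycle_months
print(f"By the time it takes effect, the rule targets a capability "
      f"~{generations_behind:.0f} generations out of date.")
```

Under these assumptions, any rule arrives roughly four generations late, and nothing about faster drafting alone closes the gap.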

The Binary Trap

Current public discourse treats the governance of AI capability as a binary choice between open and closed. Both positions are internally coherent and practically catastrophic.

Figure 7 — The Binary Trap: Both Extremes Fail

Close it down: reduces distributed risk, but concentrates power dangerously, creates dependency rather than agency, ignores that knowledge diffuses anyway, and assumes compliance from nations that will not give it. A temporary dam, not a solution.

Open everything: distributes power broadly and drives innovation, but enables catastrophic misuse, offers no accountability mechanism, and ignores tail risks entirely. Not a governance framework.

The "close it down" position argues that advanced AI models are too dangerous to release openly. Concentrate development in a small number of well-resourced laboratories subject to government oversight. Restrict access to model weights and training data. Treat frontier AI the way we treat nuclear materials: as inherently dangerous capabilities that require strict control. This approach solves the distributed risk problem but creates a concentration problem that is arguably worse. A world in which AI capability is monopolized by three corporations and two governments is a world optimized for exactly the kind of power asymmetry that makes the Mad Max scenario more likely. It also fails practically. Knowledge diffuses. Models are leaked or reverse-engineered. Open source alternatives emerge from jurisdictions that do not share the restriction philosophy. Closure is a temporary barrier, not a durable solution.

The "open everything" position argues that broad access is a fundamental right, that concentration is the real existential risk, and that the benefits of openness vastly outweigh the costs of misuse. Innovation thrives on open ecosystems. Monopoly power is the historically validated danger. Trying to control information has never worked. These arguments are compelling on their own terms but refuse to grapple seriously with tail risks. "Some people will misuse it" is not an acceptable shrug when the misuse in question includes engineered pandemics, automated mass casualty attacks, or disinformation campaigns capable of destabilizing democracies. The phrase "information wants to be free" is a slogan, not a governance framework.

Toward Managed Openness

What is needed lies between the two poles: a model for managed openness that preserves broad access while reducing the probability and severity of catastrophic misuse. Several approaches are emerging, none of them mature.

Tiered Access

Base models are open. Fine-tuning for high-risk domains (biosynthesis, weapons design, vulnerability exploitation) requires verification and monitoring.

Conceptually sound but technically difficult. The boundary between "helpful chemistry assistant" and "bioweapons advisor" is not a clean line in model behavior.
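
As a sketch of why the boundary is fuzzy rather than clean, consider a hypothetical gate built on a misuse-risk classifier. Everything here is assumed: the classifier, the thresholds, the tier names, and the example scores. The point is that any fixed cutoff on a continuous risk score will misroute requests near the boundary.

```python
# Hypothetical tiered-access gate. The risk classifier, thresholds, and
# tier labels are all assumptions for illustration; no real deployment
# pipeline is being described.

def route_request(risk_score: float) -> str:
    """Map a continuous misuse-risk score in [0, 1] to an access tier."""
    if risk_score < 0.3:
        return "open"        # base-model access, no friction
    if risk_score < 0.7:
        return "verified"    # identity verification plus monitoring
    return "restricted"      # case-by-case review required

# The problem in miniature: legitimate and dangerous requests can score
# nearly identically, so any fixed cutoff yields both false positives
# (blocked chemists) and false negatives (waved-through bad actors).
examples = {
    "synthesis route for a common pharmaceutical": 0.29,
    "synthesis route for a controlled precursor": 0.31,
}
for request, score in examples.items():
    print(f"{score:.2f} -> {route_request(score):10s} {request}")
```

Tightening the thresholds shifts errors from one side of the line to the other; it does not make the line clean.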

Licensing

Require registration and accountability for deploying models above certain capability thresholds, analogous to regulatory regimes for radioactive materials or controlled chemicals.

Works within nations but fails across borders. A licensing regime that covers the US and EU but leaves out the rest of the world is a leaky system.
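
A minimal sketch of the leak, assuming a compute-based licensing trigger of the kind floated in recent policy proposals. The threshold value and the set of participating jurisdictions below are illustrative assumptions, not references to any actual statute.

```python
# Hypothetical compute-threshold licensing check. The threshold and the
# set of participating jurisdictions are assumptions for illustration.

LICENSE_THRESHOLD_FLOPS = 1e26   # assumed trigger, not a real statute
PARTICIPATING = {"US", "EU"}     # assumed regime membership

def licensing_status(training_flops: float, jurisdiction: str) -> str:
    if training_flops < LICENSE_THRESHOLD_FLOPS:
        return "below threshold: no license required"
    if jurisdiction in PARTICIPATING:
        return "above threshold: registration and accountability apply"
    # The leak: an above-threshold model trained and served from a
    # non-participating jurisdiction sits entirely outside the regime.
    return "above threshold, outside regime: unregulated"

print(licensing_status(3e26, "US"))   # covered by the regime
print(licensing_status(3e26, "XX"))   # the leak
```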

Norm-Based Governance

Build shared professional norms among developers, deployers, and users, similar to how the biology community has handled dual-use research through institutional review boards and community self-policing.

Slow and dependent on good faith, but scales across borders better than law.

The most realistic answer probably involves all three approaches in combination, plus mechanisms that have not yet been invented. This is not a satisfying conclusion. It reflects the genuine state of the problem: we are in the early stages of a governance challenge that will take decades to resolve, and the technology is not waiting for governance to catch up.

The Deepest Problem

Underneath the technical and institutional challenges lies something more fundamental. Governance requires consensus about values. Effective regulation requires broad agreement about what a technology should and should not be used for, who bears responsibility for misuse, and what levels of risk are acceptable. No such consensus exists at the global level for AI.

The United States, China, the European Union, and the developing world hold fundamentally different positions on the appropriate role of AI in society, the balance between innovation and regulation, the relationship between individual rights and collective security, and the acceptable degree of surveillance and control. A governance framework that works within the EU's regulatory philosophy may be incompatible with China's industrial policy. An approach that satisfies American market liberalism may be unacceptable to developing nations seeking technological sovereignty.

The technology is global. Models trained in one jurisdiction can be deployed anywhere. Research published in one country is available to all. The governance challenge requires international coordination, but the coordination requires value alignment that does not exist and cannot be imposed.

The technology is global. The governance is local. That gap may be the single hardest problem in this entire discussion.

This connects directly to the pace asymmetry described in the previous essay. The governance challenge is not merely that regulation is slow. It is that the thing being governed is evolving faster than our ability to understand it, let alone control it. It also connects to the meaning crisis: populations that have lost trust and purpose are more susceptible to the destructive applications of distributed power. Disinformation is most effective against populations that have already lost faith in institutions. Radicalization finds its most fertile ground among people who have lost meaning. The three challenges are not separate problems. They are one problem with three faces.


The final essay in this series draws these three threads together into a single framework, showing how the pace asymmetry, the meaning crisis, and the governance of distributed power form an interconnected system whose dynamics will determine whether the future lies closer to Star Trek or Mad Max, and what adequate preparation for the transition would actually require.