Extreme power concentration

Why might AI-enabled power concentration be a pressing problem?

The main reasons we think AI-enabled power concentration is an especially pressing problem are:

  1. Historically unprecedented levels of automation could concentrate the power to get stuff done, by reducing the value of human labour, empowering small groups with big AI workforces, and potentially giving one AI developer a huge capabilities advantage (if automating AI development leads to runaway AI progress).
  2. This could lead to unprecedented concentration of political power. A small number of people could use a huge AI workforce to seize power over existing institutions, or render them obsolete by amassing enormous wealth.
  3. AI-enabled power concentration could cause enormous and lasting harm, by disempowering most people politically, and enabling large-scale abuses of power.
  4. There are ways to reduce this risk, but very few are working on them.

In this section we’ll go through each of these points in turn, but first we’ll give an illustrative scenario where power becomes extremely concentrated because of advanced AI. The scenario is very stylised and there are loads of other ways things could go, but it gives a more concrete sense of the kind of thing we’re worried about.

An AI-enabled power concentration scenario

Note that this scenario, and the companies and institutions in it, are made up. We’re trying to illustrate a hypothetical, and don’t have particular real-world actors in mind.

In 2029, a US AI company called Apex AI achieves a critical breakthrough: their AI can now conduct AI research as well as human scientists can. This triggers an intelligence explosion, where AI improving AI improving AI leads to very rapid capability gains. But their competitors — including in China — are close on their heels, and begin their own intelligence explosions within months. Fearing that China will soon be in a position to leverage its industrial base to overtake the US, the US government creates Project Fortress — consolidating all US AI development under a classified Oversight Council of government officials and lab executives. Apex leverages its early lead to secure three of the council’s nine seats and provides the council’s core infrastructure: security systems, data analytics, and AI advisors.

By 2032, AI companies generate the majority of federal tax revenue as AI systems automate traditional jobs. Unemployment rises. The Oversight Council now directs hundreds of millions of AI workers, controls most of the tax base, and makes the most important decisions about military AI procurement, infrastructure investment, and income redistribution. Only those with direct connections to the council or major AI companies have access to the most advanced AI tools, while most citizens interact with limited consumer versions. When the president proposes blocking Apex’s merger with Paradox AI (which would create a combined entity controlling 60% of the compute used to train and run US AI systems), council-generated economic models warn that China would overtake the US and the economy would collapse if the merger were blocked. The proposal dies quietly. The council’s AI systems — all running on Apex architecture — are subtly furthering Apex’s interests, but the technical traces are too faint for less advanced models to detect. Besides, most people are bought into beating China, and when they ask their personal AI advisors (usually less advanced versions of either Paradox or Apex models) about the merger, the advisors argue persuasively that it serves the national interest.

By 2035, the US economy has tripled while other nations have stagnated. Project Fortress’ decisions now shape global markets — which technologies get developed, which resources get allocated, which countries receive AI assistance. Apex and Paradox executives gradually cement their influence: their AI systems draft most proposals, their models evaluate the options, their security protocols determine what information reaches other council members. With all major information channels — from AI advisors to news analysis to government briefings — filtered through systems they control, it becomes nearly impossible for anyone to get an unbiased picture of the concentration of power taking place. Everything people read on social media or hear on the news seems to support the idea that there is nothing much to worry about.

The executives are powerful enough to unilaterally seize control of the council and dictate terms to other nations, but they don’t need to. Through thousands of subtle nudges — a risk assessment here, a strategic recommendation there — their AI systems ensure every major decision aligns with their vision for humanity’s future.

Automation could concentrate the power to get stuff done

We’ve always used technology to automate bits of human labour: water-powered mills replaced hand milling, the printing press replaced scribes, and the spinning jenny replaced hand spinning. This automation has had impacts on the distribution of power, some of them significant — the printing press helped shift power from the church towards city merchants; and factory machines shifted power from landowners to capitalists and towards industrialising countries.

The thing that’s different with AI is that it has the potential to automate many kinds of human labour at once. Top AI researchers think that there’s a 50% chance that AI can automate all human tasks by 2047 — though many people think this could happen much sooner (several AI company CEOs expect AGI in the next few years) or much later. Even if full automation of human labour takes a long time or never happens, it’s clear that AI could automate a large fraction of human labour — and given how fast capabilities are currently progressing, this might start happening soon.

This could have big implications for how power is distributed:

  • By default, less money will go to workers, and more money will go to the owners of capital. Automation could reduce the value of people’s labour, in extreme scenarios causing wages to collapse to very low levels indefinitely. This would increase how much of the pie goes to capital compared to labour, and those with capital could become even more disproportionately powerful than they are now.
  • Small groups will be able to do more. Right now, large undertakings require big human workforces. At its peak, the Manhattan Project employed 130,000 people. It takes 1.5 million people just to run Amazon. As AI becomes more capable, it’ll become possible to get big stuff done without large human teams — and the attendant need to convince them that what you’re doing is good or at least OK — by using AI workforces instead.
    • This would already empower small groups to do more. But the effect will be even stronger because using AI to get stuff done won’t empower everyone equally: it’ll especially empower those with access to the best AI systems. Companies already deploy some models without releasing them to the public, and if capabilities get more dangerous or the market becomes less competitive, access to the very best capabilities could become very limited indeed.
  • Runaway progress from automated AI development could give one developer a big capabilities advantage. The first project to automate AI R&D might trigger an intelligence explosion, where AI systems improving AI systems (which improve further AI systems) create a positive feedback loop, letting the leader’s capabilities rapidly pull ahead of everyone else’s. Competitors might follow with intelligence explosions of their own, but if they start far enough behind, or the leader’s initial boost in capabilities is big enough, one company might be able to entrench a lasting advantage (the toy simulation below illustrates how a head start like this could compound).
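
To make the compounding dynamic in the last bullet more tangible, here’s a minimal toy simulation (ours, and purely illustrative, not a forecast or anyone’s published model) of two developers, in which crossing an ‘automated AI R&D’ threshold makes a developer’s growth compound faster each month. Every number (the threshold, the growth rates, and the leader’s initial head start) is an assumption chosen only to show the shape of the dynamic:

```python
# Toy model of a capabilities lead compounding after automated AI R&D.
# All numbers are illustrative assumptions, not estimates.

THRESHOLD = 2.0      # capability level at which AI R&D becomes automated (assumed)
BASE_GROWTH = 0.05   # 5% capability growth per month before the threshold (assumed)
ACCEL = 0.04         # extra monthly growth per month spent 'exploding' (assumed)

def step(capability, months_exploding):
    """Advance one month: ordinary growth below the threshold,
    growth that accelerates month-on-month above it."""
    if capability >= THRESHOLD:
        rate = BASE_GROWTH + ACCEL * (months_exploding + 1)
        return capability * (1 + rate), months_exploding + 1
    return capability * (1 + BASE_GROWTH), 0

leader, laggard = 1.2, 1.0   # the leader starts ~20% ahead (assumed)
lead_months = lag_months = 0
for month in range(1, 37):
    leader, lead_months = step(leader, lead_months)
    laggard, lag_months = step(laggard, lag_months)
    if month % 6 == 0:
        print(f"month {month:2d}: leader {leader:9.1f}  laggard {laggard:9.1f}  "
              f"ratio {leader / laggard:5.2f}")
```

With different assumptions (say, a weak feedback loop, or a laggard close enough to cross the threshold almost simultaneously), the ratio stays near 1, which is part of why it’s so uncertain whether any developer ends up with a lasting advantage.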

If these dynamics are strong enough, we could end up with most of the power to earn money and get stuff done in the hands of the few organisations (either AI companies or governments) which have access to the best AI systems — and hence to huge amounts of intelligent labour which they can put to whatever ends they choose.

Furthermore, within these organisations, more and more employees may get replaced by AI systems, such that a very small number of people wield huge amounts of power.

It’s plausible that entry-level white collar jobs will be automated first. Organisations could become more top-heavy, with an expanded class of managers overseeing many AI agents.

There are many other ways this could go, and it’s not a foregone conclusion that AI will lead to this kind of power concentration. Perhaps we’ll see a stronger shift from expensive pre-training to more accessible inference scaling, and there will be a boom in the number of frontier companies, putting equally powerful AI in more hands. There might be no intelligence explosion, or it might fizzle quickly, allowing laggards to catch up. If commercial competition remains high, consumers will have access to smarter and smarter models, which could even out differences in capabilities between humans and push towards greater egalitarianism. AI might allow for much more direct democracy by making it easier to aggregate preferences, and for greater transparency. And so on (more on this below).

So there are forces pushing against power concentration, as well as forces pushing towards it. It’s certainly possible that society naturally adjusts to these changes and successfully defends against AI-enabled power concentration. But given the speed that AI progress might reach, there’s a real risk that we don’t have enough time to adapt.

This could lead to unprecedented concentration of political power

So we could end up in a situation where most of the power to earn money and get stuff done is in the hands of the few.

This power might be kept appropriately limited by existing institutions and laws, such that influence over important decisions about the future remains distributed. But it’s not hard to imagine that huge capabilities advantages for some actors and the erosion of the value of most human labour could undermine our current checks and balances, which were designed for much more even levels of capabilities in a world which runs on human labour.

But how would this actually happen? People who are powerful today will fight tooth and nail to retain their power, and just having really good AI doesn’t automatically put you in charge of key institutions.

We think that power could become extremely concentrated through some combination of:

  • AI-enabled power grabs, where actors use AI to seize control over existing institutions
  • Economic forces, which might make some actors so wealthy that they can easily influence or bypass existing institutions
  • Epistemic interference, where the ability of most people to understand what’s happening and coordinate in their own interests gets eroded

Experts we’ve talked to disagree about which of these dynamics is most important. While it might be possible for just one of these dynamics to lead all the way to AI-enabled power concentration, we’re especially worried about the dynamics in combination, as they could be mutually reinforcing:

  • Power grabs over leading companies or governments would make it easier to amass wealth and control information flows.
  • The more that wealth becomes concentrated, the easier it becomes for the richest to gain political influence and set themselves up for a power grab.
  • The more people’s ability to understand and coordinate in their own interests is compromised, the easier it becomes for powerful actors to amass wealth and grab power over institutions.

Below, we go into more detail on how each of these factors – power grabs, economic forces, and epistemic interference – could lead to AI-enabled power concentration, where a small number of people make all of the important decisions about the future.

AI-enabled power grabs

There are already contexts today where actors can use money, force, or other advantages to seize control of institutions — as demonstrated by periodic military coups and corporate takeovers worldwide. That said, there are limits to this: democracies sometimes backslide all the way to dictatorship, but it’s rare; and there are almost never coups in mature democracies.

Advanced AI could make power grabs possible even over very powerful and democratic institutions, by putting huge AI workforces in the hands of the few. This would fundamentally change the dynamic of power grabs: instead of needing large numbers of people to support and help orchestrate a power grab, it could become possible for a small group to seize power over a government or other powerful institution without any human assistance, using just AI workforces.

What would this actually look like though?

One pathway to an AI-enabled power grab over an entire government is an automated military coup, where an actor uses control over military AI systems to seize power over a country. There are several different ways an actor could end up with control over enough military AI systems to stage a coup:

  • Flawed command structure. Military AI systems might be explicitly trained to be loyal to a head of state or government official instead of to the rule of law. If systems were trained in this way, then the official who controlled them could use them however they wanted to, including to stage a coup.
  • Secret loyalties. As AI capabilities advance, it may become possible to make AI systems secretly loyal to a person or small group. Like human spies, these systems would appear to behave as intended, but secretly further other ends. Especially if one company has much more sophisticated AI than everyone else, and only a few actors have access to it, these secret loyalties might be very hard for external people to detect. So subsequent generations of AIs deployed in government and the military might also be secretly loyal, and could be used to stage a coup — either by AI company leaders or foreign adversaries, or by parts of the government or military.
  • Hacking. If one company or country has a strong advantage in cyber offence, they could hack into many military AI systems at once, and either disable them or use them to actively stage a coup.

AI systems could propagate secret loyalties forwards into future generations of systems until secretly loyal AI systems are deployed in powerful institutions like the military.

These scenarios may sound far-fetched. Militaries will hopefully be cautious about deploying autonomous military systems, and require appropriate safeguards to prevent these kinds of misuse. But competition or great power conflict might drive rushed deployment, and secret loyalties could be hard to detect even with rigorous testing. And it might only take a small force to successfully stage a coup, especially if they have AI to help them (there are several historical examples of a few battalions successfully seizing power even without a technological advantage, by persuading other forces not to intervene).

Outside military coups, another potential route to an AI-enabled power grab is overwhelming cognitive advantage, where an actor has such a huge advantage in skilled AI labour that it can directly overpower a country or even the rest of the world. With a very large cognitive advantage, it might be possible to seize power by using superhuman strategy and persuasion to convince others to cede power, or by rapidly building up a secret military force. This is even more sci-fi, but some people think it could happen if there’s a big enough intelligence explosion.

An AI-enabled power grab — whether via an automated military coup or via overwhelming cognitive advantage — wouldn’t automatically constitute AI-enabled power concentration as we’ve defined it. There’s no single institution today which makes all of the important decisions — not even the most powerful government in the world. So there might still be a long path between ‘successful power grab over one institution’ and ‘making all of the important decisions about what happens in the future’. But a power grab could be a very important incremental step on the way to a small number of people ending up with the power to make all of the important decisions about the future — or if power had already become very concentrated, a power grab could be the final step.

Economic forces

There are several different ways that a small group could become wealthy enough to effectively concentrate power, in extreme cases making existing institutions irrelevant:

  • Eroding the incentives for governments to represent their people, by making the electorate economically irrelevant. Of course, the mission of governments in democracies is to represent and serve the interests of their citizens. But currently, governments also have direct economic incentives to do so: happier and healthier people make more productive workers, and pay more taxes (plus they’re less likely to rebel). If this link were broken by automation, and AI companies provided the vast majority of government revenues, governments would no longer have this self-interested reason to promote the interests of their people.
    • There might still be elections in democracies, but very fast rates of progress could make election cycles so slow that they don’t have much influence, and misinformation and lobbying could further distort voting. In scenarios like this, there might still be governments, but they’d no longer serve the functions that they currently do, and instead would mostly cater to the interests of huge AI companies.
  • Outgrowing the world, where a country or company becomes much richer than the rest of the world combined. An intelligence explosion of the kind discussed above could grant the leading AI developer a (maybe temporary) monopoly on AI, which could allow them to make trillions of dollars a year, and design and build powerful new technologies. Naively, if that actor could maintain its monopoly and grow at a faster rate than the rest of the world for long enough, it would end up with >99% of resources (see the rough calculation after this list). There are lots of complications here which make outgrowing the world less likely, but it still seems possible that an actor could do this with a very concerted and well-coordinated effort if they had privileged access to the most powerful technology in the world. Today’s institutions might continue to exist, but it’s not clear that they would be able to enact important decisions that the company or country didn’t like.
  • First mover advantages in outer space, where the leader in AI leverages their advantage to claim control over space resources. If AI enables rapid technological progress, the leader in AI might be the first actor to develop advanced space capabilities. They could potentially claim vast resources beyond Earth — and if space resources turn out to be defensible, they could maintain control indefinitely. It’s not clear that such first mover advantages actually exist, but if they do, the first mover in space would be able to make unilateral decisions about humanity’s expansion into the universe — decisions that could matter enormously for our long-term future.
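
As a rough illustration of the ‘outgrowing the world’ arithmetic above, here’s a back-of-the-envelope calculation (our toy numbers, not a forecast): if an actor starts with about 1% of world output and sustains a much higher growth rate than everyone else, compounding alone eventually gives it almost all of the world’s resources. Both the growth rates and the starting shares are assumptions:

```python
# How long would a faster-growing actor take to hold >99% of global output,
# if the growth gap persisted? All inputs are illustrative assumptions.

def years_to_share(actor_output, world_output, actor_growth, world_growth, target=0.99):
    years = 0
    while actor_output / (actor_output + world_output) < target:
        actor_output *= 1 + actor_growth
        world_output *= 1 + world_growth
        years += 1
    return years

# An actor with ~1% of world output growing 30%/year, while the rest of the
# world grows 3%/year, passes the 99% mark in roughly 40 years.
print(years_to_share(actor_output=1, world_output=100,
                     actor_growth=0.30, world_growth=0.03))
```

Of course, as the bullet above notes, there are many complications (the monopoly eroding, other actors’ growth speeding up too) that could stop the compounding long before that point.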

All of these routes are quite speculative, but if we don’t take steps to prevent them, it does seem plausible that economic forces could lead to one country or company having much more political power than everyone else combined. If that actor were very centralised already (like an autocratic government or a company where most employees had been automated), or if there were later a power grab that consolidated power in the hands of a small group, this could lead to all important decisions about the future being made by a handful of individuals.

Epistemic interference

Power grabs and economic forces that undermine existing institutions would be bad for most people, so it would be in their interests to coordinate to stop these dynamics. But the flip side of this is that it’s in the interests of those trying to amass power to interfere with people’s ability to understand what’s happening and coordinate to stop further power concentration.

This is the least well-studied of the three dynamics we’ve pointed to, but we think it could be very important. Tentatively, here are a few different factors that could erode the epistemic environment, some of which involve deliberate interference and some of which are emergent dynamics which favour the few:

  • Lack of transparency. Powerful actors in AI companies and governments will have incentives to obfuscate their activities, particularly if they are seeking power for themselves. It might also prove technically difficult to share information on AI capabilities and how they are being used, without leaking sensitive information. The more AI development is happening in secret, the harder it is for most people to oppose steps that would lead to further power concentration.
  • Speed of AI progress. Things might be shifting so quickly that it’s hard for any humans to keep up. This would advantage people who have access to the best AI systems and the largest amounts of compute: they might be the only ones who are able to leverage AI to understand the situation and act to promote their own interests.
  • Biased AI advisors. As AI advice improves and the pace of change accelerates, people may become more and more dependent on AI systems for making sense of the world. But these systems might give advice which is subtly biased in favour of the companies that built them — either because they’ve been deliberately trained to, or because no one thought carefully about how the systems’ training environments could skew them in this direction. If AI systems end up favouring company interests, this could systematically bias people’s beliefs and actions towards things which help with further power concentration.
  • Persuasion and manipulation campaigns. Those with access to superior AI capabilities and compute could deliberately interfere with other people’s ability to limit their power, by conducting AI-powered lobbying campaigns or manipulating individual decision makers. For example, AI could make unprecedentedly intensive and personalised efforts to influence each individual congressperson to gain their support on some policy issue, including offers of money and superhuman AI assistance for their reelection campaigns. It’s not yet clear how powerful these techniques will be (maybe humans’ epistemic defences are already quite good and AI won’t improve much on what humans can already do), but if we’re unlucky this could severely impair society’s ability to notice and respond to power-seeking.

That list of factors might be missing important things and including things that are not really going to be problems — again, the area is understudied. But we’re including it to give a more concrete sense of how AI might erode (or be used to erode) the epistemic environment, making it harder for people to realise what’s happening and resist further power concentration. Epistemic interference in isolation probably won’t lead to extreme AI-enabled power concentration, but it could be a contributing factor.

AI-enabled power concentration could cause enormous and lasting harm

In a commonsense way, handing the keys of the future to a handful of people seems clearly wrong, and it’s something that most people would be strongly opposed to. We put a fair bit of weight on this intuitive case.

We also put some weight on specific arguments for ways in which AI-enabled power concentration would be extremely harmful, though the reasoning here feels more brittle:

  • It could lead to tyranny. Democracy usually stops small groups of extremists from taking the reins of government and using them to commit mass atrocities against their peoples, by requiring that a large chunk of the population supports the general direction of the government. If power became extremely concentrated, a small group could commit atrocities that most people would be appalled by. Many of the worst atrocities in human history were perpetrated by a small number of people who had unchecked power over their people (think of the Khmer Rouge murdering a quarter of all Cambodians between 1975 and 1979). We can think of two main ways that AI-enabled power concentration could lead to tyranny:
    • Malevolent — or just extremely selfish — humans could end up in power. Particularly for scenarios where power gets concentrated through AI-enabled power grabs, it seems quite likely that the sorts of humans who are willing to seize power will have other bad traits. They might actively want to cause harm.
    • Power corrupts. Even if those in power start out with good intentions, they’d have no incentive to continue to promote the interests of most people if their power were secure. Whenever other people’s interests became inconvenient, there would be a strong temptation to backtrack, and no repercussions to doing so.
  • It could lead us to miss out on really good futures. AI-enabled power concentration might not lead to tyranny in the most egregious sense: we might somehow end up with a benevolent dictator or an enlightened caste of powerful actors who keep an eye out for the rest of us. But even in this case, the future might be much less good than it could have been, because there’d be:
    • Injustice and disempowerment. AI-enabled power concentration would disempower the vast majority of people politically. From some philosophical perspectives, justice and political empowerment are intrinsically valuable, so this would make the future much less good.
    • Less diversity of values and ways of life. A narrower set of people in power means that a narrower set of values and preferences gets carried forward into the future. Again, from many perspectives this kind of diversity is intrinsically valuable.
    • Less moral reflection (maybe). Making good decisions about the future might require thinking deeply about what we value and what we owe to others. If power over the future is distributed, there’s a good chance that at least some people choose to reflect in this way — and there will be more disagreement and experimentation, which could prompt others to reflect too. But if power is extremely concentrated, those in charge might simply impose their current worldview without ever questioning it. This could lead to irreversible mistakes: imagine if the Victorians’ or the Romans’ moral blindspots had become permanent policy. If those in power happen to care about figuring out what’s right, power concentration could also lead to more moral reflection than would happen in a default world — but it would be limited to a narrow set of experiences and perspectives, and might miss important insights that emerge from broader human dialogue.

Extreme AI-enabled power concentration would also probably be hard to reverse, making any harms very long-lasting. As is already the case, the powerful will try to hold onto their power. But AI could make it possible to do this in an extremely long-lasting way that hasn’t been possible historically:

  • Even if most people opposed an AI-powered regime, they might have even less power than historically disenfranchised groups have had to overturn it. If all economic and military activity is automated, humans won’t have valuable labour to withhold or compelling force to exert, so strikes and uprisings won’t have any bite.
  • Human dictators die, but a government run by AI systems could potentially preserve the values of a dictator or other human leader permanently into the future.
  • If power becomes so concentrated there’s just one global hegemon, then there won’t be any external threats to the regime.

These harms need to be weighed against the potential benefits from AI-enabled power concentration, like reducing competitive dynamics. We’re not certain how all of this will go down, but both our intuitions and the analysis above suggest that AI-enabled power concentration poses serious risks to human flourishing that we should work to avoid.

There are ways to reduce this risk, but very few are working on them

Many people are working to prevent more moderate forms of power concentration. Considered broadly, a lot of the work that happens in governments, the legal system, and many parts of academia and civil society contributes to this.

But very few are focused on the risk of extreme power concentration driven by AI — even though, if the above arguments are right, this is a very serious risk. We’re aware of a few dozen people at a handful of organisations who are working on reducing this risk, and even fewer who work on this full time. As of September 2025, the only public grantmaking round we know of on AI-enabled power concentration is a $4 million grant programme (though there’s more funding available privately).

This is in spite of the fact that there are concrete things we could do now to reduce the risk. For example, we could:

  • Work on technical solutions to prevent people misusing massive AI workforces, like:
    • Training AI to follow the law
    • Red-teaming model specs (documents that AI systems are trained to follow which specify how they should behave) to make sure AIs are trained not to help with power grabs
    • Auditing models to check for secret loyalties
    • Increasing lab infosecurity to prevent tampering with the development process and unauthorised access, which would make it harder to insert secret loyalties or misuse AI systems
  • Develop and advocate for policies which distribute power over AI, like:
    • Designing the terms of contracts between labs and governments to make sure no one actor has too much influence
    • Sharing access to the best AI capabilities widely whenever this is safe, and with multiple trusted actors like Congress and auditors when it isn’t, so that no actor has much more powerful capabilities than everyone else
    • Building datacentres in non-US democracies, to distribute the power to run AI systems amongst more actors
    • Mandating transparency into AI capabilities, how they are being used, model specs, safeguards and risk assessments, so it’s easier to spot concerning behaviour
    • Introducing more robust whistleblower protections to make it harder for insiders to conspire or for company executives to suppress the concerns of their workforces
    • All of the technical solutions above
  • Build and deploy AI tools that improve people’s ability to reason and coordinate, so they can resist epistemic interference

To be clear, thinking about how to prevent AI-enabled power concentration is still at a very early stage. Not everyone currently working on this would support all of the interventions in that list, and it’s not clear how much of the problem would be solved even if we implemented the whole list. It might be that the structural forces pushing towards AI-enabled power concentration are too strong to stop.

But it certainly doesn’t seem inevitable that power will become extremely concentrated:

  • It’s in almost everyone’s interests to prevent AI-enabled power concentration — including the interests of most powerful people today, since they have a lot to lose if they get out-competed.
  • It’s promising that we can already list some concrete, plausibly achievable interventions even though thinking about how to solve the problem is so early stage.

There’s a lot more work to be done here than there are people doing the work.

What are the top arguments against working on this problem?

We’ve touched on these arguments in other places in this article, but we’ve brought them all together here so it’s easier to see what the weakest points are in the argument for prioritising AI-enabled concentration of power, and to go into a bit more depth.

AI-enabled power concentration could reduce other risks from AI

Some forms of power concentration could reduce various other risks from AI:

  • If there were no competition in AI development, the sole AI developer wouldn’t have competitive pressures to skimp on safety, which might reduce the risk of AI takeover. These competitive pressures are a major reason to worry that AI companies will race ahead without taking adequate AI safety precautions.
  • The risk of great power war would fall away if power became entirely concentrated in one country.
  • The risk of catastrophic misuse of bioweapons and other dangerous technologies would be much lower if only one actor had access to dangerous capabilities. The fact that AI could democratise access to extremely dangerous technology like bioweapons is one of the major reasons for concern about misuse.

That said:

  • There are other ways to manage those risks. It’s not the case that either we have a benevolent dictatorship, or we suffer existential catastrophe from other AI risks. Some combination of domestic regulation, international coordination, technical progress on alignment and control, and AI tools for epistemic security could allow us to navigate all of these risks.
  • The prospect of AI-enabled power concentration could also exacerbate other risks from AI. It’s one thing to imagine a world where power is already extremely concentrated. But the process of getting to that world might drastically increase the stakes of competition, and make powerful actors more willing to make risky bets and take adversarial actions, to avoid losing out.
  • Many interventions to reduce AI-enabled power concentration also help reduce other risks. There isn’t always a trade-off in practice. For example, alignment audits help reduce the risk of both power concentration and AI takeover, by making it harder for both humans and AIs to tamper with AI systems’ objectives. And sharing capabilities more widely could both reduce power differentials and allow society to deploy AI defensively: if we can safeguard AI models sufficiently, this needn’t increase risks from catastrophic misuse.

Weighing up these risks is complicated, and we’re not claiming there aren’t tradeoffs here. We currently think it isn’t clear whether the effects of AI-enabled power concentration net out as helpful or harmful for other AI risks. Given that power concentration is an important and neglected problem in its own right, we think it’s still very worth working on. (But we would encourage people working on AI-enabled concentration of power to keep in mind that their actions might influence these other issues, and try to avoid making them worse.)

The future might still be all right, even if there’s AI-enabled power concentration

For the reasons we went into above, we think extremely concentrated power is likely to be bad. But even if you agree, there are some reasons to think a future with AI-enabled power concentration could still turn out all right on some metrics:

  • Material abundance: AI might generate such enormous wealth that most people live in material conditions that are far better than those of the very richest today. In a world with AI-enabled power concentration, people would be politically disempowered, but if the powerful chose to allow it, they could still be materially well-off.
  • Reduced incentives for repression and brutality: part of why autocracies repress their peoples is that their leaders are trying to shore up their own power. If power became so concentrated that leaders were guaranteed to remain in power forever, there’d no longer be rational incentives to do things like restrict freedom of speech or torture dissidents (but there’d still be irrational ones, like spite or fanatical ideologies).
  • Selection effects: while perhaps not likely, it’s possible that the people who end up in power would genuinely want to improve the world. Maybe getting into such a powerful position selects for people who are unusually competent, and maybe they assumed power reluctantly because people were racing to develop unsafe AI, and power concentration seemed like the lesser of two evils.

Again, we don’t find these arguments particularly compelling, but believe they’re plausible enough to be worth considering and weighing.

Efforts to reduce AI-enabled power concentration could backfire

AI-enabled power concentration is a spicy topic, and efforts to prevent it could easily backfire. The more salient the risk of AI-enabled power concentration is, the more salient it is to power-seeking actors. Working to reduce AI-enabled power concentration could:

  • Galvanise opposition to interventions by those who stand to gain from power concentration.
  • Directly give power-seeking actors ideas, by generating and publicising information on how small groups could end up with large amounts of power.
  • Trigger a scramble for power. If everyone thinks that everyone else is trying to consolidate their power, they might be more likely to try to seize power for themselves to preempt this.

Some interventions might also reduce the probability that one actor ends up with too much power, but by increasing the probability that another actor does. For example, increasing government oversight over AI companies might make company power grabs harder, but simultaneously make it easier for government officials to orchestrate a power grab.

We do think that preventing AI-enabled power concentration is a bit of a minefield, and that’s part of why we think that for now, most people should be bearing the risk in mind rather than working on it directly. But there are ways of making this work less likely to backfire, like:

  • Being thoughtful and aware of backfire risks. If you don’t think you have good judgement on this sort of thing (or wouldn’t have anyone with good judgement to give you feedback), it’s probably best to work on something else.
  • Using frames and language which are less adversarial. For example, ‘power grabs’ seems spicier than ‘power concentration’ as a framing.
  • Focusing on kinds of work that are hard for power-seeking actors to misuse. For example, developing and implementing mitigations like transparency measures or alignment audits is harder for a power-seeking actor to make use of than detailed threat-modelling.

Power might remain distributed by default

Above, we argue that power could become extremely concentrated. But this isn’t inevitable, and the arguments may turn out to be wrong. For example:

  • AI capabilities might just not get that powerful. Maybe the ceiling on important capabilities like persuasion or AI R&D is quite low, so the effects of AI are less transformative across the board.
    • A particularly important variant of this is that maybe self-reinforcing dynamics from automating AI R&D will be weak, in which case there might be no intelligence explosion or only a small one. This would mean that no single AI developer would be able to get and maintain a big capabilities lead over other developers.
  • The default regulatory response (and the institutional setup in places like the US) might be enough to redistribute gains from automation and prevent misuse of big AI workforces. People with power today — which in democracies includes the electorate, civil society, and the media — will try very hard to maintain their own power against newcomers if they are able to tell what’s going on, and most people stand to lose from AI-enabled power concentration.
  • If people are worried that AI is misaligned, meaning that it doesn’t reliably pursue the goals that its users or makers want it to, this could both reduce the economic impacts of AI (because there’d be less deployment), and make power-seeking individuals less willing to use AI to attempt power grabs (because the AI might turn on them).

We think that the probability that power becomes extremely concentrated is high enough to be very concerning. But we agree that it’s far from guaranteed.

It might be too hard to stop AI-enabled power concentration

On the flip side, it might turn out that AI-enabled power concentration is not worth working on because it is too difficult to stop:

  • The structural forces pushing towards AI-enabled power concentration could be very strong. For example, if there’s an enormous intelligence explosion which grants one AI developer exclusive access to godlike AI capabilities, then what happens next would arguably be at their sole discretion.
  • Most actors who could stand to gain from AI-enabled power concentration are already very powerful. They might oppose efforts to mitigate the risk, obfuscate what’s going on, and interfere with other people’s ability to coordinate against power concentration.

That said, we don’t think that we should give up yet:

  • We don’t know yet how the structural dynamics will play out. We might be in a world where it is very possible to limit power concentration.
  • It’s in almost everyone’s interests to prevent AI-enabled power concentration — including the interests of most powerful people today, since most of them stand to lose out if one small group gains control of most important decisions. It might be possible to coordinate to prevent power concentration and make defecting very costly.
  • There are already some interventions to prevent AI-enabled power concentration that look promising (see above). If this area receives more attention, we may well find more.

What can you do to help?

Because so little dedicated work has been done on preventing extreme AI-enabled power concentration to date, there aren’t yet interventions that we feel confident about directing lots of people towards. And there certainly aren’t many jobs working directly on this issue!

For now, our main advice for most people is to:

  • Bear the risk of AI-enabled power concentration in mind. We’re more likely to avoid AI-enabled power concentration if reasonable people are aware of this risk and want to prevent it. This is especially relevant if you work at an AI company or in AI governance and safety: policies or new technologies will often have knock-on effects on power concentration, and by being aware of this you might be able to avoid inadvertently increasing the risk.
  • Be sensitive to the fact that efforts to reduce this risk could backfire or increase other risks.

There are also some promising early-stage agendas, and we think that some people could start doing good work here already; we’d be really excited to see more people take this work forward.

For more ideas, you can look at the mitigations sections of these papers on AI-enabled coups, gradual disempowerment, and the intelligence curse; as well as these lists of projects on gradual disempowerment. The field is still very early stage, so a key thing to do might just be to follow the organisations and researchers doing work in the area, and look out for ways to get involved.

Learn more

The problem of AI-enabled power concentration

Power grabs

Economic dominance

How bad AI-enabled power concentration could be

Some mitigations for AI-enabled power concentration