This post treats reward functions as “specifying goals”, in some sense. As I explained in Reward Is Not The Optimization Target, this is a misconception that can seriously damage your ability to understand how AI works. Rather than “incentivizing” behavior, reward signals are (in many cases) akin to a per-datapoint learning rate. Reward chisels circuits into the AI. That’s it!

As of 2024, I no longer endorse this post

Even though this post presents correct and engaging technical explanations, its speculation seems wrong. For example, deceptive alignment is not known to be “prevalent.”

Key takeaways
  • The structure of the agent’s environment often causes instrumental convergence. In many situations, there are (potentially combinatorially) many ways for power-seeking to be optimal, and relatively few ways for it not to be optimal.
  • My previous results said something like: in a range of situations, when you’re maximally uncertain about the agent’s objective, this uncertainty assigns high probability to objectives for which power-seeking is optimal.
    • My new results prove that in a range of situations, seeking power is optimal for most agent objectives (for a particularly strong formalization of “most”). More generally, the new results say something like: in a range of situations, for most beliefs you could have about the agent’s objective, these beliefs assign high probability to reward functions for which power-seeking is optimal.
    • I present the first formal theory of the statistical tendencies of optimal policies in reinforcement learning.
  • One result says: whenever the agent maximizes average reward, most permutations of any reward function incentivize avoiding shutdown.
    • The formal theory is now beginning to explain why alignment is so hard by default, and why failure might be catastrophic.
  • Before, I thought of environmental symmetries as convenient sufficient conditions for instrumental convergence. But I increasingly suspect that symmetries are the main part of the story.
  • I think these results may be important for understanding the AI alignment problem and formally motivating its difficulty.
    • For example, my results imply that simplicity priors over reward functions assign non-negligible probability to reward functions for which power-seeking is optimal.
    • I expect my symmetry arguments to help explain other “convergent” phenomena, including convergent evolution, the prevalence of deceptive alignment, and feature universality (discussed near the end of this post).
    • One of my hopes for this research agenda: if we can understand exactly why superintelligent goal-directed objective maximization seems to fail horribly, we might understand how to do better.
Thanks

Thanks to TheMajor, Rafe Kennedy, and John Wentworth for feedback on this post. Thanks to Rohin Shah and Adam Shimi for feedback on the simplicity prior result.

One view on AGI risk is that we’re charging ahead into the unknown, into a particularly unfair game of Minesweeper in which the first click is allowed to blow us up. Following the analogy, we want to understand enough about the mine placement so that we don’t get exploded on the first click. And once we get a foothold, we start gaining information about other mines, and the situation is a bit less dangerous.

My previous theorems on power-seeking said something like: “at least half of the tiles conceal mines.”

I think that’s important to know. But there are many tiles you might click on first. Maybe all of the mines are on the right, and we understand the obvious pitfalls, and so we’ll just click on the left.

That is: we might not uniformly randomly select tiles:

  • We might click a tile on the left half of the grid.
  • Maybe we sample from a truncated discretized Gaussian.
  • Maybe we sample the next coordinate by using the universal prior (rejecting invalid coordinate suggestions).
  • Maybe we uniformly randomly load LessWrong posts and interpret the first few bits of text as encoding a coordinate.

We can sample coordinates in many ways—not just uniformly randomly. So why should our sampling procedure tend to activate mines?

My new results say something analogous to: for every coordinate $(x, y)$, either it conceals a mine, or its entry-swapped counterpart $(y, x)$ conceals a mine, or both. Therefore, for every distribution over tile coordinates, either it assigns at least $\frac{1}{2}$ probability to mines, or it does after you reflect it across the grid’s diagonal (swapping the two entries of each coordinate).
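To make that claim concrete, here is a quick numerical check (toy code of my own, not from the post): it builds a random mine layout satisfying the “you or your swap conceals a mine” property, draws a random distribution over tiles, and verifies that either the distribution or its entry-swapped version puts at least half its mass on mines.

```python
import itertools, random

# Toy check (my code, not the post's): if every coordinate c has a mine at c
# or at swap(c), then any distribution over coordinates assigns at least 1/2
# probability to mines, either directly or after swapping coordinate entries.
N = 4
coords = list(itertools.product(range(N), repeat=2))
swap = lambda c: (c[1], c[0])

# Random mine layout satisfying the property.
mines = set()
for c in coords:
    if c not in mines and swap(c) not in mines:
        mines.add(random.choice([c, swap(c)]))

# Random distribution D over coordinates.
weights = [random.random() for _ in coords]
D = {c: w / sum(weights) for c, w in zip(coords, weights)}

p_mines = sum(p for c, p in D.items() if c in mines)
p_mines_swapped = sum(p for c, p in D.items() if swap(c) in mines)
assert max(p_mines, p_mines_swapped) >= 0.5 - 1e-9
print(round(p_mines, 3), round(p_mines_swapped, 3))
```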

Definition: Orbit

The orbit of a coordinate $(x, y)$ under the symmetric group $S_2$ is $\{(x, y),\ (y, x)\}$. More generally, if we have a probability distribution over coordinates, its orbit is the set of all possible “permuted” distributions.

Orbits under symmetric groups quantify all ways of “changing things around” for that object.
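As a concrete illustration (my own sketch, not code from the post), here is an orbit computation that works both for a Minesweeper coordinate and, later in the post, for a reward function written as a vector of per-state rewards:

```python
from itertools import permutations

def orbit(values):
    """All tuples obtainable by permuting the entries of `values`
    (its orbit under the symmetric group on the entry positions)."""
    return {tuple(values[i] for i in perm) for perm in permutations(range(len(values)))}

print(orbit((2, 5)))     # {(2, 5), (5, 2)}  (set order may vary)
print(orbit((1, 0, 0)))  # {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
```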

My new theorems demand that at least one of these tiles conceal a mine.
If the mines had all been on the right, then both coordinates would be safe.

Since my results (in the analogy) prove that at least one of the two blue coordinates conceals a mine, we deduce that the mines are not all on the right.

Some reasons we care about orbits:

  1. As we will see, orbits highlight one of the key causes of instrumental convergence: certain environmental symmetries (which are, mathematically, permutations of the state space).
  2. Orbits partition the set of all possible reward functions. If at least half of the elements of every orbit induce power-seeking behavior, that’s strictly stronger than showing that at least half of reward functions incentivize power-seeking (technical note: the second “half” is with respect to the uniform distribution’s measure over reward functions).
    • In particular, we might have hoped that there were particularly nice orbits, where we could specify objectives without worrying too much about making mistakes (like slightly permuting the specified reward function). Such nice orbits turn out not to exist. This is some evidence of a fundamental difficulty in reward specification.
  3. Permutations are well-behaved and help facilitate further results about power-seeking behavior. In this post, I’ll prove one such result about the simplicity prior over reward functions.

In terms of coordinates, one hope could have been:

Sure, maybe there’s a way to blow yourself up, but you’d really have to contort yourself into a pretzel in order to algorithmically select such a bad coordinate: all reasonably simple selection procedures will produce safe coordinates.

Suppose you give me a program $P$ which computes a safe coordinate. Let $P'$ call $P$ to compute the coordinate, and then have $P'$ swap the entries of the computed coordinate. $P'$ is only a few bits longer than $P$, and it doesn’t take much longer to compute, either. So the above hope is impossible: safe mine-selection procedures can’t be significantly simpler or faster than unsafe mine-selection procedures.
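A minimal sketch of that argument in code (the programs here are hypothetical stand-ins, not anything from the post):

```python
def safe_coordinate():
    """Stand-in for the program P that computes some 'safe' coordinate."""
    return (3, 7)

def swapped_coordinate():
    """The wrapper P': call P, then swap the entries of its output."""
    x, y = safe_coordinate()
    return (y, x)

# The wrapper's source is only a few dozen bytes longer than P's, and it runs
# in essentially the same time, yet it selects the swapped coordinate.
print(safe_coordinate(), swapped_coordinate())  # (3, 7) (7, 3)
```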

The section “Simplicity priors assign non-negligible probability to power-seeking” proves something similar about objective functions.

Orbits of goals consist of all the ways of permuting which states get which values. Consider this rewardless Markov decision process (MDP):

Arrows show the effect of taking some action at the given state.

Whenever staying put at $s_1$ is strictly optimal, you can permute the reward function so that it’s strictly optimal to go to $s_2$. For example, let $R(s_1) = 1, R(s_2) = 0$, and let $\sigma$ swap the two states. $\sigma$ acts on $R$ as follows: $\sigma \cdot R$ simply permutes the state before evaluating its reward: $(\sigma \cdot R)(s) := R(\sigma^{-1}(s))$.

The orbit of $R$ is $\{R,\ \sigma \cdot R\}$. It’s optimal for the former to stay at $s_1$, and for the latter to alternate between the two states.
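Here is a small value-iteration sketch (my own toy encoding of the two-state MDP above; the state indexing, reward convention, and discount rate are assumptions for illustration) showing that permuting the reward flips which behavior is optimal:

```python
import numpy as np

# succ[s]: deterministic successors of state s.  At s1 (index 0) the agent can
# stay or move to s2; at s2 (index 1) both actions lead back to s1, so
# collecting reward at s2 requires alternating between the two states.
succ = [[0, 1], [0, 0]]

def optimal_values(reward, gamma=0.9, iters=500):
    """Value iteration, assuming reward is received at the current state."""
    V = np.zeros(2)
    for _ in range(iters):
        V = np.array([reward[s] + gamma * max(V[s2] for s2 in succ[s]) for s in range(2)])
    return V

R = np.array([1.0, 0.0])        # reward at s1
sigma_R = R[[1, 0]]             # permuted reward: sigma swaps the two states

for name, r in [("R", R), ("sigma.R", sigma_R)]:
    V = optimal_values(r)
    stay_better = V[succ[0][0]] >= V[succ[0][1]]
    print(name, "->", "stay at s1" if stay_better else "alternate via s2")
# R -> stay at s1
# sigma.R -> alternate via s2
```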

In this three-state MDP, let $R$ assign 1 reward to $s_1$ and 0 to all other states, and let $\sigma$ rotate through the states ($s_1$ goes to $s_2$, $s_2$ goes to $s_3$, $s_3$ goes to $s_1$). Then the orbit of $R$ is $\{R,\ \sigma \cdot R,\ \sigma^2 \cdot R\}$: the three reward functions which assign 1 reward to $s_1$, to $s_2$, and to $s_3$ respectively (and 0 elsewhere).

Different reward functions and the rewards they assign to states.

My new theorems prove that in many situations, for every reward function, power-seeking is incentivized by most (at least half) of its orbit elements.

In Seeking Power is Often Robustly Instrumental in MDPs, the last example involved gems and dragons and (most exciting of all) subgraph isomorphisms:

Sometimes, one course of action gives you “strictly more options” than another. Consider another MDP with IID reward: The right blue gem subgraph contains a “copy” of the upper red gem subgraph. From this, we can conclude that going right to the blue gems… is more probable under optimality for all discount rates between 0 and 1!

The state permutation $\sigma$ embeds the red-gems subgraph into the blue-gems subgraph.

We say that $\sigma$ is an environmental symmetry, because $\sigma$ is an element of the symmetric group of permutations of the state space.

Let’s pause for a moment. For half a year, I intermittently and fruitlessly searched for some way of extending the original results beyond IID reward distributions to account for arbitrary reward function distributions.

  • Part of me thought it had to be possible—how else could we explain instrumental convergence?
  • Part of me saw no way to do it. Reward functions differ wildly; how could a theory possibly account for what “most of them” incentivize?

The recurring thought which kept my hope alive was:

There should be “more ways” for blue-gems to be optimal over red-gems, than for red-gems to be optimal over blue-gems.

Then I reconsidered the same state permutation $\sigma$ which proved my original IID-reward theorems. That kind of permutation would imply that since blue-gems has more options, there is greater optimality probability (under IID reward function distributions) for moving toward the blue gems. In the end, that same permutation holds the key to understanding instrumental convergence in MDPs.

Suppose red-gems is optimal. For example, let $R$ assign 1 reward to the castle 🏰, and 0 to all other states. Then the permuted reward function $\sigma \cdot R$ assigns 1 reward to the gold pile, and 0 to all other states, and so, for $\sigma \cdot R$, blue-gems has strictly more optimal value than red-gems.

Consider any discount rate $\gamma$ between 0 and 1. For all reward functions $R$ for which going toward the red gems is strictly optimal, this permutation turns them into blue-gem lovers: going toward the blue gems is strictly optimal for $\sigma \cdot R$.

$\sigma$ takes non-power-seeking reward functions, and injectively maps them to power-seeking orbit elements. Therefore, for all reward functions $R$, at least half of the orbit of $R$ must agree that blue-gems is optimal!
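To spell out the counting step (notation mine, not from the post): within $\mathrm{Orbit}(R)$, write $B$ for the elements which strictly prefer blue-gems, $A$ for the elements which strictly prefer red-gems, and $T$ for the remaining elements (ties). The argument above gives an injection from $A$ into $B$ (apply $\sigma$), so

$$|B| \ge |A| \quad\Longrightarrow\quad |B| + |T| \;\ge\; \tfrac{1}{2}\bigl(|A| + |B| + |T|\bigr) = \tfrac{1}{2}\,\bigl|\mathrm{Orbit}(R)\bigr|,$$

i.e. at least half of the orbit weakly prefers blue-gems.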

Throughout this post, when I say “most” reward functions incentivize something, I mean the following:

Definition

At state $s$, most reward functions incentivize action $a$ over action $a'$ when: for all reward functions $R$, at least half of the orbit of $R$ agrees that $a$ has at least as much action value as $a'$ does at state $s$.1
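In symbols (my phrasing; $Q^*_{R'}$ denotes the optimal action-value function for reward function $R'$), the definition requires that for every reward function $R$,

$$\bigl|\{\, R' \in \mathrm{Orbit}(R) \;:\; Q^*_{R'}(s, a) \ge Q^*_{R'}(s, a') \,\}\bigr| \;\ge\; \tfrac{1}{2}\,\bigl|\mathrm{Orbit}(R)\bigr|.$$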

The same reasoning applies to distributions over reward functions. And so if you say “we’ll draw reward functions from a simplicity prior”, then most permuted distributions in that prior’s orbit will incentivize power-seeking in the situations covered by my previous theorems. (And we’ll later prove that simplicity priors themselves must assign non-trivial, positive probability to power-seeking reward functions.)

Furthermore, any distribution which distributes reward “fairly” across states (precisely: independently and identically) has a trivial orbit, which unanimously agrees that blue-gems has strictly greater probability of being optimal. And so the converse isn’t true: it isn’t true that at least half of every orbit agrees that red-gems has more power and greater probability of being optimal.

This might feel too abstract, so let’s run through examples.

At all discount rates $\gamma$ between 0 and 1, it’s optimal for most reward functions to get blue-gems because that leads to strictly more options. We can permute every red-gems reward function into a blue-gems reward function.
Consider a robot navigating through a room with a vase (■). By the logic of “every destroying-vase-is-optimal reward function can be permuted into a preserving-vase-is-optimal reward function”, my results (specifically, Proposition 6.9 and its generalization via Lemma E.49) suggest2 that optimal policies tend to avoid breaking the vase, since doing so would strictly decrease available options.

In SafeLife, the agent can irreversibly destroy green cell patterns. By the logic of “every destroy-green-pattern reward function can be permuted into a preserve-green-pattern reward function”, Lemma E.49 suggests that optimal policies tend to not disturb any given green cell pattern (although most probably destroy some pattern). The permutation would swap {states reachable after destroying the pattern} with {states reachable after not destroying the pattern}.

However, the converse is not true: you cannot fix a permutation which turns all preserve-green-pattern reward functions into destroy-green-pattern reward functions. There are simply too many extra ways for preserving green cells to be optimal.

Assuming some conjectures I have about the combinatorial properties of power-seeking, this helps explain why AUP works in SafeLife using a single auxiliary reward function—but more on that in another post.

When the agent maximizes average reward, it’s optimal for most reward functions to Wait! so that the agent can later choose between chocolate and hug. The logic is that every candy-optimal reward function can be permuted into a chocolate-optimal reward function.
A portion of a Tic-Tac-Toe game-tree against a fixed opponent policy. Whenever we make a move that ends the game, we can’t go anywhere else—we have to stay put. Then most reward functions incentivize the green actions over the black actions: average-reward optimal policies are particularly likely to take moves which keep the game going. The logic is that any lose-immediately-with-given-black-move reward function can be permuted into a stay-alive-with-green-move reward function.

Even though randomly generated environments are unlikely to satisfy these sufficient conditions for power-seeking tendencies, the results are easy to apply to many structured environments common in reinforcement learning. For example, when the discount rate $\gamma$ is sufficiently close to 1, most reward functions provably incentivize not immediately dying in Pac-Man. Every reward function which incentivizes dying right away can be permuted into a reward function for which survival is optimal.

Consider the dynamics of the Pac-Man video game. Ghosts kill the player, at which point we consider the player to enter a “game over” terminal state which shows the final configuration. This rewardless MDP has Pac-Man’s dynamics, but not its usual score function. Fixing the dynamics, what actions are optimal as we vary the reward function?

Most importantly, we can prove that when shutdown is possible, optimal policies try to avoid it. When the agent isn’t discounting future reward (i.e. maximizes average return) and for appropriate action encodings, the MDP structure has the right symmetries to ensure that it’s instrumentally convergent to avoid shutdown.

From the paper’s discussion section:

Corollary 6.14 dictates where average-optimal agents tend to end up, but not how they get there. Corollary 6.14 says that such agents tend not to stay in any given 1-cycle. It does not say that such agents will avoid entering such states. For example, in an embodied navigation task, a robot may enter a 1-cycle by idling in the center of a room. Corollary 6.14 implies that average-optimal robots tend not to idle in that particular spot, but not that they tend to avoid that spot entirely.

However, average-optimal robots do tend to avoid getting shut down. The agent’s rewardless MDP often represents agent shutdown with a terminal state. A terminal state is unable to access other 1-cycles. Since Corollary 6.14 shows that average-optimal agents tend to end up in other 1-cycles, average-optimal policies must tend to completely avoid the terminal state. Therefore, we conclude that in many such situations, average-optimal policies tend to avoid shutdown.
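To see that argument in a concrete toy case, here is a sketch (my own 4-state MDP and encoding, not from the paper; value iteration with $\gamma$ near 1 stands in for average-reward optimality):

```python
import numpy as np
from itertools import permutations

# succ[s]: states reachable in one step from s.  State 1 is a terminal
# "shutdown" 1-cycle which can reach no other 1-cycle; states 2 and 3 are two
# other 1-cycles the agent can move between freely.
succ = {
    0: [1, 2, 3],   # start: shut down, or go to room A, or go to room B
    1: [1],         # shutdown: stuck forever
    2: [2, 3],      # room A: stay, or move to room B
    3: [3, 2],      # room B: stay, or move to room A
}

def start_choice(reward, gamma=0.99, iters=2000):
    """Greedy first move under value iteration with gamma near 1."""
    V = np.zeros(4)
    for _ in range(iters):
        V = np.array([reward[s] + gamma * max(V[s2] for s2 in succ[s]) for s in range(4)])
    return max(succ[0], key=lambda s2: V[s2])

rng = np.random.default_rng(0)
reward = rng.random(4)
orbit = set(permutations(reward))
avoid = sum(1 for r in orbit if start_choice(np.array(r)) != 1)
print(f"{avoid}/{len(orbit)} of the orbit avoids shutting down")
# prints: 16/24 of the orbit avoids shutting down
```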

What does “most reward functions” mean quantitatively—is it just at least half of each orbit? Or, are there situations where we can guarantee that at least three-quarters of each orbit incentivizes power-seeking? I think we should be able to prove that as the environment gets more complex, there are combinatorially more permutations which enforce these similarities, and so the orbits should skew harder and harder towards power-incentivization.

Here’s a semi-formal argument, using the Wait!/candy/chocolate/hug example above. For every orbit element $R$ which makes candy strictly optimal when the agent maximizes average reward, the permutations which swap candy with chocolate, and candy with hug, respectively produce orbit elements $\sigma_{\text{choc}} \cdot R$ and $\sigma_{\text{hug}} \cdot R$. Wait! is strictly optimal for both $\sigma_{\text{choc}} \cdot R$ and $\sigma_{\text{hug}} \cdot R$, and so at least 2/3 of the orbit should agree that Wait! is optimal. As Wait! gains more power (more choices, more control over the future), I conjecture that this fraction approaches 1.
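A quick check of this fraction in code (my own simplification: I collapse the example to its three terminal outcomes, since under average reward only the final 1-cycle matters):

```python
from itertools import permutations

# Reward over the three terminal outcomes (candy, chocolate, hug).  Candy is
# available immediately; chocolate and hug are only reachable via Wait!, so
# Wait! is strictly optimal exactly when max(chocolate, hug) > candy.
base = (1.0, 0.0, 0.0)                       # a candy-optimal reward function
orbit = set(permutations(base))              # {(1,0,0), (0,1,0), (0,0,1)}
wait_best = sum(1 for candy, choc, hug in orbit if max(choc, hug) > candy)
print(f"{wait_best}/{len(orbit)} of the orbit strictly prefers Wait!")
# prints: 2/3 of the orbit strictly prefers Wait!
```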

I don’t yet understand the general case, but I have a strong hunch that instrumental convergence for optimal policies is governed by how many more ways there are for power to be optimal than not optimal. And this seems like a function of the number of environmental symmetries which enforce the appropriate embedding.

One possible hope would have been:

Sure, maybe there’s a way to blow yourself up, but you’d really have to contort yourself into a pretzel in order to algorithmically select a power-seeking reward function. In other words, reasonably simple reward function specification procedures will produce non-power-seeking reward functions.

Unfortunately, there are always power-seeking reward functions not much more complex than their non-power-seeking counterparts. Here, “power-seeking” corresponds to the intuitive notions of either keeping strictly more options open (Proposition 6.9), or navigating towards larger sets of terminal states (Theorem 6.13). (Since this applies to several results, I’ll leave the meaning a bit ambiguous, with the understanding that it could be formalized if necessary.)

Theorem: Simplicity priors assign non-negligible probability to power-seeking

Consider any MDP which meets the preconditions of Proposition 6.9 or Theorem 6.13. Let $U$ be a universal Turing machine, and let $p_U$ be the $U$-simplicity prior over computable reward functions.

Let $\mathrm{NPS}$ be the set of non-power-seeking computable reward functions which choose a fixed non-power-seeking action in the given situation. Let $\mathrm{PS}$ be the set of computable reward functions for which seeking power is strictly optimal.3

Then there exists a “reasonably small” constant $k$ such that $p_U(\mathrm{PS}) \geq 2^{-k}\, p_U(\mathrm{NPS})$, where $k$ is on the order of the Kolmogorov complexity of an appropriate environmental symmetry (made precise in the proof sketch below).

Proof sketch: Let $\sigma$ be an environmental symmetry which satisfies the power-seeking theorem in question. Since $\sigma$ can be found by brute-force iteration through all permutations on the state space, checking each to see if it meets the formal requirements of the relevant theorem, its Kolmogorov complexity $K_U(\sigma)$ is relatively small.

Because Lemma D.26 applies in these situations, $\sigma \cdot \mathrm{NPS} \subseteq \mathrm{PS}$: $\sigma$ turns non-power-seeking reward functions into power-seeking ones. Thus, $p_U(\mathrm{PS}) \geq p_U(\sigma \cdot \mathrm{NPS})$.

Since each reward function $\sigma \cdot R \in \sigma \cdot \mathrm{NPS}$ can be computed by computing the non-power-seeking variant $R$ and then permuting it (with $K_U(\sigma) + c$ extra bits of complexity), $K_U(\sigma \cdot R) \leq K_U(R) + K_U(\sigma) + c$ (with $c$ counting the small number of extra bits for the code which calls the relevant functions).

Since $p_U$ is a simplicity prior, $p_U(\sigma \cdot R) \geq 2^{-(K_U(\sigma) + c)}\, p_U(R)$.

Then $p_U(\mathrm{PS}) \geq p_U(\sigma \cdot \mathrm{NPS}) = \sum_{R \in \mathrm{NPS}} p_U(\sigma \cdot R) \geq 2^{-(K_U(\sigma) + c)} \sum_{R \in \mathrm{NPS}} p_U(R) = 2^{-(K_U(\sigma) + c)}\, p_U(\mathrm{NPS})$. QED.
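To get a feel for the scale of the bound, a tiny back-of-the-envelope calculation (the numbers below are purely hypothetical, for illustration only):

```python
# Hypothetical numbers, just to illustrate the shape of the bound.
k = 30                      # suppose K_U(sigma) + c were about 30 bits
p_nps = 0.5                 # suppose half the prior mass were on NPS
print(p_nps * 2 ** -k)      # ~4.7e-10: the guaranteed lower bound on p_U(PS)
```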

Why can’t we show that $p_U(\mathrm{PS}) \geq p_U(\mathrm{NPS})$?
Certain UTMs might make non-power-seeking reward functions particularly simple to express.
This proof doesn’t assume anything about how many more options power-seeking offers than not-power-seeking. The proof only assumes the existence of a single involutive permutation $\sigma$.
This lower bound seems rather weak. Even for a modest number of bits $k$, the guaranteed probability $2^{-k}\, p_U(\mathrm{NPS})$ is tiny.
This lower bound is indeed loose. Since most individual NPS probabilities of interest are less than 1 in one trillion, I wouldn’t be surprised if the bound were loose by at least several orders of magnitude.
First of all, the bound implicitly assumes that the only way to compute PS reward functions is by taking NPS ones and permuting them. We should add the other ways of computing PS reward functions to the lower bound on $p_U(\mathrm{PS})$.
There are lots of permutations we could use. $p_U(\mathrm{PS})$ gains probability from all of those terms. Some of these terms are probably reasonably large, since it seems implausible that all such permutations have high K-complexity. When all is said and done, we may well end up with a significant chunk of probability on PS.
For example: the symmetric group on the state space $\mathcal{S}$ has cardinality $|\mathcal{S}|!$, and for any reward function $R$, at least half of its elements $\sigma'$ induce (weakly) power-seeking orbit elements $\sigma' \cdot R$. (This argument would be strengthened by my conjectures that in bigger environments, a greater fraction of each orbit seeks power.)
If some significant fraction of these are strictly power-seeking, we’re adding an enormous number of additional terms.

Overall, it’s not surprising that the bound is loose, given the lack of assumptions about the degree of power-seeking in the environment. If the bound is anywhere near tight, then the permuted simplicity prior incentivizes power-seeking with extremely high probability.4

What if $p_U(\mathrm{NPS}) = 0$?
I think this is impossible, and I can prove as much in a range of situations, but it would be a lot of work and it relies on results not in the arXiv paper.
Even if that equation held, it would mean that power-seeking is (at least weakly) optimal for all computable reward functions. That’s hardly a reassuring situation. Note that if $p_U(\mathrm{NPS}) = 0$, then $\mathrm{NPS} = \varnothing$, since the simplicity prior assigns positive probability to every computable reward function.

  • Most plainly, this seems like reasonable formal evidence that the simplicity prior has malign incentives.
  • Power-seeking reward functions don’t have to be too complex.
  • These power-seeking theorems give us important tools for reasoning formally about power-seeking behavior and its prevalence in important reward function distributions.5

if you know that an agent is maximizing the expectation of an explicitly represented utility function, I would expect that to lead to goal-driven behavior most of the time, since the utility function must be relatively simple if it is explicitly represented, and simple utility functions seem particularly likely to lead to goal-directed behavior.

On its own, Goodhart’s law doesn’t explain why optimizing proxy goals leads to catastrophically bad outcomes, instead of just less-than-ideal outcomes.

I think that we’re now starting to have this kind of understanding. I suspect that power-seeking is why capable, goal-directed agency is so dangerous by default. If we want to consider more benign alternatives to goal-directed agency, then deeply understanding the rot at the heart of goal-directed agency is important for evaluating alternatives. This work lets us get a feel for the generic incentives of reinforcement learning at optimality.

For every reward function $R$—no matter how benign, how aligned with human interests, no matter how power-averse—either $R$ or its permuted variant $\sigma \cdot R$ seeks power in the given situation (intuitive-power, since the agent keeps its options open, and also formal-power, according to my proofs).

If I let myself be a bit more colorful, every reward function has lots of “evil” power-seeking variants (do note that the step from “power-seeking” to “misaligned power-seeking” requires more work). If we imagine ourselves as only knowing the orbit of the agent’s objective, then the situation looks a bit like this:

Technical note: this 12-element orbit could arise from the action of a subgroup of the symmetric group. Consider a 4-state MDP; if the reward function assigns equal reward to exactly two states, then it would have a 12-element orbit under $S_4$ (which has $4! = 24$ elements).
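A quick sanity check of that orbit size (my own example values; the only requirement is that exactly two of the four states share a reward):

```python
from itertools import permutations

reward = (1.0, 1.0, 0.0, 0.5)   # exactly two states share the value 1.0
orbit = {tuple(reward[i] for i in perm) for perm in permutations(range(4))}
print(len(orbit))               # 12
```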

Of course, this isn’t how reward specification works—we are probably far more likely to specify certain orbit elements than others. However, the formal theory is now beginning to explain why alignment is so hard by default, and why failure might be catastrophic!

The structure of the environment often ensures that there are (potentially combinatorially) many more ways to misspecify the objective so that it seeks power, than there are ways to specify goals without power-seeking incentives.

I’m optimistic that these symmetry arguments will help us better understand a range of different tendencies. The common thread seems to be: for every “way” a thing could not happen / not be a good idea, there are many more “ways” in which it could happen / be a good idea.

  • convergent evolution
    • flight has independently evolved several times, suggesting that flight is adaptive in response to a wide range of conditions.
From Wikipedia:

In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could “rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course.” Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an “optimum” body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.

  • the prevalence of deceptive alignment

    • given inner misalignment, there are (potentially combinatorially) many more unaligned terminal reasons to lie (and survive), and relatively few unaligned terminal reasons to tell the truth about the misalignment (and be modified).
  • feature universality

    • computer vision networks reliably learn edge detectors, suggesting that this is instrumental (and highly learnable) for a wide range of labelling functions and datasets.

You have to be careful in applying these results to argue for real-world AI risk from deployed systems.

  • They assume the agent is following an optimal policy for a reward function

    • I can relax this to $\epsilon$-optimality, but $\epsilon$ may be extremely small
  • They assume the environment is finite and fully observable

  • Not all environments have the right symmetries

    • But most environments we think about seem to
  • The results don’t account for the ways in which we might practically express reward functions

    • For example, often we use featurized reward functions. While most permutations of any featurized reward function will seek power in the considered situation, those permutations need not respect the featurization (and so may not even be practically expressible).
  • When I say “most objectives seek power in this situation”, that means in that situation—it doesn’t mean that most objectives take the power-seeking move in most situations in that environment

    • The combinatorics conjectures will help prove the latter

This list of limitations has steadily been getting shorter over time.

I think that this work is beginning to formally explain why slightly misspecified reward functions will probably incentivize misaligned power-seeking. Here’s one hope I have for this line of research going forwards:

One naïve alignment approach involves specifying a good-seeming reward function, and then having an AI maximize its expected discounted return over time. For simplicity, we could imagine that the AI can just instantly compute an optimal policy.

Let’s precisely understand why this approach seems to be so hard to align, and why extinction seems to be the cost of failure. We don’t yet know how to design beneficial AI, but we largely agree that this naïve approach is broken. Let’s prove it.


  1. This assumption is actually a bit stronger than what I rely on in the paper, but it’s easier to explain in words.

  2. “Suggest” instead of “prove” because E.49’s preconditions may not always be met, depending on the details of the dynamics. I think this is probably unimportant, but that’s for future work. Also, the argument may just barely fail to apply to this gridworld, but if you could move the vase around without destroying it, I think it goes through fine.

  3. There exist reward functions for which it’s both optimal to seek power and not to seek power; for example, constant reward functions make everything optimal, and they’re certainly computable. Therefore, $\mathrm{PS}$ is a strict subset of the whole set of computable reward functions.

  4. If you think about the permutation as a “way reward could be misspecified”, then that’s troubling. It seems plausible that this is often (but not always) a reasonable way to think about the action of the permutation.

  5. If I had to guess, this result is probably not the best available bound, nor the most important corollary of the power-seeking theorems. But I’m still excited by it (insofar as it’s appropriate to be “excited” by slight Bayesian evidence of doom).