This post treats reward functions as “specifying goals”, in some sense. As I explained in Reward Is Not The Optimization Target, this is a misconception that can seriously damage your ability to understand how AI works. Rather than “incentivizing” behavior, reward signals are (in many cases) akin to a per-datapoint learning rate. Reward chisels circuits into the AI. That’s it!
Summary of the current power-seeking theorems

Give me a utility function, any utility function, and for most ways I could jumble it up—most ways I could permute which outcomes get which utility—the agent will seek power.
This kind of argument assumes that (the set of utility functions we might specify) is closed under permutation. This is unrealistic because, practically speaking, we reward agents based on observed features of the agent’s environment.
For example, Pac-Man eats dots and gains points. A football AI scores a touchdown and gains points. A robot hand solves a Rubik’s cube and gains points. But most permutations of these objectives are implausible because they’re high-entropy: they’re very complex, assigning high reward to one state and low reward to another state without a simple generating rule that grounds out in observed features. Practical objective specification doesn’t allow that many degrees of freedom in which states get which reward.
I explore how instrumental convergence works in this case. I also walk through how these new results retrodict the fact that instrumental convergence basically disappears for agents with utility functions over action-observation histories.
Consider the following environment, where the agent can either stay put or move along a purple arrow.
Suppose the agent gets some amount of reward each timestep, and it’s choosing a policy to maximize its average per-timestep reward. Previous results tell us that for generic reward functions over states, at least half of them incentivize going right. There are two terminal states on the left and three on the right, and 3 > 2; we conclude that at least $\frac{\lfloor 3/2 \rfloor}{\lfloor 3/2 \rfloor + 1} = \frac{1}{2}$ of objectives incentivize going right.
But it’s damn hard to have so many degrees of freedom that you’re specifying a potentially independent utility number for each state.^{1} Meaningful utility functions will be featurized in some sense—only depending on certain features of the world state, of how the outcomes transpired, and so on. If the featurization is linear, then it’s particularly easy to reason about power-seeking incentives.
Let $\text{feat}(s)$ be the feature vector for state $s$, where the first entry is 1 iff the agent is standing on $△$. The second and third entries represent $◯$ and $★$, respectively. That is, the featurization only records what shape the agent is standing on. Suppose the agent makes decisions in a way which depends only on the featurized reward of a state: $R(s)=\text{feat}(s)^{\top}\alpha$, where $\alpha\in\mathbb{R}^{3}$ expresses the feature coefficients. Then the relevant terminal states are only {triangle, circle, star}, and we conclude that 2/3 of coefficient vectors incentivize going right. This is true more precisely in the orbit sense: for every coefficient vector $\alpha$, at least^{2} 2/3 of its permuted variants make the agent prefer to go right.
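To make the orbit claim concrete, here is a small sketch (my own toy code, not from the formal results) that enumerates the permuted variants of a coefficient vector and counts how many make going right optimal. The `LEFT` and `RIGHT` sets encode the terminal-state feature vectors of the toy environment, and "prefer" is read as weak optimality:

```python
from itertools import permutations

# Terminal-state feature vectors reachable on each side of the toy
# environment (entries: triangle, circle, star).
LEFT = [(1, 0, 0), (0, 0, 0)]              # triangle, or a featureless state
RIGHT = [(0, 1, 0), (0, 0, 0), (0, 0, 1)]  # circle, featureless, star

def value(alpha, feats):
    """Best featurized reward among these terminal states."""
    return max(sum(a * f for a, f in zip(alpha, feat)) for feat in feats)

def fraction_preferring_right(alpha):
    """Fraction of alpha's orbit (feature permutations) for which
    going right is weakly optimal."""
    orbit = set(permutations(alpha))
    right = sum(1 for a in orbit if value(a, RIGHT) >= value(a, LEFT))
    return right / len(orbit)

# The orbit-level guarantee: at least 2/3 of each orbit prefers right.
for alpha in [(3.0, 2.0, 1.0), (1.0, -1.0, -2.0), (5.0, 5.0, 5.0)]:
    assert fraction_preferring_right(alpha) >= 2 / 3
```

For a generic vector like $(3,2,1)$ the bound is tight (exactly 2/3 of the orbit prefers right), while a constant vector trivially prefers right for its entire one-element orbit.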
This particular featurization increases the strength of the orbit-level incentives—whereas before we could only guarantee 1/2-strength power-seeking tendencies, now we guarantee 2/3-strength.^{3}^{4}
There’s another point I want to make in this tiny environment.
Suppose we find an environmental symmetry $\phi$ which lets us apply the original power-seeking theorems to raw reward functions over the world state. Letting $e_{s}\in\mathbb{R}^{6}$ be a column vector with an entry of 1 at state $s$ and 0 elsewhere, in this environment we have the symmetry
$$\phi\cdot\underbrace{\{e_{△},e_{\text{left}}\}}_{\text{state distributions, left}}=\{e_{◯},e_{\text{right}}\}\subsetneq\underbrace{\{e_{◯},e_{\text{right}},e_{★}\}}_{\text{state distributions, right}}.$$

Given a state featurization, and given that we know there’s a state-level environmental symmetry $\phi$, when can we conclude that there’s also feature-level power-seeking in the environment?
Here, we’re asking: “if reward is only allowed to depend on how often the agent visits each shape, and we know that there’s a raw state-level symmetry, when do we know that there’s a shape-feature embedding from (left shape feature vectors) into (right shape feature vectors)?”
In terms of “what choice lets me access ‘more’ features?”, this environment is relatively easy—look, there are twice as many shapes on the right. More formally, we have:
$$\underbrace{\left\{\begin{pmatrix}1_{△}\\0_{◯}\\0_{★}\end{pmatrix},\begin{pmatrix}0_{△}\\0_{◯}\\0_{★}\end{pmatrix}\right\}}_{\text{feature vectors on the left}}\hookrightarrow\underbrace{\left\{\begin{pmatrix}0_{△}\\1_{◯}\\0_{★}\end{pmatrix},\begin{pmatrix}0_{△}\\0_{◯}\\0_{★}\end{pmatrix},\begin{pmatrix}0_{△}\\0_{◯}\\1_{★}\end{pmatrix}\right\}}_{\text{feature vectors on the right}},$$ where the left set can be permuted two separate ways into the right set (since the zero vector isn’t affected by feature permutations).
But I’m gonna play dumb and walk through the reasoning anyway, to illustrate a more important point about how power-seeking tendencies are guaranteed when featurizations respect the structure of the environment.
Consider the state $s_{△}$. We permute it to be $s_{◯}$ using $\phi$ (because $\phi(s_{△})=s_{◯}$), and then featurize it to get a feature vector with a 1 in the $◯$ entry and 0 elsewhere.
Alternatively, suppose we first featurize $s_{△}$ to get a feature vector with a 1 in the $△$ entry and 0 elsewhere. Then we swap which features are which, by switching $△$ and $◯$. We get a feature vector with a 1 in the $◯$ entry and 0 elsewhere—the same result as above.
The shape featurization plays nice with the actual nitty-gritty environment-level symmetry. More precisely, a sufficient condition for feature-level symmetries: (featurizing and then swapping which features are which) commutes with (swapping which states are which and then featurizing).^{5} And where there are feature-level symmetries, just apply the normal power-seeking theorems to conclude that there are decision-making tendencies to choose the larger sets of feature vectors.
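The commutativity condition can be checked numerically. Here is a sketch with a hypothetical state ordering and matrices of my own devising (the real theorems work with state-visit distributions, but the commuting-square structure is the same):

```python
import numpy as np

# Hypothetical ordering of the six states in the toy environment.
TRI, CIR, STAR, LEFT, RIGHT, START = range(6)

# Featurization F maps a one-hot state in R^6 to shape features in R^3;
# the rows are the triangle, circle, and star features.
F = np.zeros((3, 6))
F[0, TRI] = F[1, CIR] = F[2, STAR] = 1  # non-shape states featurize to 0

def perm_matrix(perm, n):
    """Permutation matrix sending basis vector e_i to e_perm[i]."""
    P = np.zeros((n, n))
    for i, j in enumerate(perm):
        P[j, i] = 1
    return P

# Environmental symmetry phi: swap triangle<->circle and left<->right.
Phi = perm_matrix([CIR, TRI, STAR, RIGHT, LEFT, START], 6)
# Feature-level symmetry sigma: swap the triangle and circle features.
Sigma = perm_matrix([1, 0, 2], 3)

# The sufficient condition from the text: (featurize, then swap features)
# equals (swap states, then featurize).
assert np.array_equal(Sigma @ F, F @ Phi)
```

Each column of `F @ Phi` is $\text{feat}(\phi(s))$ and each column of `Sigma @ F` is $\sigma(\text{feat}(s))$, so the assertion is exactly the commuting square described above.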
Now suppose instead that the featurization records the agent’s $x/y$ coordinates: $R(s_{x,y})=\alpha_{1}x+\alpha_{2}y$.
Given the start state, if the agent goes up, its reachable feature vector is just $\{(x{=}0,\,y{=}1)\}$, whereas the agent can induce $(x{=}1,\,y{=}0)$ if it goes right. Therefore, whenever up is strictly optimal for a featurized reward function, we can permute that reward function’s feature weights by swapping the $x$- and $y$-coefficients ($\alpha_{1}$ and $\alpha_{2}$, respectively). This new reward function is also featurized, and it makes going right strictly optimal. So the usual arguments ensure that at least half of these featurized reward functions make it optimal to go right.
But sometimes these similarities won’t hold, even when it initially looks like they “should”!
The agent can induce the feature vectors $\{(x{=}{-}1,\,y{=}0),\,(x{=}{-}1,\,y{=}{-}1)\}$ if it goes left. However, it can induce $\{(x{=}1,\,y{=}0),\,(x{=}1,\,y{=}1)\}$ if it goes right.
There is no way of switching feature labels so as to copy the left feature set into the right feature set. That is, there’s no way to apply a feature permutation to the left set and thereby produce a subset of the right feature set. Therefore, the theorems don’t apply, and so they don’t guarantee anything about how most permutations of every reward function incentivize some kind of behavior.
On reflection, this makes sense. If $\alpha_{1}=\alpha_{2}=-1$, then there’s no way the agent will want to go right. Instead, it’ll go for the negative feature values offered by going left. This holds for all permutations of this feature labelling, too. So the orbit-level incentives can’t hold.
If the agent can be made to “hate everything” (all feature weights $\alpha_{i}$ are negative), then it will pursue opportunities which give it negative-valued feature vectors, or at least strive for the oblivion of the zero feature vector. Vice versa if it positively values all features.
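The no-embedding claim can be verified mechanically by brute-forcing all feature permutations. This toy sketch uses the two feature sets from the example above:

```python
from itertools import permutations

# (x, y) feature vectors reachable by going left vs. right.
LEFT = {(-1, 0), (-1, -1)}
RIGHT = {(1, 0), (1, 1)}

def embeds(small, big):
    """Is there a permutation of feature coordinates that maps `small`
    into a subset of `big`?"""
    n = len(next(iter(small)))
    for perm in permutations(range(n)):
        image = {tuple(v[i] for i in perm) for v in small}
        if image <= big:
            return True
    return False

# No feature relabelling copies either side into the other, so the
# power-seeking theorems don't apply to this featurization.
assert not embeds(LEFT, RIGHT)
assert not embeds(RIGHT, LEFT)
```

Compare the earlier shape featurization, where the left feature vectors did embed into the right ones and the theorems went through.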
Consider a deep RL training process for StarCraft II, where the agent’s episodic reward is featurized into a weighted sum of the different resources the agent has at the end of the game, with weight vector $\alpha$. For simplicity, we fix an opponent policy and a learning regime (number of epochs, learning rate, hyperparameters, network architecture, and so on). We consider the effects of varying the reward feature coefficients $\alpha$.
- Outcomes of interest: game state trajectories.
- AI decision-making function: $f(T\mid\alpha)$ returns the probability that, given our fixed learning regime and reward feature vector $\alpha$, the training process produces a policy network whose rollouts instantiate some trajectory $\tau\in T$.
- What the theorems say:
  - If $\alpha$ is the zero vector, the agent gets the same reward for all trajectories, so gradient descent does nothing, and the randomly initialized policy network quickly loses against any reasonable opponent. No power-seeking tendencies if this is the only plausible parameter setting.
  - If $\alpha$ only has negative entries, then the policy network quickly learns to throw away all of its resources and not collect any more. Once this has been achieved, the training process is indifferent to whether the game is lost. No real power-seeking tendencies if it’s only plausible that we specify a negative vector.
  - If $\alpha$ has a positive entry, then policies learn to gather as much of that resource as possible. In particular, there aren’t orbit elements $\alpha$ with positive entries for which the learned policy tends to just die, and so we don’t even have to check that the permuted variants $\phi\cdot\alpha$ of such feature vectors are also plausible. Power-seeking occurs.
 This reasoning depends on which kinds of feature weights are plausible, and so wouldn’t have been covered by the previous results.
Similar setup to the StarCraft II one, but now the agent’s episode reward is $\alpha_{1}\cdot$(amount of iron ore in chests within 100 blocks of spawn after 2 in-game days) $+\ \alpha_{2}\cdot$(same, but for coal), where $\alpha_{1},\alpha_{2}\in\mathbb{R}$ are scalars (together, they form the coefficient vector $\alpha\in\mathbb{R}^{2}$).
- Outcomes of interest: game state trajectories.
- AI decision-making function: $f(T\mid\alpha)$ returns the probability that, given our fixed learning regime and feature coefficients $\alpha$, the training process produces a policy network whose rollouts instantiate some trajectory $\tau\in T$.
- What the theorems say:
  - If $\alpha$ is the zero vector, the analysis is the same as before. No power-seeking tendencies. In fact, the agent tends not to gain power, because it has no optimization pressure steering it towards the few action sequences which gain the agent power.
  - If $\alpha$ only has negative entries, the agent definitely doesn’t hoard resources in chests. Beyond that, there’s no real reward signal, and gradient descent doesn’t do a whole lot due to sparsity.
  - If $\alpha$ has a positive entry, and if the learning process is good enough, agents tend to stay alive. If the learning process is good enough, there just won’t be a single feature vector with a positive entry which tends to produce non-self-empowering policies.
It’s nice to make the analysis so far a bit more formal, but it isn’t really pointing out anything that we couldn’t have figured out pre-theoretically. I think I can sketch out more novel reasoning, but I’ll leave that to a future post.
Consider some arbitrary set $D\subseteq\mathbb{R}^{d}$ of “plausible” utility functions over $d$ outcomes. If we have the usual big set $B$ of outcome lotteries (which possibilities are, in the view of this theory, often attained via “power-seeking”), and $B$ contains $n$ copies of some smaller set $A$ via environmental symmetries $\phi_{1},\ldots,\phi_{n}$, then when are there orbit-level incentives within $D$—when will most reasonable variants of utility functions make the agent more likely to select $B$ rather than $A$?
Answer: when the environmental symmetries can be applied to the $A$-preferring variants in a way which produces another plausible objective. Slightly more formally: for every plausible utility function $u\in D$ for which the agent has a greater chance of selecting $A$ than of selecting $B$, we need the membership $\phi_{i}\cdot u\in D$ for all $i=1,\ldots,n$.
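As a toy sketch of this closure condition (with a made-up decision function and plausible sets of my own, not anything from the formal results):

```python
# One symmetry phi, acting on 2-dimensional "utility" parameter vectors
# by swapping their entries.
def phi(u):
    return (u[1], u[0])

def f_prefers_A(u):
    # Hypothetical decision-making function: say A offers outcome value
    # u[0] and B offers u[1]; the agent prefers A iff u[0] > u[1].
    return u[0] > u[1]

def closure_holds(D):
    """For every plausible u that makes the agent prefer A over B,
    is phi . u also plausible (a member of D)?"""
    return all(phi(u) in D for u in D if f_prefers_A(u))

assert closure_holds({(2, 1), (1, 2), (0, 5)})  # orbit stays inside D
assert not closure_holds({(2, 1), (0, 5)})      # (1, 2) is missing from D
```

When the closure check passes, the orbit-level argument goes through: every $A$-preferring plausible objective has a $B$-preferring plausible counterpart.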
This covers the totally general case of arbitrary sets of utility function classes we might use. (And, technically, “utility function” is decorative at this point—it just stands in for a parameter which we use to retarget the AI policy-production process.)
The general result highlights how $D$ ≝ {plausible objective functions} affects what conclusions we can draw about orbit-level incentives. All else equal, being able to specify more plausible objective functions for which $f(B\mid u)\geq f(A\mid u)$ means that we’re more likely to ensure closure under certain permutations. Similarly, adding plausible $A$-preferring objectives makes it harder to satisfy $f(B\mid u)<f(A\mid u)\implies\phi_{i}\cdot u\in D$, which makes it harder to ensure closure under certain permutations, which makes it harder to prove instrumental convergence.
Structural assumptions on utility really do matter when it comes to instrumental convergence:
| Setting | Strength of instrumental convergence |
|---|---|
| $u_{\text{AOH}}$ | Nonexistent |
| $u_{\text{OH}}$ | Strong |
| State-based objectives (e.g. state-based reward in MDPs) | Moderate |

Environmental structure can cause instrumental convergence, but (the absence of) structural assumptions on utility can make instrumental convergence go away (for optimal agents).
In particular, for the MDP case, I wrote:
MDPs assume that utility functions have a lot of structure: the utility of a history is time-discounted and additive over observations. Basically, $u(a_{1}o_{1}a_{2}o_{2}\ldots)=\sum_{t=1}^{\infty}\gamma^{t-1}R(o_{t})$ for some $\gamma\in[0,1)$ and reward function $R:\mathcal{O}\to\mathbb{R}$ over observations. And because of this structure, the agent’s average per-timestep reward is controlled by the last observation it sees. There are exponentially fewer last observations than there are observation histories. Therefore, in this situation, instrumental convergence is exponentially weaker for reward functions than for arbitrary $u_{\text{OH}}$.
This is equivalent to a featurization which takes in an action-observation history, ignores the actions, and spits out time-discounted observation counts. The utility function is then over observations (which are just states, in the MDP case). Here, the symmetries can only be over states, not histories. No matter how expressive the plausible state-based reward set $D_{S}$ is, it can’t compete with the exponentially larger domain of the observation-history-based utility set $D_{\text{OH}}$; the featurization has limited how strong instrumental convergence can get by projecting the high-dimensional $u_{\text{OH}}$ into the lower-dimensional $u_{\text{State}}$.
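The structural assumption in the quoted passage (utility is time-discounted and additive over observations, with actions ignored) can be sketched in toy code of my own:

```python
def u(history, R, gamma=0.9):
    """u(a1 o1 a2 o2 ...) = sum_t gamma^(t-1) R(o_t); actions are ignored."""
    observations = [o for (_, o) in history]
    return sum(gamma ** t * R(o) for t, o in enumerate(observations))

# Hypothetical observation rewards for a tiny example.
R = {"win": 1.0, "lose": 0.0, "mid": 0.1}.get

# Two histories with the same observations but different actions get the
# same utility: the featurization has thrown the action information away.
h1 = [("up", "mid"), ("up", "win")]
h2 = [("down", "mid"), ("down", "win")]
assert u(h1, R) == u(h2, R)
```

This is exactly the projection described above: no matter how we pick $R$ and $\gamma$, the induced utility function can never distinguish two histories that share an observation sequence.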
But when we go from $u_{\text{AOH}}$ to $u_{\text{OH}}$, we’re throwing away even more information—information about the actions! This is also a sparse projection. So what’s up?
When we throw away info about actions, we’re breaking some symmetries which made instrumental convergence disappear in the $u_{\text{AOH}}$ case. In any deterministic environment, there are equally many $u_{\text{AOH}}$ which make me want to go e.g. down (and, say, die) as which make me want to go up (and survive). This is guaranteed by symmetries which swap the value of an optimal AOH with the value of an AOH going the other way.
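The counting argument can be illustrated with a tiny brute force over a hypothetical grid of utility values for just two histories:

```python
from itertools import product

# Two action-observation histories in a tiny deterministic environment:
# going up (survive) vs. going down (die). A u_AOH assigns each an
# arbitrary utility, here drawn from a small toy grid of values.
VALUES = [0.0, 1.0, 2.0]
prefer_up = prefer_down = 0
for u_up, u_down in product(VALUES, repeat=2):
    if u_up > u_down:
        prefer_up += 1
    elif u_down > u_up:
        prefer_down += 1

# The swap (u_up, u_down) -> (u_down, u_up) is a bijection between the
# two sets, so the counts match: no instrumental convergence over u_AOH.
assert prefer_up == prefer_down
```

Restricting to $u_{\text{OH}}$ removes exactly this swap whenever the two histories share observations but differ in actions, which is why the balance breaks.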
But when we restrict the utility function to not care about actions, we can only modify how it cares about observation histories. The AOH environmental symmetry $\phi_{\text{AOH}}$, which previously ensured balanced statistical incentives, no longer enjoys closure under $D_{\text{OH}}$. So the restricted-plausible-set theorem no longer works, and instrumental convergence appears when restricting from $u_{\text{AOH}}$ to $u_{\text{OH}}$.
Thanks

I thank Justis Mills for feedback on a draft.
From last time:

- The results aren’t first-person: they don’t deal with the agent’s uncertainty about what environment it’s in.
- Not all environments have the right symmetries.
  - But most ones we think about seem to.
- The results don’t account for the ways in which we might practically express reward functions. (This limitation was handled by this post.)

I think it’s reasonably clear how to apply the results to realistic objective functions. I also think our objective specification procedures are quite expressive, and so the closure condition will hold and the results go through in the appropriate situations.

It’s not hard to have this many degrees of freedom in such a small toy environment, but the toy environment is pedagogical. It’s practically impossible to have full degrees of freedom in an environment with a trillion states. ⤴

“At least”, and not “exactly.” If $α$ is a constant feature vector, it’s optimal to go right for every permutation of $α$ (trivially so, since $α$’s orbit has a single element—itself). ⤴

Even under my more aggressive conjecture about “fractional terminal state copy containment”, the unfeaturized situation would only guarantee 3/5-strength orbit incentives, strictly weaker than 2/3-strength. ⤴

Certain trivial featurizations can decrease the strength of power-seeking tendencies, too. For example, if the featurization is 2-dimensional, $\text{feat}(s)=\begin{pmatrix}1\text{ if the agent is dead in }s\text{, else }0\\1\text{ if the agent is alive in }s\text{, else }0\end{pmatrix}$, this will tend to produce 1:1 survive/die orbit-level incentives, whereas the incentives for raw reward functions may be 1,000:1 or stronger. ⤴

There’s something abstraction-adjacent about this result (proposition D.1 in the linked Overleaf paper). The result says something like: “do the grooves of the agent’s world-model featurization respect the grooves of symmetries in the structure of the agent’s environment?”, and if they do, bam, sufficient condition for power-seeking under the featurized model. I think there’s something important here about how good world-model featurizations should work, but I’m not sure what that is yet.
I do know that “the featurization should commute with the environmental symmetry” is something I’d thought—in basically those words—no fewer than 3 times, as early as summer 2021, without explicitly knowing what that should even mean. ⤴