
What if we want the agent to single-handedly ensure the future is stable and aligned with our values? AUP probably won’t allow policies which actually accomplish this goal—one needs power to e.g. nip unaligned superintelligences in the bud. AUP aims to prevent catastrophes by stopping bad agents from gaining power to do bad things, but it symmetrically impedes otherwise-good agents.

This doesn’t mean we can’t get useful work out of agents—there are important asymmetries provided by both the main reward function and AU landscape counterfactuals.

First, even though we can’t specify an aligned reward function, the provided reward function still gives the agent useful information about what we want. If we need paperclips, then a paperclip-AUP agent prefers policies which make some paperclips. Simple.
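To illustrate this asymmetry, here is a minimal sketch of an AUP-style objective, assuming the usual formulation (primary reward minus a scaled penalty for shifting auxiliary attainable utilities relative to inaction); all names below are hypothetical placeholders, not code from this sequence:

```python
def aup_reward(primary_reward, aux_q_fns, state, action, noop_action, lam=0.1):
    """Sketch of an AUP-style reward (assumed formulation, not the post's code).

    primary_reward: the provided (possibly misspecified) reward function,
        e.g. "+1 per paperclip made".
    aux_q_fns: auxiliary Q-functions; each estimates how well the agent
        could pursue some auxiliary goal after acting.
    The penalty is the average absolute change in those attainable
    utilities, compared to doing nothing (noop_action).
    """
    penalty = sum(
        abs(q(state, action) - q(state, noop_action)) for q in aux_q_fns
    ) / len(aux_q_fns)
    return primary_reward(state, action) - lam * penalty
```

The penalty term only discourages shifting attainable utilities; the first term still rewards making paperclips, so among similarly low-penalty policies the agent prefers the ones that actually earn primary reward.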

Second, if we don’t like what it’s beginning to do, we can shut it off (because it hasn’t gained power over us). Being shut off would gut its ability to pursue its main goal, so it has “approval incentives” which bias it towards AU landscapes in which its power hasn’t decreased too much, either.

So we can hope to build a non-catastrophic AUP agent and get useful work out of it. We just can’t directly ask it to solve all of our problems: it doesn’t make much sense to speak of a “low-impact singleton.”

  • To emphasize, when I say “AUP agents do X” in this post, I mean that AUP agents correctly implementing the concept of AUP tend to behave in a certain way.
  • As pointed out by Daniel Filan, AUP suggests that one might work better in groups by ensuring one’s actions preserve teammates’ AUs.