If you want to make huge profits in order to solve alignment, and you are smart/capable enough to start a successful big AI lab, you are probably also smart/capable enough to do some other thing that makes you a lot of money without the side effect of increasing P(doom).
Moral Maze dynamics push corporations not just to pursue profit at all costs, but also to be extremely myopic. As long as the death doesn't happen before the end of the quarter, the big labs, being immoral mazes, have no reason to give a shit about x-risk. Of course, every individual member of a big lab has reason to care, but the organization as an egregore does not (and so there is strong selection pressure for these organizations to be staffed by people who have low P(doom) and/or don't (think they) value the future lives of themselves and others).
Contrary to what the current wiki page says, simulacrum levels 3 and 4 are not just about ingroup signalling. See these posts and more, as well as Baudrillard's original work if you're willing to read dense philosophy.
Here is an example where levels 3 and 4 don't relate to ingroups at all, which I think may be more illuminating than the classic "lion across the river" example:
Alice asks "Does this dress make me look fat?" Bob says "No."
Depending on the simulacrum level of Bob's reply, he means:
- "I believe that the dress does not make you look fat."
- "I want you to believe that the dress does not make you look fat, probably because I want you to feel good about yourself."
- "Niether you nor I are autistic truth-obsessed rationalists, and therefore I recognize that you did not ask me this question out of curiosity as to whether or not the dress makes you look fat. Instead, due to frequent use of simulacrum level 2 to respond to these sorts of queries in the past, a new social equilibrium has formed where this question and its answer are detached from object-level truth, instead serving as a signal that I care about your feelings. I do care about your feelings, so I play my part in the signalling ritual and answer 'No.'"
- "Similar to 3, except I'm a sociopath and don't necessarily actually care about your feelings. Instead, I answer 'No' because I want you to believe that I care about your feelings."
Here are some potentially better definitions, of which the group association definitions are a clear special case:
- Communication of object-level truth.
- Optimization over the listener's belief that the speaker is communicating on simulacrum level 1, i.e. desire to make the listener believe what the speaker says.
These are the standard old definitions. The transition from 1 to 2 is pretty straightforward. When I use 2, I want you to believe I'm using 1. This is not necessarily lying. It is more like Frankfurt's bullshit. I care about the effects of this belief on the listener, regardless of its underlying truth value. This is often (naively considered) prosocial, see this post for some examples.
Now, the transition from 2 to 3 is a bit tricky. Level 3 is a result of a social equilibrium that emerges after communication in that domain gets flooded by prosocial level 2. Eventually, everyone learns that these statements are not about object-level reality, so communication on levels 1 and 2 becomes futile. Instead, we have:
- Signalling of some trait or bid associated with historical use of simulacrum level 2.
E.g. that Alice cares about Bob's feelings, in the case of the dress, or that I'm with the cool kids that don't cross the river, in the case of the lion. Another example: bids to hunt stag.
3 to 4 is analogous to 1 to 2.
- Optimization over the listener's belief that the speaker is communicating on simulacrum level 3, i.e. desire to make the listener believe that the speaker has the trait signalled by simulacrum level 3 communication (i.e. the trait that was historically associated with prosocial level 2 communication).
Like with the jump from 1 to 2, the jump from 3 to 4 has the quality of bullshit, not necessarily lies. Speaker intent matters here.
Oops that was a typo. Fixed now, and added a comma to clarify that I mean the latter.
Formalizing Placebomancy
I propose the following desideratum for self-referential doxastic modal agents (agents that can think about their own beliefs), where $\Box P$ represents "I believe $P$", $W \mid P$ represents the agent's world model conditional on $P$, and $\succ$ is the agent's preference relation:
Positive Placebomancy: For any proposition $P$, the agent concludes $P$ from $\Box P \to P$, if $(W \mid P) \succ (W \mid \neg P)$.
In natural English: the agent believes hyperstitions that would benefit it if true.
"The placebo effect works on me when I want it to".
A real life example: In this sequence post, Eliezer Yudkowsky advocates for using positive placebomancy on "I cannot self-deceive".
I would also like to formalize a notion of "negative placebomancy" (doesn't believe hyperstitions that don't benefit it), "total placebomancy" (believes hyperstitions iff they are beneficial), "group placebomancy" (believes group hyperstitions that are good for everyone in the group, conditional on all other group members having group placebomancy or similar), and generalizations to probabilistic self-referential agents (like "ideal fixed-point selection" for logical inductor agents).
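To gesture at the shape of the first two (this is only a sketch, in the same notation as above, and none of it is pinned down yet):
- Negative Placebomancy: for any proposition $P$, the agent does not conclude $P$ from $\Box P \to P$ unless $(W \mid P) \succ (W \mid \neg P)$.
- Total Placebomancy: for any proposition $P$, the agent concludes $P$ from $\Box P \to P$ if and only if $(W \mid P) \succ (W \mid \neg P)$.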
I will likely cover all of these in a future top-level post, but I wanted to get this idea out into the open now because I keep finding myself wanting to reference it in conversation.
Edit log:
- 2024-12-08 rephrased the criterion to be an inference rule rather than an implication. Also made a minor grammar edit.
I think I know (80% confidence) the identity of this "local Vassarite" you are referring to, and I think I should reveal it, but, y'know, Unilateralist's Curse, so if anyone gives me a good enough reason not to reveal this person's name, I won't. Otherwise, I probably will, because right now I think people really should be warned about them.
People often say things like "do x. Your future self will thank you." But I've found that I very rarely actually thank my past self, after x has been done, and I've reaped the benefits of x.
This quick take is a preregistration: For the next month I will thank my past self more, when I reap the benefits of a sacrifice of their immediate utility.
e.g. When I'm stuck in bed because the activation energy to leave is too high, and then I overcome that and go for a run and then feel a lot more energized, I'll look back and say "Thanks 7 am Morphism!"
(I already do this sometimes, but I will now make a TAP out of it, which will probably cause me to do it more often.)
Then I will make a full post describing in detail what I did and what (if anything) changed about my ability to sacrifice short-term gains for greater long-term gains, along with plausible theories w/ probabilities on the causal connection (or lack thereof), as well as a list of potential confounders.
Of course, it is possible that I completely fail to even install the TAP. I don't think that's very likely, because I'm #1-prioritizing my own emotional well-being right now (I'll shift focus back onto my world-saving pursuits once I'm more stably not depressed). In that case, I will not write a full post, because the experiment would not even have been done; I will instead just make a comment on this shortform to that effect.
Edit: There are actually many ambiguities with the use of these words. This post is about one specific ambiguity that I think is often overlooked or forgotten.
The word "preference" is overloaded (and so are related words like "want"). It can refer to one of two things:
- How you want the world to be, i.e. your terminal values, e.g. "I prefer worlds in which people don't needlessly suffer."
- What makes you happy, e.g. "I prefer my ice cream in a waffle cone."
I'm not sure how we should distinguish these. So far, my best idea is to call the former "global preferences" and the latter "local preferences", but that clashes with the pre-existing notion of locality of preferences as the quality of terminally caring more about people/objects closer to you in spacetime. Does anyone have a better name for this distinction?
I think we definitely need to distinguish them, however, because they often disagree, and most "values disagreements" between people are just disagreements in local preferences, and so could be resolved by considering global preferences.
I may write a longpost at some point on the nuances of local/global preference aggregation.
Example: Two alignment researchers, Alice and Bob, both want access to a limited supply of compute. The rest of this example is left as an exercise.
Emotions can be treated as properties of the world, optimized with respect to constraints like anything else. We can't edit our emotions directly but we can influence them.
Oh no I mean they have the private key stored on the client side and decrypt it there.
Ideally all of this is behind a nice UI, like Signal.
I mean, Signal messenger has worked pretty well in my experience.
But safety research can actually disproportionately help capabilities, e.g. the development of RLHF allowed OAI to turn their weird text predictors into a very generally useful product.
I could see embedded agency being harmful, though, since an actual implementation of it would be really useful for inner alignment.
Some off the top of my head:
- Outer Alignment Research (e.g. analytic moral philosophy in an attempt to extrapolate CEV) seems to be totally useless to capabilities, so we should almost definitely publish that.
- Evals for Governance? Not sure about this since a lot of eval research helps capabilities, but if it leads to regulation that lengthens timelines, it could be net positive.
Edit: oops, I didn't see Tammy's comment
Idea:
Have everyone who wants to share and receive potentially exfohazardous ideas/research send out a 4096-bit RSA public key.
Then, make a clone of the alignment forum, where every time you make a post, you provide a list of the public keys of the people who you want to see the post. Then, on the client side, it encrypts the post using all of those public keys. The server only ever holds encrypted posts.
Then, users can put in their own private key to see a post. The encrypted post gets downloaded to the user's machine and is decrypted on the client side. Perhaps require users to be on open-source browsers for extra security.
Maybe also add some post-quantum thing like what Signal uses so that we don't all die when quantum computers get good enough.
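To make the core flow concrete, here is a rough sketch (not a spec; the function names are just illustrative, the library choice is one option among many, and the post-quantum layer is left out) of the client-side hybrid encryption using Python's `cryptography` package:

```python
# Rough sketch (illustrative only): the post body is encrypted once with a
# fresh symmetric (Fernet) key, and that key is wrapped with each allowed
# reader's 4096-bit RSA public key via OAEP. The server would only ever
# store `encrypted_post` and `wrapped_keys`.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_post(post: bytes, reader_public_keys):
    post_key = Fernet.generate_key()                  # fresh key per post
    encrypted_post = Fernet(post_key).encrypt(post)   # encrypt the body
    wrapped_keys = [pk.encrypt(post_key, OAEP) for pk in reader_public_keys]
    return encrypted_post, wrapped_keys               # all the server sees

def decrypt_post(encrypted_post: bytes, wrapped_keys, private_key):
    for wrapped in wrapped_keys:                      # find the copy wrapped for us
        try:
            post_key = private_key.decrypt(wrapped, OAEP)
            return Fernet(post_key).decrypt(encrypted_post)
        except Exception:
            continue
    raise ValueError("this post was not shared with this key")

# Example with two readers:
alice = rsa.generate_private_key(public_exponent=65537, key_size=4096)
bob = rsa.generate_private_key(public_exponent=65537, key_size=4096)
ct, keys = encrypt_post(b"potentially exfohazardous idea",
                        [alice.public_key(), bob.public_key()])
print(decrypt_post(ct, keys, bob))
```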
Should I build this?
Is there someone else here more experienced with csec who should build this instead?
Is this a massive exfohazard? Should this have been published?
Yikes, I'm not even comfortable maximizing my own CEV.
What do you think of this post by Tammy?
Where is the longer version of this? I do want to read it. :)
Well perhaps I should write it :)
Specifically, what is it about the human ancestral environment that made us irrational, and why wouldn't RL environments for AI cause the same or perhaps a different set of irrationalities?
Mostly that thing where we had a lying vs lie-detecting arms race and the liars mostly won by believing their own lies and that's how we have things like overconfidence bias and self-serving bias and a whole bunch of other biases. I think Yudkowsky and/or Hanson has written about this.
Unless we do a very stupid thing like reading the AI's thoughts and RL-punish wrongthink, this seems very unlikely to happen.
If we give the AI no reason to self-deceive, the natural instrumentally convergent incentive is to not self-deceive, so it won't self-deceive.
Again, though, I'm not super confident in this. Deep deception or similar could really screw us over.
Also, how does RL fit into QACI? Can you point me to where this is discussed?
I have no idea how Tammy plans to "train" the inner-aligned singleton on which QACI is implemented, but I think it will be closer to RL than SL in the ways that matter here.
But we could have said the same thing of SBF, before the disaster happened.
I would honestly be pretty comfortable with maximizing SBF's CEV.
Please explain your thinking behind this?
TLDR: Humans can be powerful and overconfident. I think this is the main source of human evil. I also think this is unlikely to naturally be learned by RL in environments that don't incentivize irrationality (like ours did).
Sorry if I was unclear there.
It's not, because some moral theories are not compatible with EU maximization.
I'm pretty confident that my values satisfy the VNM axioms, so those moral theories are almost definitely wrong.
And I think this uncertainty problem can be solved by forcing utility bounds.
I'm 60% confident that SBF and Mao Zedong (and just about everyone) would converge to nearly the same values (which we call "human values") if they were rational enough and had good enough decision theory.
If I'm wrong, (1) is a huge problem and the only surefire way to solve it is to actually be the human whose values get extrapolated. Luckily the de-facto nominees for this position are alignment researchers, who pretty strongly self-select for having cosmopolitan altruistic values.
I think (2) is a very human problem. Due to very weird selection pressure, humans ended up really smart but also really irrational. I think most human evil is caused by a combination of overconfidence wrt our own values and lack of knowledge of things like the unilateralist's curse. An AGI (at least, one that comes from something like RL rather than being conjured in a simulation or something else weird) will probably end up with a way higher rationality:intelligence ratio, and so it will be much less likely to destroy everything we value than an empowered human. (Also 60% confident. I would not want to stake the fate of the universe on this claim)
I agree that moral uncertainty is a very hard problem, but I don't think we humans can do any better on it than an ASI. As long as we give it the right pointer, I think it will handle the rest much better than any human could. Decision theory is a bit different, since you have to put that into the utility function. Dealing with moral uncertainty is just part of expected utility maximization.
To solve (2), I think we should try to adapt something like the Hippocratic principle to work for QACI, without requiring direct reference to a human's values and beliefs (the sidestepping of which is QACI's big advantage over PreDCA). I wonder if Tammy has thought about this.
What about the following:
My utility function is pretty much just my own happiness (in a fun-theoretic rather than purely hedonistic sense). However, my decision theory is updateless with respect to which sentient being I ended up as, so once you factor that in, I'm a multiverse-wide realityfluid-weighted average utilitarian.
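Spelled out (a rough formalization of the above, nothing rigorous): if $\mu_i$ is the realityfluid weight on my having ended up as sentient being $i$, and $h_i$ is that being's happiness, then updatelessly maximizing my own expected happiness means maximizing $\sum_i \mu_i h_i$, which is just a realityfluid-weighted average of everyone's happiness.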
I'm not sure how correct this is, but it's possible.
Edit log:
2024-04-30 19:31 CST: Footnote formatting fix and minor grammar fix.
20:40 CST: "The problem is..." --> "Alignment is..."
22:17 CST: Title changed from "All we need is a pointer" to "The formal goal is a pointer"
OpenAI is not evil. They are just defecting on an epistemic prisoner's dilemma.
Maybe some kind of simulated long-reflection type thing like QACI where "doing philosophy" basically becomes "predicting how humans would do philosophy if given lots of time and resources"
Yes, amount of utopiastuff across all worlds remains constant, or possibly even decreases! But I don't think amount-of-utopiastuff is the thing I want to maximize. I'd love to live in a universe that's 10% utopia and 90% paperclips! I much prefer that to a 90% chance of extinction and a 10% chance of full-utopia. It's like insurance. Expected money goes down, but expected utility goes up.
Decision theory does not imply that we get to have nice things, but (I think) it does imply that we get to hedge our insane all-or-nothing gambles for nice things, and redistribute the nice things across more worlds.
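A toy calculation of the insurance point (numbers made up; the concave utility function is just an illustrative choice):

```python
import math

# Toy numbers, purely illustrative: concave utility in "utopiastuff",
# u(x) = sqrt(x), so u(0) = 0 and u(1) = 1.
u = math.sqrt

# All-or-nothing gamble: 10% chance of full utopia, 90% chance of extinction.
print(0.10 * 1.0, 0.10 * u(1.0))   # expected stuff 0.10, expected utility 0.10

# Guaranteed 10%-utopia, 90%-paperclips world: same expected stuff, more utility.
print(0.10, u(0.10))               # expected stuff 0.10, expected utility ~0.32

# Even a guaranteed 8%-utopia world beats the gamble: expected stuff goes
# down, but expected utility still goes up (the "insurance" trade).
print(0.08, u(0.08))               # expected stuff 0.08, expected utility ~0.28
```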
I think this is only true if we are giving the AI a formal goal to explicitly maximize, rather than training the AI haphazardly and giving it a clusterfuck of shards. It seems plausible that our FAI would be formal-goal aligned, but it seems like UAI would be more like us unaligned humans—a clusterfuck of shards. Formal-goal AI needs the decision theory "programmed into" its formal goal, but clusterfuck-shard AI will come up with decision theory on its own after it ascends to superintelligence and makes itself coherent. It seems likely that such a UAI would end up implementing LDT, or at least something that allows for acausal trade across the Everett branches.
Fixed it! Thanks! It is very confusing that half the time people talk about loss functions and the other half of the time they talk about utility functions
Solution to 8, implemented in Python with zero self-reference; you can replace f with code for any function of the string x (escaping characters as necessary):
f="x+'\\n'+x"
def ff(x):
    return eval(f)
(lambda s : print(ff('f='+chr(34)+f+chr(34)+chr(10)+'def ff(x):'+chr(10)+chr(9)+'return eval(f)'+chr(10)+s+'('+chr(34)+s+chr(34)+')')))("(lambda s : print(ff('f='+chr(34)+f+chr(34)+chr(10)+'def ff(x):'+chr(10)+chr(9)+'return eval(f)'+chr(10)+s+'('+chr(34)+s+chr(34)+')')))")
edit: fixed spoiler tags