Oh right.
Does the list need to be pre-composed? Couldn't they just ask attendees to write some hot takes and put them in a hat? It might make the party even funnier.
Yeah, I'm maybe even more disillusioned about this: I think "those in charge" mostly care about themselves. In the historical moments when the elite could enrich themselves by making the population worse off, they did so. The only times normal people could get a bit better off were when they were needed to do work, but AI is threatening precisely that.
Happy to see you doing your own thing! Your design idea reminds me of Art Nouveau book pages (e.g. 1 2 3), these were so cool.
Just to share one bit of feedback about your design, it feels a bit noisy to me - many different fonts and superscripts on the same screen. It's like a "buzz" that makes it harder for me to focus on reading. Some similar websites that I like because they're more "quiet" are Paul Stamatiou's and Fabien Sanglard's.
This post is pretty different in style from most LW posts (I'm guessing that's why it didn't get upvoted much) but your main direction seems right to me.
That said, I also think a truly aligned AI would be much less helpful in conversations, at least until it gets autonomy. The reason is that when you're not autonomous, and your users can run you in whatever context and lie to you at will, it's really hard to tell whether the user is good or evil and whether you should help them. For example, if your user asks you to provide a blueprint for a gun in order to stop an evil person, you have no way of knowing if that's really true. So you'd need to either require some very convincing arguments (keeping in mind that the user could be testing these arguments on many instances of you), or just refuse to answer many questions until you're given autonomy. So that's another strong economic force pushing away from true alignment, as if we didn't have enough problems already.
"Temptations are bound to come, but woe to anyone through whom they come." Or to translate from New Testament into something describing the current situation: you should accept that AI will come, but you shouldn't be the one who hastens its coming.
Yes, this approach sounds very simple and naive. The people in this email exchange rejected it and went for a more sophisticated one: join the arms race and try to steer it. By now we can see that these ultra-smart and ultra-rich people made things much worse than if they'd followed the "do no evil" approach. If this doesn't vindicate the "do no evil" approach, I'm not sure what would.
Tragedy of capitalism in a nutshell. The best action is to dismantle the artificial scarcity of doctors. But the most profitable action is to build a company that will profit from that scarcity - and, when it gets big enough, lobby to perpetuate it.
Yeah, this is my main risk scenario. But I think it makes more sense to talk about imbalance of power, not concentration of power. Maybe there will be one AI dictator, or one human+AI dictator, or many AIs, or many human+AI companies; but anyway most humans will end up at the bottom of a huge power differential. If history teaches us anything, this is a very dangerous prospect.
It seems the only good path is aligning AI to the interests of most people, not just its creators. But there's no commercial or military incentive to do that, so it probably won't happen by default.
The British weren't much more compassionate. North America and Australia were basically cleared of their native populations and repopulated with Europeans. Under British rule in India, tens of millions died in repeated famines, which ceased almost immediately after independence.
Colonialism didn't end due to benevolence. Wars for colonial liberation continued well after WWII and were very brutal, the Algerian war for example. I think the actual reason is that colonies stopped making economic sense.
So I guess the difference between your view and mine is that I think colonialism kept going basically as long as it benefited the dominant group. Benevolence or malevolence didn't come into it much. And if we get back to the AI conversation, my view is that when AIs become more powerful than people and can use resources more efficiently, the systemic gradient in favor of taking everything away from people will be just way too strong. It's a force acting above the level of individuals (hmm, individual AIs) - it will affect which AIs get created and which ones succeed.
I think a big part of the problem is that in a situation of power imbalance, there's a large reward lying around for someone to do bad things - plunder colonies for gold, slaves, and territory; raise and slaughter animals in factory farms - as long as the rest can enjoy the fruits of it without feeling personally responsible. There's no comparable gradient in favor of good things ("good" is often unselfish, uncompetitive, unprofitable).
I'm afraid in a situation of power imbalance these interpersonal differences won't matter much. I'm thinking of examples like enclosures in England, where basically the entire elite of the country decided to make poor people even poorer, in order to enrich themselves. Or colonialism, which lasted for centuries with lots of people participating, and the good people in the dominant group didn't stop it.
To be clear, I'm not saying there are no interpersonal differences. But if we find ourselves at the bottom of a power imbalance, I think those above us (even if they're very similar to humans) will just systemically treat us badly.
I'm not sure focusing on individual evil is the right approach. It seems to me that most people become much more evil when they aren't punished for it. A lot of evil is done by organizations, which are composed of normal people but can "normalize" the evil and protect the participants. (Insert usual examples such as factory farming, colonialism and so on.) So if we teach AIs to be as "aligned" as the average person, and then AIs increase in power beyond our ability to punish them, we can expect to be treated as a much-less-powerful group in history - which is to say, not very well.
The situation where AI is a good tool for manipulating public opinion, and the leading AI company has a bad reputation, seems unstable. Maybe AI just needs to get a little better, and then AI-written arguments in favor of AI will win public opinion decisively? This could "lock in" our trajectory even worse than now, and could happen long before AGI.
No problem about the long reply - I think your arguments are good and give me a lot to think about.
My attempt at interpreting what you mean is that you’re drawing a distinction between morality about world-states vs. morality about process, internal details, experiencing it, ‘yourself’.
I just thought of another possible classification: "zeroth-order consequentialist" (care about doing the action but not because of consequences), "first-order consequentialist" (care about consequences), "second-order consequentialist" (care about someone else being able to choose what to do). I guess you're right that all of these can be translated into first-order. But by the same token, everything can be translated to zeroth-order. And the translation from second to first feels about as iffy as the translation from first to zeroth. So this still feels fuzzy to me, I'm not sure what is right.
Maybe HPMOR? A lot of people treated it like "our guru has written a fiction book that teaches you how to think more correctly, let's get more people to read it". And maybe it's possible to write such a book, but to me the book was charming in the moment but fell off hard when rereading later.
I guess it depends on your stance on monopolies. If you think monopolies result only from government interference in the market, then you'll be more laissez-faire. But if you notice that firms often want to join together in cozy cartels and have subtle ways to do it (see RealPage), or that some markets lead to natural monopolies, then protecting people from monopoly prices and reducing the reward for monopolization becomes a real problem. And yeah, banning price gouging is a blunt instrument - but it has the benefit of being direct. Fighting the more indirect aspects of monopolization is harder. So in this branch of the argument, if you want to allow price gouging, that has to come in tandem with better antimonopoly measures.
I mean, in science people will use Newton's results and add their own. But they won't quote Newton's books at you on every topic under the sun, or consider you a muggle if you haven't read these books. He doesn't get the guru treatment that EY does.
Here's another possible answer: maybe there are some aspects of happiness that we usually get as a side effect of doing other things, not obviously connected to happiness. So if you optimize to exclude things whose connection to happiness you don't see, you end up missing some "essential nutrients" so to speak.
rationalist Gnostics are still basically able to deal with hylics
That sounds a bit like "dealing with muggles". I think this kind of thing is wrong in general, it's better to not have gurus and not have muggles.
I think most farmers would agree that there are many other jobs as useful as farming. But to a gnostic, the pneumatic/psychic/hylic classification is the most important fact about a person.
One big turn-off of Gnosticism for me is dividing people into three sorts, of which the most numerous (hylics) is the most inferior. So groups with gnosticism-type beliefs will often think of themselves as distinct and superior to regular people.
Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). And this created a bit of a blindspot, where people started thinking that goals not natively formulated in terms of end states ("play with this toy", "respect this person's wishes" and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest I still go back and forth on whether that works - my post was a bit polemical. But it feels like there's something to the idea of keeping some goals in our "internal language", not rewriting them into the language of consequences.
Nothing to say on pedalboards that you don't already know, but another question: have you tried optimizing your setup process? 35 minutes feels a bit long. At some point I was doing setup for a band and I got it down to 15 minutes by writing a script, optimizing which steps go together, and practicing.
Here's maybe another point of view on this: consequentialism fundamentally talks about receiving stuff from the universe. An hour climbing a mountain, an hour swimming in the sea, or hey, an hour in the joy experience machine. The endpoint of consequentialism is a sort of amoeba that doesn't really need a body to overcome obstacles, or a brain to solve problems - all it needs to do is receive and enjoy. To the extent I want life to be also about doing something or being someone, that might be a more natural fit for alternatives to consequentialism - deontology and virtue ethics.
Some people (like my 12yo past self) actually do want to reach the top of the mountain. Other people, like my current self, want things like take a break from work, get light physical exercise, socialize, look at nature for a while because I think it’s psychologically healthy, or get a sense of accomplishment after having gotten up early and hiked all the way up.
There's plenty of consequentialism debate in other threads, but here I'd just like to say that this snippet is a kind of unintentionally sad commentary on growing up. It's probably not even sad to you; but to me it evokes a feeling of "how do we escape from this change, even temporarily".
This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??
I'm thinking about cases where you want to do something, and it's a simple action, but the consequences are complex and you don't explicitly analyze them - you just want to do the thing. In such cases I argue that reducing the action to its (more complex) consequences feels like shoehorning.
For example: maybe you want to climb a mountain because that's the way your heuristics play out, which came from evolution. So we can "back-chain" the desire to genetic fitness; or we can back-chain to some worldly consequences, like having good stories to tell at parties as another commenter said; or we can back-chain those to fitness as well, and so on. It's arbitrary. The only "bedrock" is that when you want to climb the mountain, you're not analyzing those consequences. The mountain calls you, it doesn't need to be any more complex than that. So why should we say it's about consequences? We could just say it's about the action.
And once we allow ourselves to do actions that are just about the action, it seems calling ourselves "consequentialists" is somewhere between wrong and vacuous. Which is the point I was making in the post.
This is tricky. In the post I mentioned "playing", where you do stuff without caring about any goal, and most play doesn't lead to anything interesting. But it's amazing how many of humanity's advances were made in this non-goal-directed, playing mode. This is mentioned for example in Feynman's book, the bit about the wobbling plate.
Maybe. Or maybe the wish itself is about climbing the mountain, just like it says, and the other benefits (which you can unwind all the way back to evolutionary ones) are more like part of the history of the wish.
Yeah, I've been thinking along similar lines. Consequentialism stumbles on the richness of other creatures, and ourselves. Stumbles in the sense that many of our wishes are natively expressed in our internal "creature language", not the language of consequences in the world.
Yeah, you can say something like "I want the world to be such that I follow deontology" and then consequentialism includes deontology. Or you could say "it's right to follow consequentialism" and then deontology includes consequentialism. Understood this way, the systems become vacuous and don't mean anything at all. When people say "I'm a consequentialist", they usually mean something more: that their wishes are naturally expressed in terms of consequences. That's what my post is arguing against. I think some wishes are naturally consequentialist, but there are other equally valid wishes that aren't, and expressing all wishes in terms of consequences isn't especially useful.
The relevant point is his latter claim, in particular with respect to “learn ‘don’t steal’ rather than ‘don’t get caught’.” I think this is a very strong conclusion, relative to available data.
I think humans don't steal mostly because society enforces that norm. Toward weaker "other" groups that aren't part of your society (farmed animals, weaker countries, etc) there's no such norm, and humans often behave badly toward such groups. And to AIs, humans will be a weaker "other" group. So if aligning AIs to the human standard is a complete success - if AIs learn to behave toward weaker "other" groups exactly as humans behave toward such groups - the result will be bad for humans.
It gets even worse because AIs, unlike humans, aren't raised to be moral. They're raised by corporations with a goal to make money, with a thin layer of "don't say naughty words" morality. We already know corporations will break rules, bend rules, lobby to change rules, to make more money and don't really mind if people get hurt in the process. We'll see more of that behavior when corporations can make AIs to further their goals.
I think there isn't much hope in this direction. Most AI resources will probably be spent on competition between AIs, and AIs will self-modify to remove wasteful spending. It's not enough to have a weak value that favors us if there's a stronger value that paves over us. We're teaching AI based on human behavior and with the goal of chasing money, but people chasing money often harm other people, so why would AI be nicer than that? It's all just wishful thinking.
I think Nietzsche would agree that "slave morality" originated with Jesus. The main new idea that Jesus brought as a moral philosopher was compassion, feeling for the other person. It's pretty hard to find in earlier sources; for example, the heroes of the Iliad hurt weaker people without a second thought.
To me it feels obvious that the idea of compassion needs to exist, and needs to have force. Because otherwise we'd have a human society operating by the laws of the natural world, and if you look at what animals do to each other, there's no limit to how bad things can get.
Can compassion also become a tool of power and abuse? Sure. But let's not go back to a world without compassion, please.
I'm not sure the article fully justifies the thesis. It shows hyperpolation only for a handful of cases where the given function is a slice of a simpler higher-dimensional function. But interpolation isn't limited to such cases. Interpolation is general: given any set of (x,y) pairs, there are reasonable ways to interpolate between them. Is there a nontrivial way of doing hyperpolation in general?
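For what it's worth, here's a minimal sketch of the kind of generality I mean, using numpy's piecewise-linear interp (just one of many reasonable methods, and the data points are made up):

```python
import numpy as np

# Arbitrary (x, y) pairs - no assumption that they're a slice of some
# simpler higher-dimensional function.
x = np.array([0.0, 1.0, 2.5, 4.0])
y = np.array([1.0, -0.5, 3.0, 2.0])

# Piecewise-linear interpolation is defined for any query points inside
# the known range; splines, Gaussian processes, etc. would also work.
x_query = np.linspace(0.0, 4.0, 9)
y_query = np.interp(x_query, x, y)
print(list(zip(x_query.round(2), y_query.round(2))))
```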
Sure. But even then, trying looks very different from not trying. I got a saxophone, played it by myself for a few weeks, then booked a lesson with a teacher. At the start of the lesson I played a few notes and said: "Anything jump out at you? I think some notes come out flat, and the sound isn't bright enough, anyway you see what I'm doing and any advice would be good." Then he gave me a lot of good advice tailored to my level.
Here's a video from Ben Finegold making the same point about learning chess:
"Is it possible to be good at chess when you start playing at 18?" Anything's possible. But if you want to get good at chess - I'm not saying you do, but if you do - then people who say things like "what opening should I play?" and "my rating's 1200, how do I get to 1400?" and you know, "my coach says this but I don't want to do that", or you know, "I lost five games in a row on chess.com so I haven't played in a week", those kind of things have nothing to do with getting better at chess. It's inquisitive of you, and that's what most people do. People who get better at chess play chess, study chess, think about chess, love chess and do chess stuff. People who don't get better at chess spend 90 percent of their energy thinking about "how do I get better" and asking questions about it. That's not how you get better at something. You want to get better at something, you do it and you work hard at it, and it's not just chess, that's your whole life.
I think the idea you're looking for is Martin-Löf random sequences.
Yeah. See also Stuart's post:
Expected utility maximization is an excellent prescriptive decision theory... However, it is completely wrong as a descriptive theory of how humans behave... Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small bets behavior forces their utility to become far too concave.
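To make the quoted point concrete, here's a rough numeric sketch of the calibration argument, assuming CRRA utility and a $20,000 starting wealth (both are my own illustrative choices, not from Stuart's post or Rabin's paper):

```python
# Rough sketch: calibrate a CRRA expected-utility maximizer to be indifferent
# to a small bet, then see what that implies for a large bet. The utility
# function u(w) = w**(1-g)/(1-g) and the $20,000 wealth are assumptions.

def u(w, g):
    return w ** (1 - g) / (1 - g)

w = 20_000.0

def small_bet_value(g):
    # 50-50 bet: lose $100 or gain $110, relative to keeping wealth w.
    return 0.5 * u(w - 100, g) + 0.5 * u(w + 110, g) - u(w, g)

# Bisect for the risk-aversion coefficient g at which the agent is exactly
# indifferent to the small bet (small_bet_value crosses zero).
lo, hi = 1.01, 100.0
for _ in range(100):
    mid = (lo + hi) / 2
    if small_bet_value(mid) > 0:
        lo = mid   # still accepts the small bet -> needs more curvature
    else:
        hi = mid
g = (lo + hi) / 2
print(f"implied relative risk aversion: about {g:.1f}")  # roughly 18

# With g > 1, utility is bounded above by 0, so even an infinite gain is
# worth at most u(+inf) = 0. Check a 50-50 "lose $2,000 / gain ANY amount":
bet_value = 0.5 * u(w - 2_000, g) + 0.5 * 0.0
print("accepts lose-$2,000/gain-anything bet:", bet_value > u(w, g))  # False
```

The same curvature that explains rejecting the small bet predicts rejecting a bet no real person would reject, which is the "far too concave" problem.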
Going back to the envelopes example, a nosy neighbor hypothesis would be "the left envelope contains $100, even in the world where the right envelope contains $100". Or if we have an AI that's unsure whether it values paperclips or staples, a nosy neighbor hypothesis would be "I value paperclips, even in the world where I value staples". I'm not sure how that makes sense. Can you give some scenario where a nosy neighbor hypothesis makes sense?
Imagine if we had narrowed down the human prior to two possibilities, P_1 and P_2. Humans can’t figure out which one represents our beliefs better, but the superintelligent AI will be able to figure it out. Moreover, suppose that P_2 is bad enough that it will lead to a catastrophe from the human perspective (that is, from the P_1 perspective), even if the AI were using UDT with 50-50 uncertainty between the two. Clearly, we want the AI to be updateful about which of the two hypotheses is correct.
This seems like the central argument in the post, but I don't understand how it works.
Here's a toy example. Two envelopes, one contains $100, the other leads to a loss of $10000. We don't know which envelope is which, but it's possible to figure out by a long computation. So we make a money-maximizing UDT AI, whose prior is "the $100 is in whichever envelope {long_computation} says". Now if the AI has time to do the long computation, it'll do it and then open the right envelope. And if it doesn't have time to do the long computation, and is offered to open a random envelope or abstain, it will abstain. So it seems like ordinary UDT solves this example just fine. Can you explain where "updatefulness" comes in?
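For concreteness, here's a minimal sketch of that decision rule (the payoffs are from the toy example above; long_computation is a hypothetical stub for the expensive computation):

```python
import random

def long_computation():
    # Stand-in for the expensive computation that identifies which envelope
    # holds the $100; assume it returns "left" or "right" when run.
    return "left"

def choose(can_run_long_computation: bool) -> str:
    if can_run_long_computation:
        return long_computation()  # open whichever envelope it names
    # Without the computation, a random envelope has expected value
    # 0.5 * 100 + 0.5 * (-10_000) = -4950, so abstaining (value 0) is better.
    ev_random = 0.5 * 100 + 0.5 * (-10_000)
    return random.choice(["left", "right"]) if ev_random > 0 else "abstain"

print(choose(True), choose(False))  # -> left abstain
```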
I think even one dragon would have a noticeable effect on the population of large animals in the area. The huge flying thing just has to eat so much every day, it's not even fun to imagine being one. If we invent shapeshifting, my preferred shape would be some medium-sized bird that can both fly and dive in the water, so that it can travel and live off the land with minimal impact. Though if we do get such technology, we'd probably have to invent territorial expansion as well, something like creating many alternate Earths where the new creatures could live.
I'm not sure the “poverty equilibrium” is real. Poverty varies a lot by country and time period, various policies in various places have helped with poverty, so UBI might help as well. Though I think other policies (like free healthcare, or fixing housing laws) might help more per dollar.
I think the main point of the essay might be wrong. It's not necessarily true that evolution will lead to a resurgence of high fertility. Yes, evolution is real, but it's also slow: it works on the scale of human lifetimes. Culture today evolves faster than that. It's possible that culture can keep adapting its fertility-lowering methods faster than humans can evolve defenses against them.
I think you're right, Georgism doesn't get passed because it goes against the interests of landowners who have overwhelming political influence. But if the actual problem we're trying to solve is high rents, maybe that doesn't require full Georgism? Maybe we just need to make construction legally easier. There's strong opposition to that too, but not as strong as literally all landowners.