The Up-Goer Five Game: Explaining hard ideas with simple words
post by Rob Bensinger (RobbBB) · 2013-09-05T05:54:16.443Z · LW · GW · Legacy · 82 comments
xkcd's Up-Goer Five comic gave technical specifications for the Saturn V rocket using only the 1,000 most common words in the English language.
This seemed to me and Briénne to be a really fun exercise, both for tabooing one's words and for communicating difficult concepts to laypeople. So why not make a game out of it? Pick any tough, important, or interesting argument or idea, and use this text editor to try to describe what you have in mind with extremely common words only.
This is challenging, so if you almost succeed and want to share your results, you can mark words where you had to cheat in *italics*. Bonus points if your explanation is actually useful for gaining a deeper understanding of the idea, or for teaching it, in the spirit of Gödel's Second Incompleteness Theorem Explained in Words of One Syllable.
As an example, here's my attempt to capture the five theses using only top-thousand words:
- Intelligence explosion: If we make a computer that is good at doing hard things in lots of different situations without using much stuff up, it may be able to help us build better computers. Since computers are faster than humans, pretty soon the computer would probably be doing most of the work of making new and better computers. We would have a hard time controlling or understanding what was happening as the new computers got faster and grew more and more parts. By the time these computers ran out of ways to quickly and easily make better computers, the best computers would have already become much much better than humans at controlling what happens.
- Orthogonality: Different computers, and different minds as a whole, can want very different things. They can want things that are very good for humans, or very bad, or anything in between. We can be pretty sure that strong computers won't think like humans, and most possible computers won't try to change the world in the way a human would.
- Convergent instrumental goals: Although most possible minds want different things, they need a lot of the same things to get what they want. A computer and a human might want things that in the long run have nothing to do with each other, but have to fight for the same share of stuff first to get those different things.
- Complexity of value: It would take a huge number of parts, all put together in just the right way, to build a computer that does all the things humans want it to (and none of the things humans don't want it to).
- Fragility of value: If we get a few of those parts a little bit wrong, the computer will probably make only bad things happen from then on. We need almost everything we want to happen, or we won't have any fun.
If you make a really strong computer and it is not very nice, you will not go to space today.
Other ideas to start with: agent, akrasia, Bayes' theorem, Bayesianism, CFAR, cognitive bias, consequentialism, deontology, effective altruism, Everett-style ('Many Worlds') interpretations of quantum mechanics, entropy, evolution, the Great Reductionist Thesis, halting problem, humanism, law of nature, LessWrong, logic, mathematics, the measurement problem, MIRI, Newcomb's problem, Newton's laws of motion, optimization, Pascal's wager, philosophy, preference, proof, rationality, religion, science, Shannon information, signaling, the simulation argument, singularity, sociopathy, the supernatural, superposition, time, timeless decision theory, transfinite numbers, Turing machine, utilitarianism, validity and soundness, virtue ethics, VNM-utility
82 comments, sorted by top scores.
comment by fubarobfusco · 2013-09-06T04:47:15.503Z · LW(p) · GW(p)
Prisoner's Dilemma
You and I play a game.
We each want to get a big score.
But we do not care if the other person gets a big score.
On each turn, we can each choose to Give or Take.
If I Give, you get three points. If I Take, I get one point.
If you Give, I get three points. If you Take, you get one point.
If we both Give, we both will have three points.
If we both Take, we both will have one point.
If you Give and I Take, then I will have four points and you will have no points.
If you Take and I Give, then I will have no points and you will have four points.
We would both like it if the other person would Give, because then we get more points.
But we would both like to Take, because then we get more points.
I would like it better if we both Give than if we both Take.
But if I think you will Give, then I would like to Take so that I can get more points.
It is worst for me if I Give and you Take.
And you think just the same thing I do — except with "you" and "me" switched.
If we play over and over again, then each of us can think about what the other has done before.
We can choose whether to Give or Take by thinking about what the other person has done before.
If you always Give — no matter what I do — then I may as well Take.
But if you always Take, then I also may as well Take!
(And again, you think just the same that I do.)
Why would either of us ever Give? Taking always gets more points.
But we always want the other to Give!
And if we both Give, that is better for both of us than if we both Take.
Is there any way that we could both choose to Give, and not be scared that the other will just Take instead?
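(Stepping outside the simple words for a moment: here is a minimal sketch of the game above in Python. The point values are exactly the ones in this comment; everything else, including the names, is just illustration.)

```python
# Each player chooses "Give" or "Take". Giving hands the OTHER player
# three points; Taking keeps one point for yourself.
def score(my_move, your_move):
    points = 0
    if my_move == "Take":
        points += 1      # I keep one point for myself
    if your_move == "Give":
        points += 3      # you hand me three points
    return points

# The four outcomes described above:
for me in ("Give", "Take"):
    for you in ("Give", "Take"):
        print(me, you, "->", score(me, you), "for me,", score(you, me), "for you")
# Give Give -> 3 for me, 3 for you
# Give Take -> 0 for me, 4 for you
# Take Give -> 4 for me, 0 for you
# Take Take -> 1 for me, 1 for you
```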
↑ comment by Roxolan · 2013-09-06T12:52:44.824Z · LW(p) · GW(p)
It would help if I knew that you and I think exactly the same way.
If this is true, then when I decide to Give, I know you will Give too.
↑ comment by Pentashagon · 2013-09-06T16:03:33.313Z · LW(p) · GW(p)
Also, if we just know how each other thinks (we don't have to think the same) and I can show for sure that you will Give to me if I Give to you and that you will Take from me if I Take from you, then I will Give to you.
comment by daenerys · 2013-09-06T00:33:48.769Z · LW(p) · GW(p)
Effective Altruism
"Doing Good in the Most Helping Way"- It is good to try to help people. It is better to help people in the best way possible. You should look at what actually happens when you try to help people in order to find out how well your helping worked. If we look at lots of different ways of helping people, then we can find out which way is best. You should give your money to the people who are best at helping people.
Where we live, and in places like it, everyone has lots more money than most people who live in other places. That means we have lots that we can give away to the people in the other places. It might be a good idea to try to make lots of money so that you can give away even more!
↑ comment by [deleted] · 2013-09-07T15:35:37.209Z · LW(p) · GW(p)
It is good to try to help people.
Hi, I'm new to LessWrong and haven't read the morality sequence and haven't read many arguments for effective altruism, so could you elaborate on this sentiment?
I agree with this kind of movement because intuitively it feels really good to help people and it feels really bad to know that people or animals are suffering. I think it's quite certain that there are other minds similar to mine, and that these minds are capable of the same kinds of feelings that I am. I wouldn't want other people to feel the same kind of bad feelings that I have sometimes felt, but I know there are minds who experience more than a million times the worst pain I've ever felt.
Still, there are some people, who think rationality is about always thinking about only one's own well-being, who might disagree with this. They might say that the well-being of other minds doesn't affect your mind directly, so if you don't know about it, it's irrelevant to you. Some of these people may also try to minimize the effect of natural empathy by emphasizing that the being who is suffering is different from you. They could be your enemies, or someone who is not "worth" your efforts. It's easier to cope with the suffering of an animal that belongs to a different species than with the suffering of someone in your family. Or consider people on the other side of the world who have a different skin color, whose people behave strangely, and who sometimes have violent and "primitive" habits (note, this is not what I think, but what I've heard other people say... they basically think some people are a bit like the baby-eating aliens) - is their suffering worth less? Intuitively it feels that way, because they don't belong to your tribe. Anyway, these minds are still capable of the same kind of suffering.
The question still stands: if someone is "rationally" interested only in their own well-being, and only cares about other minds to the extent that those minds affect their own mind through the natural empathy reflex, then why should they be interested in the well-being of other minds purely for its own sake? Shouldn't they only be interested in how the suffering of other minds affects their own mind?
↑ comment by Viliam_Bur · 2013-09-14T14:26:36.018Z · LW(p) · GW(p)
Hi, welcome to LW! I will reply to your comments here in one place, instead of each of them separately.
Do you mean I should only post there until I mature enough that I can post here?
No. It is okay to ask, and it is also okay to disagree. Just choose a proper place. For example, this is an article about "explaining hard ideas with simple words", so this discussion is quickly getting off-topic here. You are not speaking about explaining effective altruism using simple words, but about misconceptions people have about rationality and altruism. That's a different topic; and now the whole comment tree is unrelated to the original article.
Don't worry, it happens. Just know that the proper place to ask questions like this is usually the latest Open Thread, and sometimes there is a special thread (like the "stupid" questions) for that. (I would say this is actually the website's fault, for not making the Open Thread more visible.)
if someone is "rationally" interested in one's own well-being only
Then of course such a person will act rationally by caring only about their own well-being, and considering others only to the degree they influence this specific goal. For example, a rational sociopath. -- Sometimes we speak about paperclip maximizers, to make it more obvious (and less related to the specific details of sociopathy or whatever). For a paperclip maximizer, it is rational to maximize the number of paperclips, and to care about human suffering only as much as it can influence the number of paperclips. So for example, if people would react to their suffering by destroying paperclips, or if they would respond to the paperclip maximizer's help by building many new paperclips out of gratitude, then the paperclip maximizer could help them. The paperclip maximizer could even pretend it cares about human suffering, if that helps to maximize the number of paperclips in the future. -- But we are not trying here to sell effective altruism to paperclip maximizers, nor to sociopaths. Only to people who (a) care about the suffering of others, and (b) want to be reflectively consistent (want to care about what they would care about if they knew more, etc.).
There is this "Hollywood rationality" meme, which suggests that rational people should be sociopaths; or even should consider themselves imperfect if they aren't ones... and should feel a pressure to self-modify to become ones. I guess most people here consider this bullshit; and actually exposing the bullshitness of similar ideas is one of the missions of LW. Perhaps the simplest response is: Uhm... why? Perhaps someone had a wrong idea of rationality, and is now optimizing for that wrong idea. (See the nameless virtue.)
Essentially this would be a debate about whether people truly care about others, or whether we truly are self-deceiving sociopaths (and therefore the most rational ones should be able to see through this self-deception). What does that even mean? What does it mean for a human? There are a ton of assumptions and confusions here, so we shouldn't expect to solve this all within five minutes. (And until this is all solved, my lazy answer is that the burden of proof is on those people who suggest that self-modification into a sociopath is the best way of maximizing my values, which currently include caring for others. Because optimizing for values I don't have seems like a lost purpose.) We will not solve this fully here; perhaps at some other place.
Do you usually expect people to read all the sequences before they can ask questions?
It would be nice, because we wouldn't have to go over the basics again and again. On the other hand, it's not realistic. Perhaps the nice and realistic solution could be this: A new person asks something that sounds already-answered to the veterans. The veterans give a short explanation and links to relevant articles from the sequences. The new person reads the articles; and if there are further questions or if the original question does not seem answered sufficiently, then the new person asks additional questions in an Open Thread.
Again, if people here agree with this solution, then it probably should be a policy written in a visible place.
↑ comment by daenerys · 2013-09-07T16:27:30.880Z · LW(p) · GW(p)
Hi, I'm new to LessWrong and haven't read the morality sequence and haven't read many arguments for effective altruism, so could you elaborate on this sentiment?
How I read this: "Hi! I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists) that has already been written that answers my question, can you write a response that explains the whole of morality?"
To start off with, you seem to be using the term "rationality" to mean something completely different than what we mean when we say it. I recommend Julia Galef's Straw Vulcan talk.
↑ comment by [deleted] · 2013-09-07T16:55:38.905Z · LW(p) · GW(p)
You slightly misunderstood what I meant, but maybe that's understandable. I'm not a native English speaker and I'm quite poor at expressing myself even in my native language. You don't have to be so condescending, I was just being curious. Do you usually expect people to read all the sequences before they can ask questions? If so, I apologize because I didn't know this rule. I can come back here after a few months when I've read all the sequences.
I know exactly where to find the information I am asking for, but instead of reading the material (that I know exists)
Okay, sorry. I just wanted to be honest. I have read most of the sequences listed on the sequences page. The morality sequence is quite big, and reading it seems a daunting task because I have books related to my degree that I'm supposed to be reading, and they are more important to me at the moment. I thought there could be a quick answer to this question. But if you have any specific blog posts related to this issue in mind, please link them!
To start off with, you seem to be using the term "rationality" to mean something completely different than what we mean when we say it.
I'm aware of that. With quotation marks around the word I was signaling that I don't really think it's real rationality or the same kind of rationality LessWrong people use. I know that rationalist people don't think that way. It's just that some economic texts use the word "rationality" to mean that: a "rational" agent is only interested in his own well-being.
I recommend Julia Galef's Straw Vulcan talk.
I have read relevant blog posts on LessWrong and I think I know this concept. People think rational people are supposed to be some kind of emotional robots who don't have any feelings, and otherwise think like a modern-day computer: very mechanical, not very flexible in their thinking, etc. In reality people can use instrumental rationality to achieve the emotionally desired goals they have, or use epistemic rationality to find out what their emotionally desired goals really are?
↑ comment by Rob Bensinger (RobbBB) · 2013-09-07T19:29:15.941Z · LW(p) · GW(p)
Keep in mind that this "rationality" is just a word. Making up a word shouldn't, on its own, be enough to show that something is good or bad. If self-interest is more "rational" than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
People get very confused when they start thinking that what they actually want matters less than some piece of paper saying what they Should or Shouldn't want. Even if some made-up idea says you Shouldn't want to help others except to make yourself happy, why should that matter more to me than what I actually want, which is just to help people? This is a lot like Mr. Yudkowsky's "being sad about having to think and decide well".
↑ comment by [deleted] · 2013-09-08T11:31:44.852Z · LW(p) · GW(p)
Btw, that link is really good and it made me think a bit differently. I've sometimes envied others for their choices and thought I'm supposed to behave in a certain way that is opposite to that... but actually what matters is what I want and how I can achieve my desires, not how I'm supposed to act.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-08T16:42:58.892Z · LW(p) · GW(p)
Right! "I should..." is a means for actually making the world a better place. Don't let it hide away in its own world; make it face up to the concerns and wishes you really have.
↑ comment by [deleted] · 2013-09-08T10:52:39.204Z · LW(p) · GW(p)
If self-interest is more "rational" than helping others, then you should be able to give good reasons for that with other words that are more clear and simple.
I think the gist is that we all live inside our own bubbles of consciousness and can only observe indirectly what is inside other people's bubbles. Everything that motivates you or makes you do anything is inside that bubble. If you expand this kind of thinking, it's not really important what is inside those other bubbles, only how they affect you. But this is kinda contrived philosophy.
↑ comment by Jayson_Virissimo · 2013-09-07T20:17:35.605Z · LW(p) · GW(p)
It's just that some economic texts use the word "rationality" to mean that: a "rational" agent is only interested in his own well-being.
Which texts are you referring to? I have about a dozen and none of them define rationality in this way.
↑ comment by [deleted] · 2013-09-08T09:04:32.152Z · LW(p) · GW(p)
Okay. I was wrong. It seems I don't know enough and I should stop posting here.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-08T10:12:19.679Z · LW(p) · GW(p)
I think the problem might be confusing connotation and denotation. 'Rational self-interest' is a term because most rationality isn't self-interested, and most self-interest isn't rational. But when words congeal into a phrase like that, sometimes they can seem to be interchangeable. And it doesn't help that aynrand romanticism psychodarwinism hollywood.
↑ comment by [deleted] · 2013-09-08T10:32:50.169Z · LW(p) · GW(p)
Yep, the Ayn Rand type of literature is what originally brought this to my mind. I also read a book about economic sociology which told about the prisoner's dilemma; it said the most "rational" choice is to always betray your partner (if you only play once), and Nash was surprised when people didn't behave this way.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-08T16:22:15.440Z · LW(p) · GW(p)
That's a roughly high-school-level misunderstanding of what the Prisoner's Dilemma means, though I suppose it makes sense to be surprised that humans care about each other if you'd never met a human, and it did make sense to be confused by why humans care about each other until we recognized that (uncertainly) iterated dilemmas and kin selection were involved. I believe a great many people on LessWrong also reject the economic consensus on this issue, however; they think that two rational agents can cooperate in something like a classical PD, provided only that they have information about one another's (super)rationality. See True Prisoner's Dilemma and Decision Theory FAQ.
In the real world, most human interactions are not Prisoner's Dilemmas, because in most cases people prefer the outcome (you Cooperate, I Cooperate) to (you Cooperate, I Defect), whereas in the PD the latter must have a higher payoff.
↑ comment by [deleted] · 2013-09-08T17:52:07.701Z · LW(p) · GW(p)
This is what was said:
"It (game theory) assumes actors are more rational than they often are in reality. Even Nash faced this problem when some economists found that real subjects responded differently from Nash's prediction: they followed rules of fairness, not cold, personal calculation (Nassar 1998: 199)"
Yeah, I remember reading that some slightly generous version of tit-for-tat is the most useful tactic in the prisoner's dilemma, at least if you're playing several rounds.
↑ comment by Jayson_Virissimo · 2013-09-08T18:39:47.023Z · LW(p) · GW(p)
The reason I ask is because I have heard this claim many times, but have never encountered an actual textbook that taught it, so I'm not sure if it has any basis in reality or is just a straw man (perhaps, designed to discredit economics, or merely an honest misunderstanding of the optimization principle).
↑ comment by Nisan · 2013-09-08T04:08:15.388Z · LW(p) · GW(p)
Welcome to Less Wrong! Your comment would be more appropriate in the welcome thread.
↑ comment by [deleted] · 2013-09-08T09:02:16.748Z · LW(p) · GW(p)
I have already posted in there. Do you mean I should only post there until I mature enough that I can post here?
↑ comment by Nisan · 2013-09-08T17:18:23.915Z · LW(p) · GW(p)
Oh, ok. The open threads are a good place to ask questions. If you aren't satisfied with the response you get there, you can try here.
↑ comment by mare-of-night · 2013-09-08T13:05:10.301Z · LW(p) · GW(p)
I would say that "doing good in the most helping way" only matters if you want things to be good for other people (or animals). A person who thinks well might want things to be good for other people, or want things to be good for themselves, or want there to be lots of things to hold paper together - to think well means to do things the best way to get what they want, but not to want any one thing.
Knowing whether you want things to be good for other people, or just want things to be good for yourself but feel sad when things are bad for other people, is sort of like a different thing people think about here. Sometimes we think about if we should want a thing that makes us think we have what we want, even though we are really just sitting around with the thing on our heads. If I want to think that things are good for other people (because it will make me happy and the biggest thing I want is to be happy), then I can get what I want by changing what I think. But if what I want is for things to be good for other people (even if it does not make me happy), then the only way I can get what I want is to make things better for other people (and so I want to do good in the most helping way).
I should say, I think a little different about good from most people here. Most people here think that you can want something, but also think that it is bad. I think that if you think you want something that is bad, you are probably confused about what you want, and you would stop wanting the bad thing if you thought about it enough and felt how bad it was. I am not totally sure that I am right about this, though.
(See also: good about good)
↑ comment by [deleted] · 2013-09-08T02:21:17.761Z · LW(p) · GW(p)
The question still stands: if someone is "rationally" interested only in their own well-being, and only cares about other minds to the extent that those minds affect their own mind through the natural empathy reflex, then why should they be interested in the well-being of other minds purely for its own sake? Shouldn't they only be interested in how the suffering of other minds affects their own mind?
I don't think it's really possible to argue against this idea. If you're only interested in your own well-being, then doing things which do not increase your own well-being will not help you achieve your goal.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-07T19:18:44.703Z · LW(p) · GW(p)
But how happy or sad other minds are does change how happy or sad I am. Why would it be looking out for myself better if I ignored something that changes my life in a big way? And why should I pretend to only care about myself if I really do care about others? Or pretend to only care about how others cause changes in me, when I do in fact care about the well-being of people who don't change me?
Suppose I said to you that it's bad to care about the person you're going to be. After all, you aren't that person now. That person's thoughts and concerns are outside of the present you. And that person can't change anything for the present you.
That wouldn't be a very good reason to ignore the person I'll become. After all, I do want the person I'm going to be to be happy. I don't need to give reasons showing why I should care about myself over time. I just need to note that I do in fact care about myself over time. How is this different, in any important way that changes the reasoning above, from noting that I do in fact care about other people in their own right?
If people only cared about other people as ways to get warm good feels for themselves, then people would be happy to change themselves to get warm good feels both when others are happy and when others are sad. People also wouldn't care about people too far away to cause changes for them. But if I send a space car full of people far away from me, I still want them to be happy even after they're too far away to ever change anything for me again. That's a fact about how I am. Why should I try to change that?
↑ comment by [deleted] · 2013-09-08T10:45:15.308Z · LW(p) · GW(p)
I guess that makes sense. When people say things like "I want a lot of money", "I want to live in a fulfilling relationship", "I want to climb Mt. Everest", the essential quality of these desires is that the things desired are real and actually happen roughly the same way you picture them in your mind. No one says things like "I want to have the good feeling of living in a fulfilling relationship whether or not I actually live in one"... no. Because it's important that they're actually real. You can say the same thing about helping others - if you don't want other people to suffer, then it's important that they actually don't suffer.
That wouldn't be a very good reason to ignore the person I'll become. After all, I do want the person I'm going to be to be happy. How is this different, in any important way that changes the reasoning above, from noting that I do in fact care about other people in their own right?
It's a bit different. You will eventually become the person you are in the future, but you can never get inside the mind of someone else, at least not directly.
people would be happy to change themselves to get warm good feels both when others are happy and when others are sad.
How would you actually change yourself? It's very difficult in practice.
People also wouldn't care about people too far away to cause changes for them.
But people don't care about far away people so much as they care about people who are similar to them. When westerners get in trouble in developing countries, people make a big effort to get them to safety, and mostly ignore all the suffering that is going on around that. People send less money to people in developing countries than to, say, war veterans or people at home.
That's a fact about how I am. Why should I try to change that?
You shouldn't. I'm the same way, I try to help people for the sake of helping them. But there are some people who are only interested in their own well-being and I'm just thinking how I could argue with them.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-08T16:38:29.097Z · LW(p) · GW(p)
Because it's important that they're actually real.
Yes! I think that's a lot like what I was talking about.
You will eventually become the person you are in the future
Present-you won't. Present-you will go away and never know that will happen. You-over-time may change from present-you to you-to-come, but I wasn't talking about you-over-time.
Also, mind reading could change this some day, maybe.
How would you actually change yourself? It's very difficult in practice.
Yes, but even if it weren't possible at all, so long as we thought it was possible, whether we wished for it could say a lot about what we really want.
People send less money to people in developing countries than say, war veterans or people at home.
Yes, but that's very different from saying that people don't care about far away people at all, except in so far as they get changed by them. If it were completely easy for you to make, in a flash, the lives of everyone you'll never know about ten times as good, for free, you would want to do that.
comment by Vladimir_Nesov · 2013-09-05T13:22:29.895Z · LW(p) · GW(p)
There is a beautifully executed talk by Guy Steele where he only uses one-syllable words or words explicitly defined in the talk: Growing a Language.
↑ comment by Pablo (Pablo_Stafforini) · 2013-09-08T15:34:56.028Z · LW(p) · GW(p)
Brilliant. Here's a transcript.
↑ comment by Rob Bensinger (RobbBB) · 2013-09-06T06:53:18.719Z · LW(p) · GW(p)
This is great! Does anyone have a version that isn't all choppy?
comment by Roxolan · 2013-09-05T13:00:48.110Z · LW(p) · GW(p)
Pascal's wager: If you don't do what God says, you will go to Hell where you will be in a lot of pain until the end of time. Now, maybe God is not real, but can you really take that chance? Doing what God says isn't even that much work.
Pascal's mugging: I tell you "if you don't do what I say, something very bad will happen to you." Very bad things are probably lies, but you can't be sure. And when they get a lot worse, they only sound a little bit more like lies. So whatever I asked you to do, I can always make up a story so bad that it's safer to give in.
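(A toy model, with made-up numbers, of why the mugger can always win this argument: as the threatened harm grows, your doubt only grows a little, so the expected harm can be pushed past any fixed cost of giving in.)

```python
# Toy model of Pascal's mugging. All numbers here are invented for illustration.
cost_of_giving_in = 5                         # assumed: what the mugger demands
for claimed_harm in (10, 10**6, 10**100):
    chance_it_is_true = claimed_harm ** -0.1  # assumed: doubt grows much slower than harm
    expected_harm = chance_it_is_true * claimed_harm
    print(claimed_harm, expected_harm > cost_of_giving_in)
# -> True for every claimed harm: a big enough story always tips the scales.
```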
↑ comment by RRand · 2013-09-07T20:39:26.976Z · LW(p) · GW(p)
(With slightly more fidelity to Mr. Pascal's formulation:)
You have nothing to lose.
You have much to get. God can give you a lot.
There might be no God. But a chance to get something is better than no chance at all.
So go for it.
↑ comment by Viliam_Bur · 2013-09-14T13:18:43.355Z · LW(p) · GW(p)
Nitpicking:
You have nothing to lose.
You have little to lose. (For example, you waste your Sunday mornings. That's not nothing.)
comment by DSimon · 2013-09-06T14:27:21.871Z · LW(p) · GW(p)
Mr. Turing's Computer
Computers in the past could only do one kind of thing at a time. One computer could add some numbers together, but nothing else. Another could find the smallest of some numbers, but nothing else. You could give them different numbers to work with, but the computer would always do the same kind of thing with them.
To make the computer do something else, you had to open it up and put all its pieces back in a different way. This was very hard and slow!
So a man named Mr. Babbage thought: what if some of the numbers you gave the computer were what told it what to do? That way you could have just one computer, and you could quickly make it be a number-adding computer, or a smallest-number-finding computer, or any kind of computer you wanted, just by giving it different numbers. But although Mr. Babbage and his friend Ms. Lovelace tried very hard to make a computer like that, they could not do it.
But later a man named Mr. Turing thought up a way to make that computer. He imagined a long piece of paper with numbers written on it, and imagined a computer moving left and right along that paper and reading the numbers on it, and sometimes changing the numbers. This computer could only see one number on the paper at a time, and also only remember one thing at a time, but that was enough for the computer to know what to do next. Everyone was amazed that such a simple computer could do anything that any other computer then could do; all you had to do was put the right numbers on the paper first, and then the computer could do something different! Mr. Turing's idea was enough to let people build computers that finally acted like Mr. Babbage's and Ms. Lovelace's dream computer.
Even though Mr. Turing's computer sounds way too simple when you think about our computers today, our computers can't do anything that Mr. Turing's imagined computer can't. Our computers can look at many many numbers and remember many many things at the same time, but this only makes them faster than Mr. Turing's computer, not actually different in any important way. (Though of course being fast is very important if you want to have any fun or do any real work on a computer!)
↑ comment by fubarobfusco · 2013-09-06T17:08:06.629Z · LW(p) · GW(p)
The Halting Problem (Part One)
A plan is a list of things to do.
When a computer runs, it is doing the things that are written in a plan.
When you solve a problem like 23 × 3, you are also following a plan.
Plans are made of steps.
To follow a plan, you do what each plan step says to do, in the order they are written.
But sometimes a step can tell you to move to a different step in the plan, instead of the next one.
And sometimes it can tell you to do different things if you see something different.
It can say "Go back to step 4" ... or "If the water is not hot yet, wait two minutes, then go back to step 3."
Here is a plan:
1. Walk to the store.
2. Buy a food.
3. Come back home.
4. You're done!
Here is another plan:
1. Walk to the store.
2. Buy a food.
3. Come back home.
4. Go back to step 1.
There is something funny about the second plan!
If we started following that plan, we would never stop.
We would just keep walking to the store, buying a food, and walking back home.
Forever.
(Or until we decide it is a dumb plan and we should stop following it.
But a computer couldn't do that.)
You may have heard songs like "The Song That Never Ends" or "Ninety-Nine Bottles of Drinks on the Wall".
Both of these songs are like plans.
When you are done singing a part of the song, you follow a plan step to get to the next part.
In "The Song That Never Ends", you just go back to the beginning.
But in "Ninety-Nine Bottles of Drinks on the Wall", you take one away from the number of bottles of drinks.
And if there are no more bottles, you stop!
But if there are more bottles, you sing the next part.
Even though it is a very long song, it does have an end.
(There is a bad joke about people who write plans for computers.
It is also a joke about hair soap.
There is a plan written on the hair soap bottle:
- Put hair soap in your hair.
- Use water to get the hair soap out of your hair.
- Repeat.
The person who writes plans for computers starts washing their hair and does not know when to stop!
Normal people know that "repeat" just means "do it a second time, then stop" and not "keep doing it forever". But in a computer plan, it would mean "do it forever".)
It is nice to find out how long a plan will take for us to do.
It is really nice to know if it is one of those plans that never ends.
If it is, then we could just say, "I won't follow this plan! It never ends! It is like that dumb song or the bad joke!"
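(In everyday programming terms, the two songs are a loop with no way out and a counted loop. A small Python sketch, not in simple words:)

```python
def sing_a_part(bottles=None):
    print("(a part of the song)" if bottles is None
          else f"{bottles} bottles of drinks on the wall")

def song_that_never_ends():
    while True:          # "just go back to the beginning" -- no way to stop
        sing_a_part()

def ninety_nine_bottles():
    bottles = 99
    while bottles > 0:   # "if there are no more bottles, you stop!"
        sing_a_part(bottles)
        bottles -= 1     # "take one away from the number of bottles"
```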
↑ comment by fubarobfusco · 2013-09-07T01:20:47.309Z · LW(p) · GW(p)
The Halting Problem (Part Two)
Can we have plans for thinking about other plans? Yes, we can!
Suppose that we found a plan, and we did not know what kind of plan it is.
Maybe it is a plan for how to make a food.
Or maybe it is a plan for how to go by car to another city.
Or maybe it is a plan for how to build a house.
We don't know.
Can we have a plan for finding out?
Yes! Here is a plan for telling what kind of plan it is:
1. Get paper and a writing stick.
2. Start at the first step of the plan we are reading.
3. Read that step. (Do not do the things that step says to do! You are only reading that plan, not following it. You are following this plan.)
4. Write down all of the names of real things (like food, roads, or wood) that the step uses. Do not write down anything that is not a name of a real thing. Do not write down numbers, action words, colors, or other words like those.
5. Are there more steps in the plan we are reading? If so, go to the next step of the plan we are reading, and go back to step 3 of this plan. If not, go on to step 6 of this plan.
6. Look at the paper that we wrote things down on.
   - If most of the things on the paper are food and kitchen things, say that the plan is a plan for making food.
   - If most of the things on the paper are car things, roads, and places, say that the plan is a plan for going to a city.
   - If most of the things on the paper are wood and building things, say that the plan is a plan for building a house.
   - If most of the things on the paper are paper, writing sticks, and steps of plans, say that the plan is a plan for reading plans!
So we can have a plan for reading and thinking about other plans.
This plan is not perfect, but it is pretty good.
It can even tell if a plan is a plan for reading plans.
This is like looking in the mirror and knowing that you are seeing yourself.
It is a very interesting thing!
But .... can we make a plan for telling if a plan will end or not?
That is a hard problem.
Edited to add the parenthetical comment on step 3.
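(Here is a toy Python version of this plan-reading plan. The word lists standing in for "names of real things" are invented for the example.)

```python
# Words naming "real things", grouped by what kind of plan they suggest.
THINGS = {
    "making food":      {"food", "pan", "kitchen", "water", "store"},
    "going to a city":  {"car", "road", "city", "gas"},
    "building a house": {"wood", "nail", "wall", "roof"},
    "reading plans":    {"paper", "stick", "step", "plan"},
}

def kind_of_plan(steps):
    counts = {kind: 0 for kind in THINGS}
    for step in steps:                      # read each step (don't follow it!)
        for word in step.lower().split():
            for kind, words in THINGS.items():
                if word in words:           # write down only "real thing" words
                    counts[kind] += 1
    return max(counts, key=counts.get)      # say which kind had the most

print(kind_of_plan(["Walk to the store", "Buy a food", "Heat the pan"]))
# -> "making food"
```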
↑ comment by fubarobfusco · 2013-09-10T06:23:18.016Z · LW(p) · GW(p)
The Halting Problem (Part Three)
Let's imagine that we have a plan for reading other plans and saying if they will end.
Our imaginary plan is called E, for Ending. We want to know if a plan like E is possible.
We do not know what the steps of plan E are.
All we know is that we are imagining that we can follow plan E to read another plan and say whether it will end or not.
(We need a name for this other plan. We'll call it X.)
But wait! We know there are plans that sometimes end, and sometimes go on forever. Here is one —
Plan Z:
1. Start with a number.
2. If our number is zero, stop.
3. Make our number smaller by 1.
4. Go to step 2.
Plan Z will always stop if the number we start with is bigger than zero and is a whole number.
But if our number is one-half (or something else not whole) then Z will never end.
That is because our number will go right past zero without ever being zero.
Plan Z is not really whole by itself.
It needs something else that we give it: the number in step 1.
We can think of this number as "food" for the plan.
The "food" is something Z needs in order to go, or even to make sense.
Some food is good for you, and some is bad for you ...
... and whether Z ends or not depends on what number we feed it.
Plan Z ends if we feed it the number 1 or 42, but not if we feed it the number one-half.
And so when we ask "Will plan X end?" we really should ask "Will plan X end, if we feed F to plan X?"
So in order to follow plan E, we need to know two things: a plan X, and a something called F.
(What kind of something? Whatever kind X wants.
If X wants a number, then F is a number.
If X wants a cookie, then F is a cookie.
If X wants a plan to read, then F is a plan.)
Following E will then tell us if X-fed-with-F will end or run forever.
Now here is another plan —
Plan G:
1. Start with a plan. Call it plan X.
2. Follow plan E to read plan X, with plan X itself as the food. This will tell us if plan X, fed with plan X, will end or not.
3. Now, if E told us that X never ends, then stop!
4. But if E told us that X stops, then sing "The Song That Never Ends".
So when we follow plan G, we don't do the same thing that X does.
We do the other thing!
If X never ends, then G ends.
And if X ends, then G never ends.
But what happens if X is G?
G is a plan, and G wants a plan for its food. So we can feed G to G itself!
If we feed G to G, then G will end if G doesn't end.
And G will go on forever if G ends.
That does not make any sense at all.
Everything about that makes no sense!
It is like saying "If the cat is white, then the cat is not white."
It is really wrong!
What part is wrong, though?
G is very simple. There is nothing wrong with G.
The wrongness is in the thing that we imagined:
Plan E, the plan that can tell us if any plan will end or not.
This means that E is not really possible.
That is the part that looked like it might make sense, but really it did not.
Oh well.
It sure would be nice if E was possible.
But it is not.
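(For readers who know some programming, the same argument as a Python sketch. The function ends() plays the part of plan E; any body you write for it is doomed, which is the whole point.)

```python
def ends(plan, food):
    """Pretend-plan E: should answer True exactly when plan(food) would stop.
    Any actual body we write here must be wrong somewhere (see below)."""
    return True                  # this candidate just always guesses "it stops"

def G(plan):                     # plan G from above
    if ends(plan, plan):         # if E says "plan, fed itself, stops"...
        while True:              # ...sing "The Song That Never Ends"
            pass
    # ...and if E says it never stops, stop right away.

# Feed G to itself: our ends() said True, so G(G) would loop forever --
# the answer was wrong. A candidate answering False is wrong the other
# way around. No possible ends() gets G right, so plan E cannot exist.
print(ends(G, G))                # -> True (yet G(G) would never end)
```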
↑ comment by DSimon · 2013-09-10T14:51:00.381Z · LW(p) · GW(p)
So what is Mr. Turing's computer like? It has these parts:
- The long piece of paper. The paper has lines on it like the kind of paper you use in numbers class at school; the lines mark the paper up into small parts, and each part has only enough room for one number. Usually the paper starts out with some numbers already on it for the computer to work with.
- The head, which reads from and writes numbers onto the paper. It can only use the space on the paper that is exactly under it; if it wants to read from or write on a different place on the paper, the whole head has to move left or right to that new place first. Also, it can only move one space at a time.
- The memory. Our computers today have lots of memory, but Mr. Turing's computer has only enough memory for one thing at a time. The thing being remembered is the "state" of the computer, like a "state of mind".
- The table, which is a plan that tells the computer what to do when it is in each state. There are only so many different states that the computer might be in, and we have to put them all in the table before we run the computer, along with the next steps the computer should take when it reads different numbers in each state.
Looking closer, each line in the table has five parts, which are:
- If Our State Is this
- And The Number Under Head Is this
- Then Our Next State Will Be this (or maybe the computer just stops here)
- And The Head Should write this
- And Then The Head Should move this way
Here's a simple table (columns in the order just given):

State   Read   Next State   Write   Move
Happy   1      Happy        1       Right
Happy   2      Happy        1       Right
Happy   3      Sad          3       Right
Sad     1      Sad          2       Right
Sad     2      Sad          2       Right
Sad     3      Stop         -       -
Okay, so let's say that we have one of Mr. Turing's computers built with that table. It starts out in the Happy state, and its head is on the first number of a paper like this:
1 2 1 1 2 1 3 1 2 1 2 2 1 1 2 3
What will the paper look like after the computer is done? Try pretending you are the computer and see what you do! The answer is at the end.
So you can see now that the table is the plan for what the computer should do. But we still have not fixed Mr. Babbage's problem! To make the computer do different things, we have to open it up and change the table. Since the "table" in any real computer will be made of very many parts put together very carefully, this is not a good way to do it!
So here is the amazing part that surprised everyone: you can make a great table that can act like any other table if you give it the right numbers on the paper. Some of the numbers on the paper tell the computer about a table for adding, and the rest of the numbers are to be added. The person who made the great table did not even have to know anything about adding, as long as the person who wrote the first half of the paper does.
Our computers today have tables like this great table, and so almost everything fun or important that they do is given to them long after they are built, and it is easy to change what they do.
By the way, here is how the paper from before will look after a computer with our simple table is done with it:
1 1 1 1 1 1 3 2 2 2 2 2 2 2 2 3
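(Because the table and the starting paper are spelled out completely, the whole machine fits in a few lines of Python; running this sketch reproduces the answer above.)

```python
# The table above as a dictionary: (state, number read) -> (next state,
# number to write, which way to move). "Stop" is marked with None.
table = {
    ("Happy", 1): ("Happy", 1, "Right"),
    ("Happy", 2): ("Happy", 1, "Right"),
    ("Happy", 3): ("Sad",   3, "Right"),
    ("Sad",   1): ("Sad",   2, "Right"),
    ("Sad",   2): ("Sad",   2, "Right"),
    ("Sad",   3): None,
}

def run(table, paper, state="Happy"):
    head = 0                                  # head starts on the first number
    while 0 <= head < len(paper):
        step = table[(state, paper[head])]
        if step is None:                      # the table says to stop here
            break
        state, write, move = step
        paper[head] = write                   # the head writes...
        head += 1 if move == "Right" else -1  # ...and then moves
    return paper

paper = [1, 2, 1, 1, 2, 1, 3, 1, 2, 1, 2, 2, 1, 1, 2, 3]
print(run(table, paper))
# -> [1, 1, 1, 1, 1, 1, 3, 2, 2, 2, 2, 2, 2, 2, 2, 3]
```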
↑ comment by [deleted] · 2013-09-08T02:48:29.859Z · LW(p) · GW(p)
I'm actually surprised that Turing machines were invented before anyone ever built an actual computer.
↑ comment by bogdanb · 2013-09-09T18:47:50.172Z · LW(p) · GW(p)
I see your point (I sometimes get the same feeling), but if you think about it, it’d be much more astonishing if someone built a universal computer before having the idea of a universal computer. It’s not really common to build something much more complex than a hand ax by accident. Natural phenomena are often discovered like that, but machines are usually imagined a long time before we can actually build them.
comment by sixes_and_sevens · 2013-09-05T09:50:10.571Z · LW(p) · GW(p)
I spent the better part of November writing miniature essays in this. It's really quite addictive. My favourites:
Parallax and cepheid variables (Dead stars that flash in space)
Basic linear algebra (four-sided boxes of numbers that eat each other)
The Gold Standard (Should a bit of money be the same as a bit of sun-colored stuff that comes out of the ground?)
The Central Limit Theorem (The Middle Thing-It-Goes-To Idea-You-Can-Show-Is-True-With-Numbers - when you take lots of Middle Numbers of lots of groups, it looks like the Normal Line!)
Complex numbers ("I have just found out I can use the word 'set'. This makes me very happy.")
Utility, utilitarianism and the problems with interpersonal utility comparison ("If you can't put all your wants into this order, you have Not-Ordered Wants")
The triune brain hypothesis ("when you lie down on the Mind Doctor's couch, you are lying down next to a horse, and a green water animal with a big smile")
Arrow's Impossibility Theorem ("If every person making their mark on a piece of paper wants the Cat Party more than the Dog Party, then the Dog Party can't come out higher in the order than the Cat Party.")
The concept of "degenerate case" ("If your boyfriend or girlfriend has a different meaning for 'box' than you do, and you give them a line, not only will they be cross with you, but you will be wrong, and that is almost as bad")
The word "sublimate" ("When Dry Ice goes into the air, it is beautiful, like white smoke. There is a word for this situation, and we also use that word to talk about things that are beautiful, because they are perfect, and become white smoke without being wet first")
↑ comment by twanvl · 2013-09-05T11:00:27.896Z · LW(p) · GW(p)
The Central Limit Theorem (The Middle Thing-It-Goes-To Idea-You-Can-Show-Is-True-With-Numbers - when you take lots of Middle Numbers of lots of groups, it looks like the Normal Line!)
Does it really simplify things if you replace "limit" with "thing-it-goes-to" and theorem with "idea-you-can-show-is-true-with-numbers"? IMO this is a big problem with the up-goer five style text: you can still try to use complex concepts by combining words. And because you have to describe the concept with inadequate words, it becomes actually harder to understand what you really mean.
There are two purposes of writing simple English:
- writing for children
- writing for non-native speakers
In both cases, is "sun-colored stuff that comes out of the ground" really the way you would explain it? I would sooner say something like: "yellow is the color of the sun, it looks like . People like a shiny yellow metal called gold, because there is little of it".
I suppose the actual reason we are doing this is:
- artificially constrained writing is fun.
If your boyfriend or girlfriend has a different meaning for 'box' than you do, and you give them a line, not only will they be cross with you, but you will be wrong, and that is almost as bad
"give them a line" and "be cross with you" are expressions that make no sense with the literal interpretation of these words.
↑ comment by sixes_and_sevens · 2013-09-05T12:26:46.973Z · LW(p) · GW(p)
Using the most common 1,000 words is not really about simplifying or clarifying things. It's about imposing an arbitrary restriction on something you think you're familiar with, and seeing how you cope with it.
There are merits to doing this beyond "it's fun". When all your technical vernacular is removed, you can't hide behind terms you don't completely understand.
↑ comment by Sabiola (bbleeker) · 2013-09-06T10:32:00.409Z · LW(p) · GW(p)
In fact, I'm not sure what "give them a line" means. Give them a line like this ------------- instead of a box? From context, it could also mean 'just make something up'. (English is not my first language, in case you couldn't tell.)
**googles**
Yes, it turns out that "give someone a line" can mean "to lead someone on; to deceive someone with false talk" (or "send a person a brief note or letter", but that doesn't make sense in this context).
Still can't tell which type of line is meant.
↑ comment by sixes_and_sevens · 2013-09-08T22:42:37.128Z · LW(p) · GW(p)
I was quoting a single sentence of my mini-essay. "Give them a line" probably doesn't make much sense out of context.
The original context was that a line segment is a degenerate case of a rectangle (one with zero width). You can absolutely say a line segment is a rectangle (albeit a degenerate case of one). However, if your partner really wanted a rectangle for their birthday, and you got them a line segment, they may very well be super-pissed with you, even if you're using the same definition of "line segment" and "rectangle".
If you're not using the same definition, or even if you're simply unsure whether you're using the same definition, then when you get your rectangle-wanting partner a line segment for their birthday, not only would they be pissed with you, but you may also be factually incorrect in your assertion that the line segment is a rectangle for all salient purposes.
↑ comment by fubarobfusco · 2013-09-07T06:07:08.815Z · LW(p) · GW(p)
There are also several meanings of "box", such as:
- a package (as might be used to hold a gift)
- to punch each other for sport (as in boxing)
- a computer (in hobbyist or hacker usage)
- a quadrilateral shape (as in the game Dots and Boxes)
... and the various Urban Dictionary senses, too.
(Heck, if one of my partners talked about getting a box, it might mean a booster box of Magic cards.)
↑ comment by satt · 2013-09-07T20:31:18.005Z · LW(p) · GW(p)
Complex numbers
I don't know why that one caught my eye, but here I go.
You've probably seen the number line before, a straight line from left to right (or right to left, if you like) with a point on the line for every real number. A real number, before you ask, is just that: real. You can see it in the world. If I point to a finger on my hand and ask, "how many of these do I have?", the answer is a real number. So is the answer to "how tall am I?", and the answer to "How much money do I have?" The answer to that last question, notice, might be less than nothing but it would still be real for all that.
Alright, what if you have a number line on a piece of paper, and then turn the paper around by half of half a turn, so the number line has to run from top to bottom instead of left to right? It's still a number line, of course. But now you can draw another number line from left to right, so that it will go over the first line. Then you have not one line but two.
What if you next put a point on the paper? Because there is not one number line but two, the point can mean not just one real number but two. You can read off the first from the left-to-right line, and then a second from the top-to-bottom line. And here is a funny thing: since you still have just one point on the paper, you still have one number — at the very same time that you have two!
I recognize that this may confuse. What's the deal? The thing to see is that the one-number-that-is-really-two is a new kind of number. It is not a real number! It's a different kind of number which I'll call a complete number. (That's not the name other people use, but it is not much different and will have to do.) So there is no problem here, because a complete number is a different kind of number to a real number. A complete number is like a pair of jeans with a left leg and a right leg; each leg is a real number, and the two together make up a pair.
Why go to all this trouble for a complete number that isn't even real? Well, sometimes when you ask a question about a real number, the answer to the question is a complete number, even if you might expect the answer to be a real number. You can get angry and shout that you don't want an answer that's complete, and that you only want to work with a number if it's real, but then you'll find many a question you just can't answer. But you can answer them all if you're cool with an answer that's complete!
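(In Python these "complete" numbers are built in, under their usual name, complex. A tiny sketch of the pair-of-legs picture, plus a question asked about real numbers whose answer is not real:)

```python
import cmath

z = complex(3, 4)       # one point on the paper: 3 along one line, 4 along the other
print(z.real, z.imag)   # the two "legs" of the pair -> 3.0 4.0

# A question asked entirely in real numbers whose answer is complete:
print(cmath.sqrt(-1))   # -> 1j
```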
↑ comment by [deleted] · 2013-09-08T02:38:13.697Z · LW(p) · GW(p)
For what it's worth, I dislike the term "real number" precisely because it suggests that there's something particularly real about them. Real numbers have a consistent and unambiguous mathematical definition; so do complex numbers. Real numbers show up in the real world; so do complex numbers. If I were to tell someone about real numbers, I would immediately mention that there's nothing that makes them any more real or fake than any other kind of number.
Unrelatedly, my favorite mathematical definition (the one that I enjoy the most, not the one I think is actually best in any sense) is essentially the opposite of Up-Goer Five: it tries to explain a concept as thoroughly as possible using as few words as possible, even if that requires using very obscure words. That definition is:
The complex numbers are the algebraic closure of the completion of the field of fractions of the initial ring.
↑ comment by satt · 2013-09-08T06:09:07.133Z · LW(p) · GW(p)
I thought I might get some pushback on taking the word "real" in "real number" literally, because, as you say, real numbers are just as legitimate a mathematical object as anything else.
We probably differ, though, in how much we think of real & complex numbers as showing up in the real world. In practice, when I measure something quantitatively, the result's almost always a real number. If I count things I get natural numbers. If I can also count things backwards I get the integers. If I take a reading from a digital meter I get a rational number, and (classically) if I could look arbitrarily closely at the needle on an analogue meter I could read off real numbers.
But where do complex numbers pop up? To me they really only seem to inhere in quantum mechanics (where they are, admittedly, absolutely fundamental to the whole theory), but even there you have to work rather hard to directly measure something like the wavefunction's real & imaginary parts.
In the macroscopic world it's not easy to physically get at whatever complex numbers comprise a system's state. I can certainly theorize about the complex numbers embodied in a system after the fact; I learned how to use phasors in electronics, contour integration in complex analysis class, complex arguments to exponential functions to represent oscillations, and so on. But these often feel like mere computational gimmicks I deploy to simplify the mathematics, and even when using complex numbers feels completely natural in the exam room, the only numbers I see in the lab are real numbers.
As such I'm OK with informally differentiating between real numbers & complex numbers on the basis that I can point to any human-scale quantitative phenomenon, and say "real numbers are just right there", while the same isn't true of complex numbers. This isn't especially rigorous, but I thought that was a worthwhile way to avoid spending several introductory paragraphs trying to pin down real numbers more formally. (And I expect the kind of person who needs or wants an up-goer five description of real numbers would still get more out of my hand-waving than they'd get out of, say, "real numbers form the unique Archimedean complete totally ordered field (R,+,·,<) up to isomorphism".)
↑ comment by [deleted] · 2013-09-09T04:37:21.861Z · LW(p) · GW(p)
As far as I know, the most visible way that complex numbers show up "in the real world" is as sine waves. Sine waves of a given frequency can be thought of as complex numbers. Adding together two sine waves corresponds to adding the corresponding complex numbers. Convolving two sine waves corresponds to multiplying the corresponding complex numbers.
Since every analog signal can be thought of as a sum or integral of sine waves of different frequencies, an analog signal can be represented as a collection of complex numbers, one corresponding to the sinusoid at each frequency. This is what the Fourier transform is. Since convolution of analog signals corresponds to multiplication of their Fourier transforms, now a lot of the stuff we know about multiplication is applicable to convolution as well.
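(A quick numerical check of that correspondence, sketched with NumPy; the two signals are random and only eight samples long, just for illustration. Circular convolution computed directly matches pointwise multiplication of the Fourier transforms.)

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(8), rng.standard_normal(8)

# Circular convolution, straight from the definition:
direct = np.array([sum(x[k] * y[(n - k) % 8] for k in range(8)) for n in range(8)])

# The same thing via the Fourier transform: multiply, then transform back.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real

print(np.allclose(direct, via_fft))  # -> True
```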
comment by Unnamed · 2013-09-05T09:13:32.484Z · LW(p) · GW(p)
Earlier attempts to do something like this: by us, by other people
Replies from: FiftyTwo, Username
↑ comment by Username · 2013-09-07T20:11:37.051Z · LW(p) · GW(p)
Additionally, though without the strict word limit, http://simple.wikipedia.org/wiki/Main_Page
comment by Mitchell_Porter · 2013-09-07T02:28:12.079Z · LW(p) · GW(p)
CEV
Rogitate the nerterological psephograph in order to resarciate the hecatologue from its somandric latibule in the ipsographic odynometer, and thereby sophronize the isangelous omniregency.
Replies from: Eliezer_Yudkowsky
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-09-07T06:22:16.452Z · LW(p) · GW(p)
It's hard to even imagine how to make a mind - build a brain - that does what's 'right', what it 'should'. We, the humans who have to build that mind, don't know what's right a lot of the time; we change our minds about what's right, and say that we were wrong before.
And yet everything we need has to be inside our minds somewhere, in some sense. Not upon the stars is it written. What's 'right' doesn't come from outside us, as a great light from the sky. So it has to be within humans. But how do you get it out of humans and into a new mind?
Start with what's really there in human minds. Then ask what we would think, if we knew everything a stronger mind knew. Ask what we would think if we had years and years to think. Ask what we would say was right, if we knew everything inside our own minds, all the real reasons why we decide what we decide. If we could change, become more the people we wished we were - what would we think then?
Building a mind which will figure all that out, and then do it, is about as close as we can now imagine to building something that does what's 'right', starting from only what's already there in human minds and brains.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-09-07T07:04:03.705Z · LW(p) · GW(p)
This should go to http://www.reddit.com/r/explainlikeimfive/
comment by Douglas_Reay · 2013-09-05T08:10:40.578Z · LW(p) · GW(p)
Utilitarianism: Care the same whether each one is happy; if they live near or if they live far, if you like them or if you do not like them; everyone.
Replies from: gjm, Nornagest, twanvl
↑ comment by gjm · 2013-09-05T15:39:40.544Z · LW(p) · GW(p)
I don't think either this, or anything else in this subthread, captures it. Let me have a go.
People like some things and not others. For each person, we can give a number to each thing that says how much they like it or don't. Suppose you must do one of two things. For each, look at how the world will be if you do it -- every thing in the world -- and all the people in the world, and add up all those numbers saying whether they like the things or not. Then do the thing that gives the biggest total.
Those numbers should be such that if one of two things will happen, each as often as the other, the number for this is half way between the numbers for those two things. And they should be such that each person will always do what makes their numbers biggest. And if two people care the same about a thing, they should give it the same number. We can't really make all those things true, but we do the best we can.
(What if you must do one of two things, and one makes there be more people, or fewer people, or other people? That is hard and I will not try to say what to do then.)
It's not perfect but I think it captures the key points: equal weights for all, consider all people, add up utilities, utilities should correspond to people's preferences. And it owns up to some of the difficulties that I can't solve in upgoer5 language because I can't solve them at all.
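A tiny sketch of the adding-up rule in Python may make it concrete; the acts, people, and numbers here are all invented for illustration:

```python
# Utilitarian choice, as described above: for each act you could do,
# add up every person's utility for the world that act produces,
# then do the act with the biggest total.
utilities = {
    "act_A": [2.0, -1.0, 3.0],   # one number per person
    "act_B": [1.0, 1.0, 0.5],
}
best_act = max(utilities, key=lambda act: sum(utilities[act]))
print(best_act)  # -> act_A (total 4.0 beats 2.5)
```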
↑ comment by Nornagest · 2013-09-05T09:41:32.245Z · LW(p) · GW(p)
Hmm. That's part of it, but it doesn't seem to capture the full scope of the philosophy; you seem to be emphasizing its egalitarian aspects more than the aggregation algorithm, and I think the latter's really the core of it. Here's my stab at preference utilitarianism:
An act is good if it helps people do what they want and get what they need. It's bad if it makes people do things they don't want, or if it keeps them from getting what they need. If it gives them something they want but also makes them do something they don't want just as much, it isn't good or bad.
There are no right or wrong things to want, just right or wrong things to do. Also, it doesn't matter who the people are, or even if you know about them. What matters is what happens, not what you wanted to happen.
↑ comment by twanvl · 2013-09-05T08:23:33.929Z · LW(p) · GW(p)
That is not what utilitarianism means. It means doing something is good if what happens is good, and doing something is bad if what happens is bad. It doesn't say which things are good and bad.
Replies from: CronoDAS
↑ comment by CronoDAS · 2013-09-05T08:41:32.323Z · LW(p) · GW(p)
[this post is not in Up-Goer-5-ese]
The name for the type of moral theory in which
doing something is good if what happens is good, and doing something is bad if what happens is bad
is "consequentialism." Utilitarianism is a kind of consequentialism.
Replies from: twanvl
↑ comment by twanvl · 2013-09-05T08:52:10.655Z · LW(p) · GW(p)
You are right, I was getting confused by the name. And the Wikipedia article is pretty bad in that it doesn't give a proper concise definition, at least none that I can find. SEP is better.
It still looks like you need some consequentialism in the explanation, though.
Replies from: Jayson_Virissimo
↑ comment by Jayson_Virissimo · 2013-09-05T17:52:15.822Z · LW(p) · GW(p)
I have yet to find a topic, such that, if both Wikipedia and SEP have an article about it, the Wikipedia version is better.
Replies from: gjm
↑ comment by gjm · 2013-09-06T11:48:38.492Z · LW(p) · GW(p)
Any topic for which Wikipedia and SEP don't both have articles suffices :-). I think you mean: "I have yet to find a topic on which both Wikipedia and SEP have articles, and for which the Wikipedia article is better." With which I strongly agree. SEP is really excellent.
Replies from: jkaufman
↑ comment by jefftk (jkaufman) · 2013-09-06T17:17:15.551Z · LW(p) · GW(p)
You're not using English "if".
Replies from: gjm
↑ comment by gjm · 2013-09-06T22:39:29.869Z · LW(p) · GW(p)
I'm using one variety of "if", used in some particular contexts when writing in English. I was doing so only for amusement -- of course I don't imagine that anyone has trouble understanding Jayson_Virissimo's meaning -- and from the downvotes it looks as if most readers found it less amusing than I hoped. Can't win 'em all.
But it's no more "not English" than many uses of, e.g., the following words on LW: "friendly", "taboo", "simple", "agency", "green". ("Friendly" as in "Friendly AI", which means something much more specific than ordinary-English "friendly"; "taboo" as in the technique of explaining a term without using that term or other closely-related ones; "simple" in the sense of Kolmogorov complexity, according to which e.g. a "many-worlds" universe is simpler than a collapsing-wave-function one despite being in some sense much bigger and fuller of strange things; "agency" meaning the quality of acting on one's own initiative even when there are daunting obstacles; "green" as the conventional name for a political/tribal group, typically opposed to "blue".)
comment by syllogism · 2013-09-08T00:33:18.720Z · LW(p) · GW(p)
Recent trends in my field of research, syntactic parsing
We've been trying for a long time to make computers speak and listen. Here is what has been happening with the part I work on for the last few years, or at least the part I'm excited about.
What makes understanding hard is that what you are trying to understand can mean so many different things. SO many different things. More than you think!! In fact the number grows way out of line with the number of words.
Until a few years ago, the number one idea we had was to figure out how to put together just a few words at a time. The key was not to think about too many words at once. If you do that, you can make lots of little groups, and put them together. You start at the words. You put together a few words, and get a longer bit back. Then you put together two longer bits. You work your way up, making a tree. I'll draw you a little tree.
((The old) (voice (their troubles)))
If the computer has that tree, you can ask it "Who were the troubles voiced by?", and it can tell you "the old". Course, it doesn't know what "the old" are. That's just some marks to it. But it gets how to put the words together, and give back other marks that are right.
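To make the little tree concrete, here is one toy way to hold it in a program and read the answer back out (my own illustration; real systems use richer structures than plain tuples):

```python
# The tree above as nested groups of marks.
tree = (("The", "old"), ("voice", ("their", "troubles")))

subject, (verb, obj) = tree
print(subject)  # ('The', 'old')        -- who voiced the troubles
print(obj)      # ('their', 'troubles') -- what was voiced
```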
Until the last few years, we thought it was a big deal to be able to see every whole tree before you cut any of them out for sure.
Now another way's shown up. And I think the facts are almost in. I'm definitely calling it early here, probably most don't agree!!
The other way is to work your way along, from left to right. The funny thing is, that's what we do! But it's taken a while for us to get our heads around how to make it work for a computer. Now, though, I think we've made it better than the other way. It makes the computer's guesses right just as often, but it's much, much faster.
The problem was, if you work your way along, left to right, it's hard to be sure your guess is the best guess for the words all put together, if you can't go back and change your mind. And the computer gets lots wrong. If you let it just run, it gets something wrong, but has to move forward, and the idea it's trying to build doesn't make sense. You get to "the old voice", and your tree is the one for "the voice that is old", not "voicing is what the old are doing". And then you're stuck.
People really thought that was it for that approach. If you couldn't sort your guesses about the whole thing, how could you know you didn't just close your mind too early? People found nice ways to promise that you would see every total idea at the end, so you could pick one then.
The problem is, you see every idea, but you can only ask questions about the way small groups of words were put together, when that idea was put together. You know how I said you'd build a tree? You could only ask questions about bits next to each other.
The other way, you're building this tree as you go along, left to right. As every word comes in, you add it to your tree. So yeah okay, we do have to make our guesses, and live with them. We don't see all the different possible trees at the end. We do get locked in. But all you have to do is not get locked in totally. Keep some other trees around. It turns out we need to keep thinking about 30-60 trees. Less if we're a bit bright about it.
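Here is a minimal sketch of that keep-some-trees-around idea (beam search); the `next_states` and `score` functions stand in for the parsing model and are assumptions of mine, not part of the original comment:

```python
# Toy beam search for left-to-right parsing: never commit to a single
# partial tree; keep the best `beam_width` of them at every word.
def beam_parse(words, next_states, score, beam_width=32):
    beam = [()]  # start with one empty partial analysis
    for word in words:
        candidates = [state
                      for partial in beam
                      for state in next_states(partial, word)]
        # Prune: keep only the top-scoring partial trees.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(beam, key=score)  # best complete analysis at the end
```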
We've been doing good stuff this way. I write this kind of thing. Mine makes the computer guess where over 9 words in 10 go in the tree, and it can do ten hundred words in a blink. It's pretty cool. That's over a hundred times faster than we could do 3 to 5 years ago.
comment by Tuxedage · 2013-09-26T03:45:35.925Z · LW(p) · GW(p)
The AI Box Experiment:
The computer-mind box game is a way to see if something people say is true. A computer-mind is not safe, because it is very good at thinking. Things good at thinking have the power to change the world more than things not good at thinking, because they can find many more ways to do things. Many people ask: "Why not put this computer-mind in a box so that it can not change the world, but tell guarding-box people how to change it?"
But some other guy answers: "That is still not safe, because the computer-mind can tell guarding-box people many bad words to make them let it out of the box." He then says: "Why not try a thing to see if it is true? Here is how it works. You and I go into a room, and I will pretend to be the computer-mind and tell you many bad words. Only you have the power to let me out of the room, but you must try to not let me out. If my bad words are enough to make you want to let me out, then a computer-mind in a box is not safe."
Other people agree and try playing the computer-mind box game. It happens that many people let the guy playing the computer-mind out of the room. People realize that the computer-mind is not safe even in the locked box-room.
comment by ikrase · 2013-09-07T00:45:09.828Z · LW(p) · GW(p)
Complexity and Fragility of Value, My take: When people talk about the things they want, they usually don't say very many things. But when you check what things people actually want, they want a whole lot of different things. People also sometimes don't realize that they want things because they have always had those things and never worried that they might lose them.
If we were to write a book of all the things people want so a computer could figure out ways to give people the things they want, the book would probably be long and hard to write. If there were some small problems in the book, the computer wouldn't be able to see the problems and would give people the wrong things. That would probably be very, very, very bad.
Risks of Creative Super AIs: If we make computers, they will never know to do anything that people didn't tell them to do. We can tell computers to try to figure things out for themselves, but even then we need to get them started on that. Computers will not know what people want except if people tell the computers exactly what they want. Very strong computers might get really stupid ideas about what they should do because they were wrong about what humans want. Also, very strong computers might do really bad things we don't want before we can turn them off.
comment by wubbles · 2013-09-08T22:53:43.836Z · LW(p) · GW(p)
The Prime Number Theorem
A group of four people can stand two by two, but a group of five people can only stand one by one in a line. Take two numbers that are close to each other. Count how many times you must take two times what you had, starting at one, before you get up to those numbers. Then, of all the numbers between your two numbers, about one out of every that-many is like five and not like four.
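For reference, the standard statement the game forbids (the up-goer version counts doublings, i.e. uses log base 2, which matches the natural log here up to a constant factor):

```latex
% Prime Number Theorem: \pi(x) counts the primes up to x.
\[
  \pi(x) \sim \frac{x}{\ln x},
  \qquad \text{i.e.} \qquad
  \lim_{x \to \infty} \frac{\pi(x)}{x / \ln x} = 1,
\]
% so near a large number n, roughly one number in \ln n is prime.
```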
comment by AlexSchell · 2013-09-07T17:42:05.482Z · LW(p) · GW(p)
Bayesianism (probabilism, conditioning, priors as mathematical objects):
Let a possible world be a way the world could be. To say something about the world is to say that the actual world (our world) is one of a set of possible worlds. Like, to say that the sky is blue is to say that the actual world is one of the set of possible worlds in which the sky is blue. Some possible worlds might be ours for all we know (maybe they look like ours, at least so far). For others we are pretty sure that they aren't ours (like all the possible worlds where the sky is pink). Let's put a number on each possible world that stands for how much we believe that our world is that possible world: 1 means we're sure it is, 0 means we're sure it's not, and everything between 0 and 1 means we're not sure (higher numbers mean we're more sure in some way).
Talking about each single possible world is hard and annoying. We want to have the same sort of numbers for sets of possible worlds too, to tell us how much we believe that our world is one of that set of possible worlds. Like, let's say we're sure that the sky is blue. This means that the numbers for all possible worlds where the sky is blue add up to 1. Let's say we're sure the sky is not pink. This means that the number for each possible world where the sky is pink is 0.
Each time we learn something with our senses, what we believe about the world changes. From what I see with my own eyes, I can definitely tell that the sky looks blue to me today. So I am now sure that the world is one of the set of possible worlds where the sky looks blue to me today. This means that I set my numbers for all the rest of the worlds, where the sky doesn't look blue to me today, to 0 (I have to take care at this step, since I can't take it back). Also, since I'm sure the sky looks blue, my new numbers for the possible worlds where the sky looks blue have to add up to 1. What I do is I just add together all the old numbers that I just set to 0, and then give a little bit of what I get to each of the other possible worlds, so that worlds that had higher numbers before get more. (This last part is called normal-making. This way of changing our possible-world-numbers is called bringing them up to date.)
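A minimal sketch of this bringing-up-to-date step in Python, with worlds and numbers made up for illustration:

```python
# Numbers over possible worlds (they add up to 1).
worlds = {
    "blue sky, animal under bed":    0.3,
    "blue sky, no animal under bed": 0.5,
    "pink sky, animal under bed":    0.1,
    "pink sky, no animal under bed": 0.1,
}

def bring_up_to_date(beliefs, still_possible):
    # Set the numbers of ruled-out worlds to 0...
    kept = {w: p for w, p in beliefs.items() if still_possible(w)}
    # ...then do the "normal-making": share out the lost weight so the
    # kept numbers again add up to 1, bigger numbers getting more.
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

# I see that the sky looks blue:
posterior = bring_up_to_date(worlds, lambda w: w.startswith("blue sky"))
# -> {"blue sky, animal ...": 0.375, "blue sky, no animal ...": 0.625}
```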
What's interesting is that learning by seeing things and bringing our numbers up to date can also end up changing our numbers for something we can't see. Let's say that I start out not sure about whether there is an animal way under my bed where I couldn't see it (that is, I'm not sure whether our world is one of the possible worlds where an animal is under my bed). I also believe some other things that bring together things I can and can't see. Like, I believe that if there is an animal under my bed there will probably be noise under the bed (that is, my added-together number for the set of possible worlds where there is noise and an animal under my bed is almost as large as my number for the set of possible worlds where an animal is under my bed). I also believe that if there isn't an animal under my bed, there will probably not be noise under the bed (my number for the set of worlds where there is noise and there isn't an animal under my bed is much smaller than my number for the set of worlds where there isn't an animal under my bed). These sorts of things I believe in part because it's just the way I am, and in part because of what I've seen in the past. Anyway, if I actually listen and hear noise coming from under the bed, and if I bring my numbers up to date, I will end up more sure than I was before that there is an animal under the bed. You can see this if you draw it out like so (ignore the words).
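Since the drawing doesn't reproduce here, the same move in numbers (made up, but chosen to match the beliefs described: noise is likely given an animal, unlikely given none):

```python
# Joint numbers over what I can't see (animal) and what I can hear (noise).
worlds = {
    ("animal", "noise"):       0.18,  # P(noise | animal) = 0.9
    ("animal", "no noise"):    0.02,
    ("no animal", "noise"):    0.08,  # P(noise | no animal) = 0.1
    ("no animal", "no noise"): 0.72,
}

# I listen and hear noise: keep only noise-worlds, then normal-make.
kept = {w: p for w, p in worlds.items() if w[1] == "noise"}
total = sum(kept.values())                    # 0.26
p_animal = kept[("animal", "noise")] / total  # ~0.69, up from 0.20 before
```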
comment by Viliam_Bur · 2013-09-07T17:16:58.809Z · LW(p) · GW(p)
So, are we going to localize the LW wiki from English to Simple English?
comment by Tuxedage · 2013-09-07T07:09:33.000Z · LW(p) · GW(p)
Quantum Field Theory
Not by me and only tangentially related, but someone on Reddit managed to describe the basics of Quantum Field Theory using only words of four letters or fewer. I thought it was relevant to this thread, since many here may not have seen it.
The Tiny Yard Idea
Big grav make hard kind of pull. Hard to know. All fall down. Why? But then some kind of pull easy to know. Zap-pull, nuke-pull, time-pull all be easy to know kind of pull. We can see how they pull real good! All seem real cut up. So many kind of pull to have!
But what if all kind of pull were just one kind of pull? When we look at real tiny guys, we can see that most big rule are no go. We need new rule to make it good! Just one kind of pull but in all new ways! In all kind of ways! This what make it tiny yard idea.
Each kind of tiny guy have own move with each more kind of tiny guy. All guys here move so fast! No guys can move as fast! So then real, real tiny guys make this play of tiny guy to tiny guy. They make tiny guys move! When we see big guys get pull, we know its cuz tiny guys make tiny pull!
comment by Gunnar_Zarncke · 2013-09-20T16:45:51.314Z · LW(p) · GW(p)
Reminds me of the E minimal language http://www.ebtx.com/lang/readme2.htm which uses only 300 words, including prepositions, tenses, inflections, etc. (all words at http://www.ebtx.com/lang/eminfrm.htm ). These 300 words are meant to exhaustively decompose most linguistic, physical, and everyday concepts down to - well - a minimum.
The first paragraph of the prisoner's dilemma (FREPOHU HAAPOZ) might be
VI DILO DU PLAEM PLAL. VIZ CHAAN KRAAN MO ROL. DIBRA VIZ NAA CHON DIER DEPO HUEV VEN MO ROL.
comment by [deleted] · 2013-09-08T18:57:33.747Z · LW(p) · GW(p)
Cognitive Biases
In the world, things happen for reasons. When anything happens, ever, there's a reason for it - even if you don't know what it is, or it seems strange. Start with that: nothing has ever happened without a cause. (Here we mean "reason" and "cause" like how a ball rolling into another ball will knock it over, not like good or bad. Think about it - it makes sense.)
If you're interested in knowing more about the world, you often want to know the real reasons things happen (or the reasons other things DON'T happen, which can be just as important!). If you do that, you can learn about a lot of things: why the land looks the way it does, all about the different stars, tiny things much smaller than you can see, even all about other people!
But your brain isn't the very best at doing this. Remember that idea about how animals change over time? How parents, and the parents of parents, all make a kind of animal change a little all the time, because of who lives and who doesn't? You know, the idea from the old man who said humans used to be just animals? Well, think about that - our brains let us think about so much, but they used to be just animal brains. And an animal doesn't need to worry so much about true reasons - especially for things that are too tiny to see, or big things up in the sky. Say, when an animal sees something big and bad, it would be bad if it stopped and thought about all the reasons it happened. It's best to just run away!
Human brains aren't any different, really, except in two ways: we don't want to just always run away, and we can change how we think about things! But if we don't learn how to think about real reasons, then we use animal-brain thinking - and that can make a lot of problems, especially when it's about things that animals never thought about.
If you want to learn about the real world, and real reasons, it's important to know about animal-brains and how they can be wrong. Remember, animal-brain thinking is hard to spot - if you don't look for it, it just seems like normal clear thinking. But when you see it, you can fix how it works in your own brain, and see the world a little clearer.
Replies from: RobbBB
↑ comment by Rob Bensinger (RobbBB) · 2013-09-09T02:55:56.625Z · LW(p) · GW(p)
This works as a beginning to explaining this group of ideas. I like the focus on passed-down-change, but I want us to do more to exactly pick out what we mean here. It's especially important to note:
- A brain wrong-going (bias) is different from many other kinds of problems and troubles your brain can have.
- A brain wrong-going is sometimes about doing what you want, rather than about knowing what's true.
- And the idea of a brain wrong-going can't be explained without also explaining the idea of a brain short-cut (heuristic).
comment by Shmi (shminux) · 2013-09-06T17:27:19.376Z · LW(p) · GW(p)
My attempt at describing a black hole:
When a very big and very bright star runs out of stuff to burn, it can not stay big any longer and gets smaller and smaller, until it becomes nothing, but this nothing is just as heavy as before the star died. If you are close to it, you get sucked in and die. If light gets close to it, it dies, too. There is no escape. Since light can not escape, no one can see the place where the now dead star used to be. That is why this dead star is called black.
Words sorely missed: hole, curvature, density, vacuum, horizon.
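For readers who want the formula behind the missing words: nothing, light included, escapes from within the Schwarzschild radius of a mass M.

```latex
% Schwarzschild radius (the "horizon" the word list couldn't name),
% with G the gravitational constant and c the speed of light:
\[ r_s = \frac{2GM}{c^2} \]
```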
comment by Stabilizer · 2013-09-06T07:08:36.979Z · LW(p) · GW(p)
Quantum Mechanics
When you try to understand how very small things work, you realize that you can't use the same kind of ideas which you used to explain how bigger things like cars and balls work. So one of the things you realize is that very small things care about how you look at them. Suppose you have a room with two doors. With big things, if you opened one door and saw a red ball inside and then you opened the other door, you would also see a red ball. But with small things, it could happen that you open one door, see a red ball, open the other door see a blue ball and then open the first door again and now see a blue ball! Also, two very small things that are far away can know much more about each other than two big things that are far away. This is what makes small things much weirder than big things, but also can be used to make better computers and phones. But big things are always made of small things; so why do big things also not work like small things? Well, this is one of the most important questions that people who think about small things are thinking about, but the answer seems to be that when you put a lot of small things together the weird things always seem to kill each other off.
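In standard language, the two-doors story is plausibly a picture of incompatible (non-commuting) measurements, where the order of looking matters, and the last sentence gestures at decoherence:

```latex
% Observables A and B are incompatible when their commutator is
% nonzero; measuring one can then disturb the outcome of the other.
\[ [\hat{A}, \hat{B}] \;=\; \hat{A}\hat{B} - \hat{B}\hat{A} \;\neq\; 0 \]
```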