Comment by liam-goddard on The LessWrong Team · 2019-06-15T21:24:48.343Z · score: 1 (3 votes) · LW · GW

What about Eliezer? He founded Less Wrong, so why isn't he part of the team anymore?

Comment by liam-goddard on Welcome and Open Thread June 2019 · 2019-06-11T22:10:22.348Z · score: 2 (2 votes) · LW · GW

I was wondering: what happened on June 16, 2017? Most of the users on Less Wrong, including Eliezer, seem to have "joined" on that date, but Less Wrong was created on February 1, 2009, and I've seen posts from before 2017.

Comment by liam-goddard on 2017 LessWrong Survey · 2019-06-03T18:13:05.279Z · score: 1 (1 votes) · LW · GW

Is there a 2018 or 2019 survey anywhere? I tried to find it, and I've seen some things from both you and Yvain, but I can't find any surveys past this one.

Comment by liam-goddard on Five Planets In Search Of A Sci-Fi Story · 2019-06-02T01:04:08.807Z · score: 1 (1 votes) · LW · GW

Zyzzx Prime could always do either:

1. No rulers; every single member votes on every issue

or

2. Select scientists (not leading scientists, of course, just average ones) and have them work on genetic engineering. No one can know who they are, and they work at minimum wage. (Of course, it could be hard to convince them to do this.)

Comment by liam-goddard on Newcomb's Problem: A Solution · 2019-05-28T01:59:44.740Z · score: 3 (2 votes) · LW · GW

From what I've seen, most people seem to argue for two-boxing, and the one-boxers usually just say that Omega needs to think you'll be a one-boxer, so you should precommit even if it later seems irrational... I haven't seen this exact argument yet, but I might just not have read enough.

Comment by liam-goddard on Newcomb's Problem: A Solution · 2019-05-26T20:25:59.918Z · score: 1 (1 votes) · LW · GW

Since Newcomb's Problem, the boxes, and Omega don't actually exist, we can't physically conduct the experiment. However, based on the rules of the problem we can calculate the expected payoff. In this fictional world we are told that Omega guesses correctly 99% of the time, and since that comes from Newcomb himself it counts as a fact about the fictional world. This means that 99% of the time the one-boxer gets $1,000,000, and 99% of the time the two-boxer gets only $1,000.

Objecting that we can't test this is like saying that we can't be sure whether purebloods are stronger in HPMOR. Even though we have no evidence in our world, since there are no purebloods in the real world, Yudkowsky tells us the facts in HPMOR, and since Yudkowsky's word is fact about HPMOR, that confirms the hypothesis "purebloods are no stronger than other wizards." Likewise, even though we have no evidence about Omega in our world, Newcomb tells us the facts in his problem, and since Newcomb's word is fact about Newcomb's problem, that confirms the hypothesis "one-boxers almost always do better than two-boxers."

If a pre-Galileo person wrote a fictional story about a land where heavier objects fell faster, then in that world heavier objects would fall faster. By simple mathematics, we can prove that under the conditions stated by Newcomb, we should take only one box.
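
To make that "simple mathematics" explicit, here is a minimal sketch in Python, taking only the figures stated in the problem as given (99% prediction accuracy, $1,000 in the visible box, $1,000,000 in the opaque box); the variable names are mine:

```python
# Expected payoffs in Newcomb's Problem, taking the 99% prediction
# accuracy and the $1,000 / $1,000,000 amounts stated above as given.
p = 0.99  # probability that Omega predicts your choice correctly

# One-boxer: gets $1,000,000 if predicted correctly, $0 otherwise.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-boxer: gets $1,000 plus $1,000,000 only in the 1% of cases
# where Omega wrongly predicted one-boxing.
ev_two_box = p * 1_000 + (1 - p) * 1_001_000

print(f"one-boxer:  ${ev_one_box:,.0f}")  # $990,000
print(f"two-boxer:  ${ev_two_box:,.0f}")  # $11,000
```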

Comment by liam-goddard on Yudkowsky's brain is the pinnacle of evolution · 2019-05-26T17:27:04.693Z · score: 1 (1 votes) · LW · GW

You do realize that other people work on AI? Sure, Eliezer might be the most important, but he is not the only member of MIRI's team. I'd definitely sacrifice several people to save him, but nowhere near 3^^^3. Eliezer's death would delay the Singularity, not stop it entirely, and certainly not destroy the world.

Newcomb's Problem: A Solution

2019-05-26T16:32:55.987Z · score: -1 (7 votes)

Comment by liam-goddard on How would you take over Rome? · 2019-05-24T21:37:38.398Z · score: 1 (1 votes) · LW · GW

Use your wonderful "inventions" and knowledge of the "future" to show off your amazing powers. Then explain to them that you are Mercury, god of a great many things, including some forms of prophecy. But just as Jupiter once did to Neptune and Apollo (who had tried to overthrow him), Jupiter has now sent you down to Earth in the form of a human to work off a debt, as you have committed a grave crime against him.

As Mercury, you are assigned by Jupiter to serve the Emperor of Rome. Continue to impress the Romans, and as they worship you, gain power and standing in their society. Also, use your modern rationality and science to advise the Emperor until you control most of his decisions, leaving him as merely a puppet while you receive most of the praise and make most of the actual laws of Rome.

While you are gaining power, you are also trusted by the Emperor, and you manage to steal money. Even if you are caught (which, if possible, you aren't), they would never dare beat or kill a god, and it wouldn't hurt your image as "Mercury": after all, one of the things he's best known for is being the god of thieves. Eventually you start bribing officials to help you, and you build trust among the leaders of Rome.

When the Emperor is "mysteriously assassinated," you, Mercury (prophet, inventor, god, nobleman; wise, skilled at rulership, wealthy, trustworthy, high-ranking, and adored), become his replacement. If anyone asks why a servant should become Emperor, you tell them that your orders were to serve the government of Rome and its people, and what better way to do that than to rule in a way that makes the people's lives better? Especially once you make donations from Rome's treasury to appease the groups your questioners belong to, and have the remaining questioners killed for blasphemy.

You are the Emperor of Rome.


I know this solution requires a lot of luck, and could be foiled, but it seems to me that impersonating a god would be the best option.

Comment by liam-goddard on Beautiful Probability · 2019-05-22T22:04:23.526Z · score: 1 (1 votes) · LW · GW

The two experiments would differ. In Experiment 1, we have received evidence pointing to a 70% cure rate. Experiment 2, however, doesn't offer the same evidence, because it stops as soon as the result gets significantly over 60%. Because the results are random, the observed rate will not always match the true probability. If the real probability were 70%, wouldn't it most likely have reached 70% earlier, with 7 out of 10 or 14 out of 20? For most of Experiment 2, fewer than 60% of the patients were cured. The fact that by 100 patients the rate happened to climb above that was most likely a fluke in the data, and if the experiment were continued it would probably drop back below 60%.
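
For anyone who wants to poke at this numerically, here is a minimal Monte Carlo sketch in Python, under my own simplified assumptions about the stopping rule (the post describes it only loosely; the minimum sample size of 20 and the "stop once the observed rate exceeds 60%" criterion are mine). It compares the cure rates reported by a fixed-sample design and an early-stopping design:

```python
import random

# Toy comparison of a fixed-N trial vs. an optional-stopping trial that
# halts as soon as the observed cure rate exceeds 60% (my assumptions).
TRUE_CURE_RATE = 0.70
N_FIXED = 100
MAX_N = 100   # the stopping-rule trial also ends at 100 patients at most
MIN_N = 20    # assumed minimum sample before stopping is allowed
TRIALS = 10_000

def fixed_experiment():
    cures = sum(random.random() < TRUE_CURE_RATE for _ in range(N_FIXED))
    return cures / N_FIXED

def optional_stopping_experiment():
    cures = 0
    for n in range(1, MAX_N + 1):
        cures += random.random() < TRUE_CURE_RATE
        if n >= MIN_N and cures / n > 0.60:
            return cures / n  # stop early and report the rate so far
    return cures / MAX_N

fixed_rates = [fixed_experiment() for _ in range(TRIALS)]
stopped_rates = [optional_stopping_experiment() for _ in range(TRIALS)]
print("mean observed rate, fixed N:        ", sum(fixed_rates) / TRIALS)
print("mean observed rate, early stopping: ", sum(stopped_rates) / TRIALS)
```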

Comment by liam-goddard on "I don't know." · 2019-05-14T20:54:25.128Z · score: 1 (1 votes) · LW · GW

Just say, "I'm not able to assign a very high probability to any possibility, since I don't have very much information, but the possibility that I would assign the highest probability to is the tree having ___ to ___ apples, with a probability of ___%." You don't know how many there are, but you can still admit that you don't know while assigning a probability.

Comment by liam-goddard on Chapter 1: A Day of Very Low Probability · 2019-05-10T20:21:33.175Z · score: 1 (1 votes) · LW · GW

Um... what do all of those comments mean? Also, I’m wondering how Harry became so smart. I know part of it was from [Spoiler from Book Six] but that really wouldn’t have been enough, even combined with science. Why is it that Harry was able to think rationally and create a test, but Michael wasn’t even willing to consider the idea?

Comment by liam-goddard on Pretending to be Wise · 2019-04-29T02:15:33.982Z · score: 1 (1 votes) · LW · GW

Argument is of course a good thing among rational people, since refusing to argue and agreeing to disagree solves nothing: you won't come to any agreement and you won't know what's right. But I think the reason many people see argument as a bad thing is that most people are too stubborn to admit they are wrong, so argument among most people is pointless, because one or both sides are unwilling to actually debate. If people admitted when they were wrong, argument wouldn't be treated as such a bad thing, but as it is, with no one willing to see the truth, it often ends up accomplishing nothing.

Comment by liam-goddard on Planning Fallacy · 2019-04-29T01:49:35.363Z · score: 3 (2 votes) · LW · GW

Apart from planning, optimism seems to be a problem in many situations. Since reading this article and others, I've tried to correct my mistaken beliefs, and whenever I notice a belief of the form "this scenario is exactly how I want it to be," I immediately take it as a warning sign and reevaluate; most of the time I've been too optimistic. I remember, back in fourth grade, being positive that a certain person I had a crush on liked me. Then I overheard a conversation in which she said she liked someone else. I went over why I had believed otherwise and realized I had had absolutely zero evidence. My "intuition" had simply told me what I wanted to be true.

Intuition is insanely biased. Whatever you think, it's probably far too rosy unless you evaluate the probability from the outside view, find an estimate that seems accurate, and then chop it in half.

The Meaning(s) of Life

2019-04-13T21:02:51.989Z · score: -2 (7 votes)

Comment by liam-goddard on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2019-04-13T20:20:46.893Z · score: 1 (1 votes) · LW · GW

I think that since so few people have even heard of Glomarization or meta-honesty, they'll just become suspicious. It's better to simply say you haven't done it. To everyone here or on other websites who knows about these ideas and about rationality, or to a Gestapo soldier who knows I know about them, I would Glomarize: if one of you asked me whether I had robbed a bank, I would say I couldn't answer because of the effect on my counterfactual selves. But if anyone who didn't know about Glomarization asked me whether I had robbed a bank, I would tell them I hadn't. Imagine being a police officer who goes to a suspect's house, asks if they robbed a bank, and hears "I refuse to answer that question." You would take that as a confession.

Comment by liam-goddard on Transhumanism as Simplified Humanism · 2019-04-13T19:49:40.728Z · score: 1 (1 votes) · LW · GW

One of the problems people have with complete immortality is a supposed lack of purpose. They think that if we were immortal we would never get anything done, because we could always put things off a few hundred years, and time would become meaningless; they also think we would grow bored with life once we had done everything. But we could always invent new technology, and create some sort of legal system that gave special privileges to those who worked.

And even for those worried about immortality, complete immortality is impossible anyway. But what's wrong with a long lifespan? What's wrong with thousands or millions of years? People seem to think that the suffering in life should make it not worth living... but in that case, why are they living today? Very few people want to die today. Tomorrow, they won't want to die. The next day, they won't want to die. And if they set some limit on how long they want to live, I expect that when they reach that age they won't want to die, no matter what they said earlier. Why, then, do they insist now that they will want to die, and refuse cryonics and other lifespan-extending options?

Comment by liam-goddard on Tell Your Rationalist Origin Story · 2019-04-13T18:27:14.137Z · score: 5 (3 votes) · LW · GW

My path to rationality started with atheism. I had always believed in the Christian God and never questioned it. But one day I heard a reading in church about how, if a city of villains contained but one innocent, God would not strike it down. I remembered the story of the Plagues of Egypt. Um... how was that possible? I thought about how evolution contrasted with "Adam and Eve" and started to wonder how reliable the Bible was. What if God was different from what it said? And then a question I had never asked before came into my head: "How do we know there's a God?"

About an hour later, I had found a total of zero evidence and converted to atheism.

Ever since then, I have rebelled against my parents. I knew they weren't always right. If they didn't have a good reason for a rule, I would ask them why they had made it, and since they rarely gave any reasoning at all, flawed or not, I usually just acted as if they had never stated the rule.

I also enjoyed Harry Potter fanfiction, and one day in February 2019, just over two months before this post, I thought that HPMOR might be interesting, and clicked on it. After discovering all of Eliezer Yudkowsky's writings, I started asking myself, "What do you think you know and why do you think you know it? Why do you believe what you believe?" I found that a large percentage of the beliefs that I held were incorrect.

Ever since then, I've been reading Overcoming Bias, Less Wrong, and other writings by Eliezer Yudkowsky and other rationalists to figure out what flaws in my reasoning I still have, what biases I hold, and how to fix them.