Posts

What are LessWrong Meetups? 2020-10-07T14:36:55.099Z
Allowing Exploitability in Game Theory 2020-05-17T23:19:20.848Z
Could We Give an AI a Solution? 2020-05-15T21:38:33.581Z
Creationism and Many-Worlds 2019-11-15T02:06:38.137Z
Multiple Moralities 2019-11-03T17:06:34.374Z

Comments

Comment by liam-goddard on The Darwin Game - Rounds 21-500 · 2020-11-21T01:47:27.538Z · LW · GW

This has been a really interesting game! It's good to see I survived, although I barely made it to the end.

I assume that when you run the 'official' timeline, the arrival of AbstractSpyTreeBot will bring MeasureBot higher than in this one, so MeasureBot will probably reach second place. But even with randomness as a factor, I doubt such a small change would disrupt EarlyBirdMimicBot's serious advantage. I think we can probably say Multicore is the winner.

I'd be interested in watching you continue the game past round 500. EarlyBirdMimicBot would most likely remain in first, and me in fourth, but I would want to see whether things change between MeasureBot and BendBot. However, it might be a while before those rounds could be run; by this point we're seeing more alternate timelines than Doctor Strange. (Or the Guardians of the Universe, if you're more of a DC fan.)

Comment by liam-goddard on The Darwin Game - Rounds 10 to 20 · 2020-11-17T15:22:30.348Z · LW · GW

LiamGoddard is an EquityBot. It plays 3232 on the first four rounds and then chooses a sequence for the rest of the game based on what the opponent played during those four rounds: if they played 2323, it keeps playing 32323232...; if they played 3232, it switches to 232323...; if they played 3333, it plays a pattern of 3s and 2s that keeps them from outperforming cooperation while maximizing my score; and if they played something else, it just tries to keep cooperating. Whatever they played, the selected pattern continues for the rest of the game.

It's really simple, but I don't know how to code myself, so I wanted to be sure that it was specified carefully. I also didn't realize at the time that simulators would be allowed. Nevertheless, it's reached fourth place, which is better than I had expected. Long live the Dark Lord Liam!
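
For concreteness, a minimal Python sketch of that specification (the get_move interface is a hypothetical stand-in rather than the actual tournament API, and the all-3s branch is just one pattern consistent with the "don't outperform cooperation" rule):

```python
class EquityBotSketch:
    """Rough sketch of the strategy described above."""

    OPENING = [3, 2, 3, 2]

    def __init__(self):
        self.round = 0
        self.opponent_history = []
        self.pattern = None  # chosen once the opponent's first four moves are known

    def get_move(self, opponent_last_move=None):
        if opponent_last_move is not None:
            self.opponent_history.append(opponent_last_move)

        if self.round < 4:
            move = self.OPENING[self.round]
        else:
            if self.pattern is None:
                opening = self.opponent_history[:4]
                if opening == [2, 3, 2, 3]:
                    self.pattern = [3, 2]      # keep playing 3232...
                elif opening == [3, 2, 3, 2]:
                    self.pattern = [2, 3]      # switch to 2323...
                elif opening == [3, 3, 3, 3]:
                    self.pattern = [3, 3, 2]   # stand-in: an all-3s bot scores less than cooperation would
                else:
                    self.pattern = [3, 2]      # anything else: keep trying to cooperate
            move = self.pattern[(self.round - 4) % len(self.pattern)]

        self.round += 1
        return move
```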

Comment by liam-goddard on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2020-11-14T23:52:57.235Z · LW · GW

What's this about Inadequate Equilibria's publication?

Comment by liam-goddard on Playing Video Games In Shuffle Mode · 2020-11-03T00:42:52.316Z · LW · GW

I notice that there are a few other posts from the early days of LW that are repeated; maybe we should look through them.

Comment by liam-goddard on Playing Video Games In Shuffle Mode · 2020-11-03T00:41:46.541Z · LW · GW

This is a repeat post, and the links aren't working.

Comment by liam-goddard on I'm confused. Could someone help? · 2020-11-03T00:39:00.782Z · LW · GW

This might apply if you replaced dollars with utility (i.e. Pascal's Mugging), but in this case the diminishing value of dollars affects the deal. A 1/1,000,000 chance at 1,000,000 dollars isn't as valuable as a sure dollar, because a million dollars, while valuable, isn't a million times better than one dollar.
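
As a quick sanity check on that claim, here's a sketch using logarithmic utility and a hypothetical $50,000 starting wealth (both are illustrative assumptions, not anything from the original question): the lottery matches the sure dollar in expected dollars but loses in expected utility.

```python
import math

def log_utility(wealth):
    return math.log(wealth)

base_wealth = 50_000  # hypothetical starting wealth

# Utility of just keeping the dollar...
keep_dollar = log_utility(base_wealth + 1)

# ...versus the expected utility of a 1-in-a-million shot at $1,000,000.
lottery = (1 / 1_000_000) * log_utility(base_wealth + 1_000_000) \
        + (999_999 / 1_000_000) * log_utility(base_wealth)

print(keep_dollar > lottery)  # True: under log utility, the sure dollar wins
```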

Comment by liam-goddard on Covid Covid Covid Covid Covid 10/29: All We Ever Talk About · 2020-10-29T18:03:47.876Z · LW · GW

Could you put all of these COVID posts into a single sequence? I'd be interested in looking back at your analyses over time.

Comment by liam-goddard on The Darwin Game · 2020-10-20T19:50:16.807Z · LW · GW

When does this start?

Comment by liam-goddard on The Allais Paradox · 2020-10-12T14:18:25.542Z · LW · GW

I think an essential part of why people make such an irrational decision can be explained by thinking of the probabilities as frequencies. In problem one, 33 out of 34 possible versions of you receive money, and you're willing to pay $3,000 to make sure that the 34th does as well. But in problem two, 33 out of 100 receive money, and yet you're not willing to pay $3,000 to make sure that a 34th does. The bias here is essentially that people weight certainty more heavily than the actual probabilities.
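
For reference, the arithmetic behind that framing, assuming the payoffs from the original post ($24,000 for certain vs. a 33/34 chance of $27,000, and a 34% chance of $24,000 vs. a 33% chance of $27,000):

```python
# How many "versions of you" get paid, and the expected value per version.
gambles = {
    "1A": (1.0, 24_000),
    "1B": (33 / 34, 27_000),
    "2A": (0.34, 24_000),
    "2B": (0.33, 27_000),
}

for name, (p, payout) in gambles.items():
    print(f"{name}: {p:.2%} of versions paid, expected value ${p * payout:,.2f}")

# 1B and 2B both have the higher expected value; the $3,000 "certainty premium"
# only feels worth paying when the single losing version is made vivid.
```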

Comment by liam-goddard on All of Yudkowsky's Lesswrong Posts in one .epub · 2020-10-11T18:57:52.926Z · LW · GW

Thanks, this is great.

Comment by liam-goddard on Forcing Freedom · 2020-10-11T17:13:34.221Z · LW · GW

You raise some excellent points, and I agree for the most part; preferences don't necessarily correspond to happiness. But one thing I still stand behind, although it might not exactly follow the original thought experiment: If a person is fully educated and entirely understands the situation, including what their original utility function was and all of the possible alternatives, and they still choose to be enslaved, they shouldn't be forced into freedom.

Comment by liam-goddard on Forcing Freedom · 2020-10-07T14:02:24.125Z · LW · GW

I would assume preference utilitarianism might be best in this case; that is, free only those who want to be freed. Reminds me of SPEW from Harry Potter. If they truly wish to be slaves, and they would prefer that to freedom, then let them. However, if they are being forced to work for their masters against their true will, they should be freed. As such, I would support:

First: Leave them alone. They want to be slaves, and as such, it's of no benefit to free them, and in fact it would be morally wrong.

Second: Leave them alone. The idea is that through hypnotism, they are transformed into completely different people, who are nevertheless happy. The choice between freedom and slavery is just a decision of which happy people to keep, and freedom would require spending a great amount of resources.

Third: Free them, but the means of doing so are complicated; they should be freed through an Intergalactic Council attack, but there aren't enough resources. I think the best idea would be to create enough fear among the masters that an uprising will occur, so that the odds of winning the lottery keep rising until the masters have to free most of the people on their own each week.

Fourth: Complicated. If you can devise a better plan to free them that works quickly, then implement it, but otherwise, the people do desire to be slaves, so probably just leave them alone.

Fifth: Leave them alone. The people aren't perfectly happy, but they do desire to remain here, and in fact their lives will be much better if they remain in slavery. If there is some way to rehabilitate the people into a better natural lifestyle, then suggest it, but if there's nothing feasible, just let them be.

Sixth: There are clearly much more important problems than what should happen on this planet. Kill yourself, but make sure that the galaxy finds out about the Lords of Arlak first.

Comment by liam-goddard on The Felt Sense: What, Why and How · 2020-10-06T15:49:05.660Z · LW · GW

A lot of the time I associate objects, just by thinking about them, with hot or cold feelings in my mind, and while this sometimes tracks actual temperature, for the most part it's completely independent of it. For example, cold includes almost all liquids, no matter what the temperature, while hot includes most, but not all, animals. There are a few other unconscious feelings that I associate with things, such as emotions, cleanliness, or even gender, even when these don't apply to objects at all. While these "felt senses" definitely have some correlation with reality, I think a lot of the time they can bias us in ways that have no rational basis: for example, phobias based on harmless objects, or liking someone as a person based on what they look like. Noticing when you're thinking about something based on "felt senses" can help you figure out why you feel what you feel about a certain thing.

Comment by liam-goddard on The Adventure: a new Utopia story · 2020-08-27T18:12:38.194Z · LW · GW

Incredible. This seems genuinely Utopian and a lot like the sort of universe that I would consider perfect. The challenge is making it come true...

Comment by liam-goddard on Movable Housing for Scalable Cities · 2020-05-15T21:48:15.053Z · LW · GW

Are you trying to turn Earth into dath ilan?

Comment by liam-goddard on Multiple Moralities · 2019-11-03T19:06:17.172Z · LW · GW

All of this makes a lot of sense when it comes to rules for society. And I understand that certain forms of government, or certain laws, would be effective for almost any utility function. What I’m questioning isn’t how you achieve your goals, it’s where goals themselves come from, your terminal values.

Comment by liam-goddard on [Question] When Do Unlikely Events Should Be Questioned? · 2019-11-03T17:19:06.276Z · LW · GW

The quote comes from HPMOR, when Harry gets his wand and it shares Voldemort’s core.

Comment by liam-goddard on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2019-10-25T13:18:25.517Z · LW · GW

Due to the magic resonance, Quirrell can’t cast spells on Harry, so the test wasn’t faked.

Comment by liam-goddard on A rational unfalsifyable believe · 2019-10-25T11:26:48.388Z · LW · GW

I definitely think that “Adam did not kill him” would be an accurate and rational belief, but there still would technically be some evidence that could convince her otherwise (such as by using a time machine). Therefore the probability she holds in that belief should not be quite 100%, though very close. But another important part of falsifiability is that she COULD have been convinced otherwise: had there been different evidence, she would have thought Adam was the killer. The most important thing here is that beliefs are probabilistic. It is quite possible for a perfect Bayesian to believe something and to think it possible that they could encounter evidence which persuades them otherwise. Eve should hold a high, but not 100%, probability that Adam was innocent. I don’t see how any of this could apply to theism, though, since theism isn’t founded on much evidence.

Comment by liam-goddard on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T01:49:34.592Z · LW · GW

Do you really think he would care enough about three hours of LessWrong shutdown to write the chapter?

Comment by liam-goddard on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T15:05:32.272Z · LW · GW

The button appeared on my screen, but I'm new to Less Wrong, definitely do not have 1000+ karma, and didn't get an email... did everyone see it for some reason?

Comment by liam-goddard on The AI in a box boxes you · 2019-09-06T00:53:56.133Z · LW · GW

Can't you just pull the plug before it can run any simulations?

Comment by liam-goddard on Open & Welcome Thread - September 2019 · 2019-09-06T00:46:53.681Z · LW · GW

They were in black text.

Comment by liam-goddard on Open & Welcome Thread - September 2019 · 2019-09-05T20:38:49.717Z · LW · GW

I saw two more posts that I’ve already read. I have Unread Only checked. I think there’s some problem with the unread filter.

Comment by liam-goddard on Simulate and Defer To More Rational Selves · 2019-09-05T02:07:02.219Z · LW · GW

Several years ago, back before I deconverted and learned about Less Wrong, I sometimes used this without trying: I would pray to "God," and "God" would usually make better decisions than my intuitive judgements, not because of a higher power (it would be impossible to simulate a being more intelligent than myself) but because I was really simulating myself, minus several cognitive constraints. After I left religion, I stopped doing that, because I basically thought praying was beneath me, although now that I've read this post, I will start doing it again. But recently, I've been doing something similar: I simulate a being who has never encountered our universe before and explain to it various aspects of ordinary life, finding what does and doesn't make sense. There have been some interesting reactions, such as: "But why would they believe without evidence?" "They insist that they can rely on faith-" "Don't use the f-word!" or "You're telling me that people decide who they are going to spend their entire lives with based on WHO THEY WANT TO HAVE SEX WITH?" It can be pretty helpful.

Comment by liam-goddard on The rational rationalist's guide to rationally using "rational" in rational post titles · 2019-09-04T21:30:10.288Z · LW · GW

This post was very rational.

Comment by liam-goddard on Open & Welcome Thread - September 2019 · 2019-09-03T23:20:37.160Z · LW · GW

I checked: it was already on, and I turned it off and back on. The post isn't currently showing up.

Comment by liam-goddard on Open & Welcome Thread - September 2019 · 2019-09-03T20:07:40.026Z · LW · GW

I’m on my phone and I can’t find the complain button, so I’ll post this here, since someone on the LW team has to see it. In the three recommended posts at the top of the home screen, I saw The Best Textbooks on Every Subject by lukeprog, and read it. A few hours later, I saw the same article, back in the recommended posts again, supposedly unread. I clicked on it and scrolled through it so I could remove it from the recommended posts. It came back again. And again. Can someone help?

Comment by liam-goddard on The Unfinished Mystery of the Shangri-La Diet · 2019-08-28T02:27:54.491Z · LW · GW

Maybe just find a lot of reputable diets that might work, start on one, and then if you start regaining weight, switch to another. I'm not sure this would work, but it could probably get around the problem of getting used to any one diet.

Comment by liam-goddard on What's In A Name? · 2019-08-26T20:42:44.992Z · LW · GW

My brain is reacting to this information with extreme shock. “That cannot possibly be true,” says my brain. “No one could ever be subject to that effect, no one with even the slightest shred of sanity...” Then my brain remembers what humanity is like. “Oh, wait, yeah, that seems pretty likely.”

Comment by liam-goddard on Chapter 45: Humanism, Pt 3 · 2019-08-20T23:16:16.666Z · LW · GW

There's no guarantee that death will be destroyed. If we make Unfriendly AI, then humanity is gone. If we start a nuclear war, then humanity is gone. As Eliezer discusses here: https://www.youtube.com/watch?v=D6peN9LiTWA, while we might hope to do whatever we can to stop death, and while we might have that as our end goal, that does not justify a belief that we will succeed.

Comment by liam-goddard on Chapter 21: Rationalization · 2019-08-20T23:02:40.273Z · LW · GW

On hpmor.com, when HPMOR was separated into six PDFs, this was the final chapter of Book One of HPMOR... is it supposed to be that way or not?

Comment by liam-goddard on Dark Arts of Rationality · 2019-07-02T23:08:38.187Z · LW · GW

How are you supposed to do this? I know that it could be useful in many situations, but after reading the Sequences and looking at CFAR resources, I'm not able to doublethink. If I find that a fact is true, I can refuse to think about its truth, I can act as if it weren't true to a certain degree, but I can't actually bring myself to change my beliefs without evidence even when it's better to believe a lie. How are we supposed to use the Dark Arts?

Comment by liam-goddard on Zombies! Zombies? · 2019-06-26T23:47:32.493Z · LW · GW

If the zombies are writing these consciousness papers, then they would have to have our beliefs, and they would strongly believe that THEY were conscious. So how do we know that we’re conscious? If we weren’t, we would still think we were, so there’s really no way to determine if we’re actually the zombies.

Comment by liam-goddard on My Wild and Reckless Youth · 2019-06-23T01:13:15.463Z · LW · GW

While the guess that seems to have the highest probability is the most important to test, anything with a moderately high probability should be tested too, as long as it doesn't take up too many resources. This is particularly important when experiments take a long time: if Hypothesis A is more likely than Hypothesis B, but testing either would take 3 years, you don't want to test only A and risk wasting 3 years if it's wrong, when you could test both at the same time and learn whether either was correct.

Comment by liam-goddard on The Right to be Wrong · 2019-06-22T23:44:02.747Z · LW · GW

It's probably best not to update based on expertise. Even though that would usually improve accuracy, since experts are more likely to be right than chance or than most people's opinions, it stops anyone from forming anti-expert opinions. Accuracy isn't as important as discovery, and the only way anyone discovers anything new is by finding ideas that seem probable despite disagreeing with the experts; if you update too much just because of who believes something, you'll very rarely make scientific progress.

Comment by liam-goddard on The LessWrong Team · 2019-06-15T21:24:48.343Z · LW · GW

What about Eliezer? He founded Less Wrong, so why isn't he part of the team anymore?

Comment by liam-goddard on Welcome and Open Thread June 2019 · 2019-06-11T22:10:22.348Z · LW · GW

I was wondering: what happened on June 16, 2017? Most of the users on Less Wrong, including Eliezer, seem to have "joined" at that point, but Less Wrong was created on February 1, 2009, and I've seen posts from before 2017.

Comment by liam-goddard on 2017 LessWrong Survey · 2019-06-03T18:13:05.279Z · LW · GW

Is there a 2018 or 2019 survey anywhere? I tried to find it, and I've seen some things from both you and Yvain, but I can't find any surveys past this one.

Comment by liam-goddard on Five Planets In Search Of A Sci-Fi Story · 2019-06-02T01:04:08.807Z · LW · GW

Zyzzx Prime could always do either:

1. No rulers; every single member votes on every issue

or

2. Select scientists (not leading scientists, of course, just average ones) and have them work on genetic engineering. No one can know who they are, and they work at minimum wage. (Of course, it could be hard to convince them to do this.)

Comment by Liam Goddard on [deleted post] 2019-05-28T01:59:44.740Z

From what I've seen, most people seem to argue two-box, and the one-boxers usually just say that Omega needs to think you'll be a one-boxer, so precommit even if it later seems irrational... I haven't seen this exact argument yet, but I might have just not read enough.

Comment by Liam Goddard on [deleted post] 2019-05-26T20:25:59.918Z

.

Comment by liam-goddard on Yudkowsky's brain is the pinnacle of evolution · 2019-05-26T17:27:04.693Z · LW · GW

You do realize that other people work on AI? Sure, Eliezer might be the most important, but he is not the only member of MIRI's team. I'd definitely sacrifice several people to save him, but nowhere near 3^^^3. Eliezer's death would delay the Singularity, not stop it entirely, and certainly not destroy the world.

Comment by liam-goddard on How would you take over Rome? · 2019-05-24T21:37:38.398Z · LW · GW

Use your wonderful "inventions" and knowledge about the "future" to show your amazing powers. Then explain to them that you are Mercury, god of many different things, including some forms of prophecy. But just as Jupiter once did to Neptune and Apollo (who had tried to overthrow him), Jupiter has now sent you down to Earth in the form of a human to work off a debt, since you have committed a grave crime against him.

As Mercury, you are assigned by Jupiter to serve the Emperor of Rome. Continue to impress them, and as they worship you, gain power and strength in the society. Also, use your modern rationality/science to advise the Emperor until you control most of his decisions, leaving him as merely a puppet while you receive most of the praise and make most of the actual laws of Rome.

While you are gaining power, you are also trusted by the Emperor and manage to steal money. Even if you are caught (which, ideally, you aren't), they would never dare beat or kill a god, and it wouldn't hurt your image as "Mercury"; after all, one of the things he's best known for is being the god of thieves. Eventually, you start bribing officials to help you. You build trust with the leaders of Rome.

When the Emperor is "mysteriously assassinated," you, Mercury (prophet, inventor, god, nobleman, wise, skilled at rulership, wealthy, trusted, high-ranking, and adored), become his replacement. If anyone asks why a servant is to become Emperor, you tell them that your orders were to serve the government of Rome and its people, and what better way to do that than to rule it in a way that makes life better for the people? Especially after you make some donations from the treasures of Rome to appease the groups that the questioners belong to, and have the remaining questioners killed for blasphemy.

You are the Emperor of Rome.


I know this solution requires a lot of luck, and could be foiled, but it seems to me that impersonating a god would be the best option.

Comment by liam-goddard on Beautiful Probability · 2019-05-22T22:04:23.526Z · LW · GW

The two experiments would differ. In Experiment 1, we have now received evidence of a 70% probability of a cure. However, Experiment 2 doesn't offer the same evidence, because it stops as soon as it gets significantly over 60%. Because of the randomness of results, it will not always match the true probability. If the real probability were 70%, wouldn't it most likely have reached 70% by 7 out of 10, or 14 out of 20? For most of Experiment 2, less than 60% of the patients were cured. The fact that by 100 patients it happened to go up was most likely a fluke in the data, and if the experiment were continued it would probably drop back below 60%.
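
A rough Monte Carlo sketch of the stopping-rule worry described above. The stopping criterion below (at least 10 patients and an observed rate of 70% or more) is only a stand-in assumption for "significantly over 60%," but it illustrates how stopping on a high reading inflates the reported cure rate:

```python
import random

def average_reported_rate(true_p=0.6, max_n=100, trials=10_000, seed=0):
    """Average cure rate reported when the experimenter stops as soon as
    the observed rate looks comfortably above 60%."""
    rng = random.Random(seed)
    reported = []
    for _ in range(trials):
        cures = 0
        for n in range(1, max_n + 1):
            cures += rng.random() < true_p
            if n >= 10 and cures / n >= 0.7:
                break  # stop early on a high reading
        reported.append(cures / n)
    return sum(reported) / len(reported)

print(average_reported_rate())  # typically noticeably above the true rate of 0.6
```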

Comment by liam-goddard on "I don't know." · 2019-05-14T20:54:25.128Z · LW · GW

Just say, "I'm not able to assign a very high probability to any possibility, since I don't have very much information, but the possibility that I would assign the highest probability to is the tree having ___ to ___ apples, with a probability of ___%." You don't know how many there are, but you can still admit that you don't know while assigning a probability.

Comment by liam-goddard on Chapter 1: A Day of Very Low Probability · 2019-05-10T20:21:33.175Z · LW · GW

Um... what do all of those comments mean? Also, I’m wondering how Harry became so smart. I know part of it was from [Spoiler from Book Six] but that really wouldn’t have been enough, even combined with science. Why is it that Harry was able to think rationally and create a test, but Michael wasn’t even willing to consider the idea?

Comment by liam-goddard on Pretending to be Wise · 2019-04-29T02:15:33.982Z · LW · GW

Argument is of course a good thing among rational people, since refusing to argue and agreeing to disagree solves nothing: you won't come to any agreement and you won't know what's right. But I think the reason many people see argument as a bad thing is that most people are too stubborn to admit they are wrong, so argument among most people is pointless, because one or both sides is unwilling to actually debate. If people admitted they were wrong, argument wouldn't be treated as such a bad thing, but as it is, with no one willing to see truth, it often ends up accomplishing nothing.

Comment by liam-goddard on Planning Fallacy · 2019-04-29T01:49:35.363Z · LW · GW

Apart from planning, optimism seems to be a problem in many situations. Since reading this article and others, I've tried to correct my incorrect beliefs: whenever I notice myself believing “this scenario is how I want it to be,” I immediately take it as a warning sign and reevaluate the belief, and most of the time I've been too optimistic. I remember earlier in my life, in fourth grade, being positive that a certain person I had a crush on liked me. I overheard a conversation in which she stated that she liked someone else. I went over why I had believed it and realized I had had absolutely zero evidence of anything. My “intuition” had told me what I thought was right.

Intuition is insanely biased. Whatever you think, it's probably far too optimistic unless you evaluate the probability from the outside view, find an estimate that seems accurate, and then chop it in half.

Comment by liam-goddard on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2019-04-13T20:20:46.893Z · LW · GW

I think that since so few people have even heard of Glomarization or meta-honesty, they'll be too suspicious; it's better to just say you haven't done it. Now, to everyone here or on other websites who knows about these things and rationality, or to a Gestapo soldier who knows I know about this, I would Glomarize. If one of you asked me whether I had robbed a bank, I would tell you I couldn't answer that because of its effect on my counterfactual selves. If anyone else, who didn't know about Glomarization, asked me whether I had robbed a bank, I would tell them I hadn't. I mean, imagine being a police officer, going to a suspect's house, asking if they had robbed a bank, and hearing "I refuse to answer that question." They would take that as a confession.