Posts

The Lottery Paradox 2021-01-31T21:41:50.320Z
Where Did You Hear This: How Priming Can Affect Your Everyday Life 2020-12-09T18:40:06.983Z
What are LessWrong Meetups? 2020-10-07T14:36:55.099Z
Allowing Exploitability in Game Theory 2020-05-17T23:19:20.848Z
Could We Give an AI a Solution? 2020-05-15T21:38:33.581Z
Multiple Moralities 2019-11-03T17:06:34.374Z

Comments

Comment by Liam Goddard on Why We Launched LessWrong.SubStack · 2021-04-01T15:35:28.932Z · LW · GW

This is hilarious, but how do we access these "exclusive posts" without paying thousands of dollars?

Comment by Liam Goddard on Against neutrality about creating happy lives · 2021-03-15T20:00:08.508Z · LW · GW

I understand your point, but I'm not sure your position is consistent. From a consequentialist standpoint, valuing new life is usually problematic because of how it affects everyone else. The mere addition paradox is a thought experiment showing that if you're willing to make even the tiniest sacrifice to create a new, slightly happy person, then repeating the process implies it is moral to replace a small society of joyful people with a large society with very little average happiness. Because of this, many ethicists would prefer not to create new people at all, since they use up resources and therefore decrease the happiness of those who already exist. Would you be willing to create slightly happy people if it sacrificed utility in the lives of those who are already there?

Comment by Liam Goddard on Deflationism isn't the solution to philosophy's woes · 2021-03-10T03:29:37.154Z · LW · GW

Interesting, I didn't realize there were this many people on the site. How many users have written posts?

Comment by Liam Goddard on [deleted post] 2021-03-08T15:36:12.140Z

.

Comment by Liam Goddard on Elephant seal · 2020-12-21T14:27:59.019Z · LW · GW

I'm noticing that you're making a lot of posts that are very off-topic. What does an elephant seal, or opening a thesaurus, or finding out your ancestry, have to do with rationality? These would probably be better suited to shortform posts or a personal blog outside of LessWrong.

Comment by Liam Goddard on Where Did You Hear This: How Priming Can Affect Your Everyday Life · 2020-12-11T00:00:07.113Z · LW · GW

Could you explain which research you're referring to?

Comment by Liam Goddard on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-05T16:20:07.616Z · LW · GW

It implies the writing is bad- GPT-3 isn't exactly the best author.

Comment by Liam Goddard on Luna Lovegood and the Chamber of Secrets - Part 4 · 2020-12-04T02:52:33.881Z · LW · GW

Normally I would say you were being rude, but the last time I saw someone claim a post was GPT-written, they were absolutely correct, so I'm going to avoid passing judgement unless lsusr verifies who wrote it.

Comment by Liam Goddard on The Darwin Game - Rounds 21-500 · 2020-11-21T01:47:27.538Z · LW · GW

This has been a really interesting game! It's good to see I survived, although I barely made it to the end.

I assume that when you run the 'official' timeline, the arrival of AbstractSpyTreeBot will bring MeasureBot higher than it placed in this one, so MeasureBot will probably reach second place. But even with randomness as a factor, I doubt such a small change would disrupt EarlyBirdMimicBot's serious advantage. I think we can probably say Multicore is the winner.

I'd be interested in watching you continue the game past round 500- EarlyBirdMimicBot would most likely remain in first, and me in fourth, but I would want to see if things change between MeasureBot and BendBot. However, it might be a while before those rounds could be run- by this point we're seeing more alternate timelines than Doctor Strange. (Or the Guardians of the Universe, if you're more of a DC fan.)

Comment by Liam Goddard on The Darwin Game - Rounds 10 to 20 · 2020-11-17T15:22:30.348Z · LW · GW

LiamGoddard is an EquityBot. It plays 3232 on the first four rounds and then chooses the sequence for the rest of the game based on the opponent's moves in those first four rounds: if they played 2323, it continues playing 32323232; if they played 3232, it plays 232323; if they played 3333, it plays a pattern of 3s and 2s that makes sure they don't outperform cooperation while maximizing my score; and if they played something random, it just tries to keep cooperating. No matter what they played, the selected pattern continues for the rest of the game.

It is really simple, but I don't know how to code it myself, so I wanted to be sure that it was specified carefully. I also didn't realize at the time that simulators would be allowed. Nevertheless, it's reached fourth place, which is better than I had expected. Long live the Dark Lord Liam!
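
A rough sketch of what that strategy might look like in code (the class name and the move() interface here are made up for illustration- this isn't the actual tournament harness- and the constant-3 case follows one possible reading of "don't outperform cooperation"):

```python
class EquityBot:
    """Sketch of the strategy described above; the interface is hypothetical."""

    OPENING = [3, 2, 3, 2]

    def __init__(self):
        self.round = 0
        self.opponent_history = []
        self.pattern = None  # chosen once, then repeated for the rest of the game

    def move(self, opponent_previous=None):
        if opponent_previous is not None:
            self.opponent_history.append(opponent_previous)

        if self.round < 4:
            play = self.OPENING[self.round]  # fixed 3-2-3-2 opening
        else:
            if self.pattern is None:
                opening = self.opponent_history[:4]
                if opening == [2, 3, 2, 3]:
                    self.pattern = [3, 2]  # keep alternating opposite them
                elif opening == [3, 2, 3, 2]:
                    self.pattern = [2, 3]
                elif opening == [3, 3, 3, 3]:
                    # Mostly concede 2s with an occasional 3, so a constant-3
                    # opponent averages no more than cooperation would
                    # (one reading of the description above).
                    self.pattern = [2, 2, 2, 2, 2, 3]
                else:
                    self.pattern = [2, 3]  # default: keep trying to cooperate
            play = self.pattern[(self.round - 4) % len(self.pattern)]

        self.round += 1
        return play
```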

Comment by Liam Goddard on Meta-Honesty: Firming Up Honesty Around Its Edge-Cases · 2020-11-14T23:52:57.235Z · LW · GW

What's this about Inadequate Equilibria's publication?

Comment by Liam Goddard on [deleted post] 2020-11-03T00:42:52.316Z

I notice that there are a few other posts from the early days of LW that are repeated- maybe we should look through them.

Comment by Liam Goddard on [deleted post] 2020-11-03T00:41:46.541Z

This is a repeat post, and the links aren't working.

Comment by Liam Goddard on I'm confused. Could someone help? · 2020-11-03T00:39:00.782Z · LW · GW

This might apply if you replaced dollars with utility (as in Pascal's Mugging), but in this case the decreasing marginal value of dollars affects the deal. A 1/1,000,000 chance at 1,000,000 dollars isn't as valuable as a guaranteed dollar, because a million dollars, while valuable, isn't a million times better than one dollar.
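
A quick sketch of the arithmetic, using logarithmic utility and an illustrative $10,000 starting wealth (both are assumptions made up for the example, not anything from the original question):

```python
import math

def log_utility(wealth):
    # Logarithmic utility: each additional dollar is worth less than the last.
    return math.log(wealth)

wealth = 10_000  # illustrative starting wealth

gain_sure_dollar = log_utility(wealth + 1) - log_utility(wealth)
gain_gamble = ((1 / 1_000_000) * log_utility(wealth + 1_000_000)
               + (1 - 1 / 1_000_000) * log_utility(wealth)) - log_utility(wealth)

print(f"guaranteed $1:        {gain_sure_dollar:.7f}")  # ~0.0001000
print(f"1-in-a-million $1M:   {gain_gamble:.7f}")       # ~0.0000046
```

Under those assumptions the guaranteed dollar adds roughly twenty times as much expected utility as the gamble, even though both have the same expected dollar value.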

Comment by Liam Goddard on Covid Covid Covid Covid Covid 10/29: All We Ever Talk About · 2020-10-29T18:03:47.876Z · LW · GW

Could you put all of these COVID posts into a single sequence? I'd be interested in looking back at your analyses over time.

Comment by Liam Goddard on The Darwin Game · 2020-10-20T19:50:16.807Z · LW · GW

When does this start?

Comment by Liam Goddard on The Allais Paradox · 2020-10-12T14:18:25.542Z · LW · GW

I think an essential part of why people make such an irrational decision can be explained by thinking of the probabilities as frequencies. In problem one, 33 out of 34 possible versions of you will receive money, and you're willing to give up $3,000 to make sure that the 34th does as well. But in problem two, 33 out of 100 will receive money, and yet you're not willing to give up $3,000 to make sure that a 34th does. The bias here is essentially that people care more about certainty than about the actual probabilities.
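
For comparison, the raw expected dollar values (assuming the $24,000 and $27,000 amounts from the linked post) favor 1B and 2B, which is what makes the common 1A-plus-2B pattern inconsistent:

```python
# Expected dollar values for the two Allais problems
# (amounts assumed from the linked post: $24,000 vs. $27,000).
ev_1a = 1.00 * 24_000        # certain $24,000
ev_1b = (33 / 34) * 27_000   # ~$26,206
ev_2a = 0.34 * 24_000        # $8,160
ev_2b = 0.33 * 27_000        # $8,910

print(ev_1a, ev_1b)  # 1B has the higher expected value
print(ev_2a, ev_2b)  # 2B has the higher expected value

# Problem two is just problem one with every probability scaled down by 0.34,
# so a consistent agent should rank both pairs the same way.
```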

Comment by Liam Goddard on All Lesswrong Posts by Yudkowsky in one .epub · 2020-10-11T18:57:52.926Z · LW · GW

Thanks, this is great.

Comment by Liam Goddard on Forcing Freedom · 2020-10-11T17:13:34.221Z · LW · GW

You raise some excellent points, and I agree for the most part; preferences don't necessarily correspond to happiness. But one thing I still stand behind, although it might not exactly follow the original thought experiment: if a person is fully educated and entirely understands the situation, including what their original utility function was and all of the possible alternatives, and they still choose to be enslaved, they shouldn't be forced into freedom.

Comment by Liam Goddard on Forcing Freedom · 2020-10-07T14:02:24.125Z · LW · GW

I would assume preference utilitarianism might be the best in this case; that is, free only those who want to be freed. Reminds me of SPEW from Harry Potter. If they truly wish to be slaves, and they would prefer that to freedom, then let them. However, if they are forced to work for their masters against their true will, then they should be freed. As such, I would support:

First: Leave them alone. They want to be slaves, and as such, it's of no benefit to free them, and in fact it would be morally wrong.

Second: Leave them alone. The idea is that through hypnotism, they are transformed into completely different people, who are nevertheless happy. The choice between freedom and slavery is just a decision of which happy people to keep- and freedom would require a great amount of resources to be spent.

Third: Free them, but the means of doing this are complicated; they should be freed through an Intergalactic Council attack, but there aren't enough resources. I think the best idea would be to create enough fear among the masters that an uprising will occur, so that the chances of winning the lottery rise until the masters have to free most of the people on their own each week.

Fourth: Complicated. If you can devise a better plan to free them that works quickly, then implement it, but otherwise, the people do desire to be slaves, so probably just leave them alone.

Fifth: Leave them alone. The people aren't perfectly happy, but they do desire to remain here, and in fact their lives will be much better if they remain in slavery. If there is some way to rehabilitate the people into a better natural lifestyle, then suggest it, but if there's nothing feasible, just let them be.

Sixth: There are clearly much more important problems than what should happen on this planet. Kill yourself, but make sure that the galaxy finds out about the Lords of Arlak first.

Comment by Liam Goddard on The Felt Sense: What, Why and How · 2020-10-06T15:49:05.660Z · LW · GW

A lot of the time I associate objects, just by thinking about them, with hot or cold feelings in my mind, and while this sometimes follows actual temperature, for the most part it's completely independent of that- for example, cold includes almost all liquids, no matter what the temperature, while hot includes most, but not all, animals. There are a few other unconscious feelings that I associate with things, such as emotions, cleanliness, or even gender, even when these don't apply to objects at all. While these "felt senses" definitely have some correlation with reality, I think a lot of the time they can bias us in ways that have no rational basis- for example, phobias of harmless objects, or liking someone as a person based on what they look like. Noticing when you're thinking about something based on "felt senses" can help you figure out why you feel what you feel about a certain thing.

Comment by Liam Goddard on The Adventure: a new Utopia story · 2020-08-27T18:12:38.194Z · LW · GW

Incredible. This seems genuinely Utopian and a lot like the sort of universe that I would consider perfect. The challenge is making it come true...

Comment by Liam Goddard on Movable Housing for Scalable Cities · 2020-05-15T21:48:15.053Z · LW · GW

Are you trying to turn Earth into dath ilan?

Comment by Liam Goddard on Multiple Moralities · 2019-11-03T19:06:17.172Z · LW · GW

All of this makes a lot of sense when it comes to rules for society. And I understand that certain forms of government, or certain laws, would be effective for almost any utility function. What I’m questioning isn’t how you achieve your goals; it’s where the goals themselves come from- your terminal values.

Comment by Liam Goddard on [Question] When Do Unlikely Events Should Be Questioned? · 2019-11-03T17:19:06.276Z · LW · GW

The quote comes from HPMOR, when Harry gets his wand and it shares Voldemort’s core.

Comment by Liam Goddard on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2019-10-25T13:18:25.517Z · LW · GW

Due to the magic resonance, Quirrell can’t cast spells on Harry, so the test wasn’t faked.

Comment by Liam Goddard on A rational unfalsifyable believe · 2019-10-25T11:26:48.388Z · LW · GW

I definitely think that “Adam did not kill him” would be an accurate and rational belief, but there still would technically be some evidence that could convince her otherwise (such as by using a time machine). Therefore the probability she assigns to that belief should not be quite 100%, though very close. But another important point about falsifiability is that she COULD have been convinced otherwise: had there been different evidence, she would have thought Adam to be the killer. The most important thing here is that beliefs are probabilistic. It is quite possible for a perfect Bayesian to believe something and to think it possible that they could encounter evidence which persuades them otherwise. Eve should hold a high, but not 100%, probability that Adam was innocent. I don’t see how any of this could apply to theism, though, since theism isn’t founded on much evidence.

Comment by Liam Goddard on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T01:49:34.592Z · LW · GW

Do you really think he would care enough about three hours of LessWrong shutdown to write the chapter?

Comment by Liam Goddard on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-26T15:05:32.272Z · LW · GW

The button appeared on my screen, but I'm new to Less Wrong, definitely do not have 1000+ karma, and didn't get an email... did everyone see it for some reason?

Comment by Liam Goddard on The AI in a box boxes you · 2019-09-06T00:53:56.133Z · LW · GW

Can't you just pull the plug before it can run any simulations?

Comment by Liam Goddard on Open & Welcome Thread - September 2019 · 2019-09-06T00:46:53.681Z · LW · GW

They were in black text.

Comment by Liam Goddard on Open & Welcome Thread - September 2019 · 2019-09-05T20:38:49.717Z · LW · GW

I saw two more posts that I’ve already read. I have Unread Only checked. I think there’s some problem with the unread filter.

Comment by Liam Goddard on Simulate and Defer To More Rational Selves · 2019-09-05T02:07:02.219Z · LW · GW

Several years ago, back before I deconverted and learned about Less Wrong, I sometimes used this without trying- I would pray to "God," and "God" would usually make better decisions than my intuitive judgements- not because of a higher power (it would be impossible to simulate a being more intelligent than myself) but because I was really simulating myself, minus several cognitive constraints. After I left religion, I stopped doing that, because I basically thought praying was beneath me- although now that I've read this post, I will start doing it again. But recently, I've been doing something similar: I simulate a being who has never encountered our universe before and explain to it various aspects of ordinary life, finding what does and doesn't make sense. There have been some interesting reactions, such as: "But why would they believe without evidence?" "They insist that they can rely on faith-" "Don't use the f-word!" or "You're telling me that people decide who they are going to spend their entire lives with based on WHO THEY WANT TO HAVE SEX WITH?" It can be pretty helpful.

Comment by Liam Goddard on The rational rationalist's guide to rationally using "rational" in rational post titles · 2019-09-04T21:30:10.288Z · LW · GW

This post was very rational.

Comment by Liam Goddard on Open & Welcome Thread - September 2019 · 2019-09-03T23:20:37.160Z · LW · GW

I checked- it was on already, and I turned it off and back on. The post isn't currently showing up.

Comment by Liam Goddard on Open & Welcome Thread - September 2019 · 2019-09-03T20:07:40.026Z · LW · GW

I’m on my phone and I can’t find the complain button, so I’ll post this here, since someone on the LW team has to see it. In the three recommended posts at the top of the home screen, I saw The Best Textbooks on Every Subject by lukeprog, and read it. A few hours later, I saw the same article, back in the recommended posts again, supposedly unread. I clicked on it and scrolled through it so I could remove it from the recommended posts. It came back again. And again. Can someone help?

Comment by Liam Goddard on The Unfinished Mystery of the Shangri-La Diet · 2019-08-28T02:27:54.491Z · LW · GW

Maybe just find a lot of reputable diets that might work, start on one, and then if you start regaining weight, switch to something else. Not sure if this would work, but it could probably get around the problem of your body getting used to any one diet.

Comment by Liam Goddard on What's In A Name? · 2019-08-26T20:42:44.992Z · LW · GW

My brain is reacting to this information with extreme shock. “That cannot possibly be true,” says my brain. “No one could ever be subject to that effect, no one with even the slightest shred of sanity...” Then my brain remembers what humanity is like. “Oh, wait, yeah, that seems pretty likely.”

Comment by Liam Goddard on Chapter 45: Humanism, Pt 3 · 2019-08-20T23:16:16.666Z · LW · GW

There's no guarantee that death will be destroyed. If we make Unfriendly AI, then humanity is gone. If we start nuclear war, then humanity is gone. As Eliezer discusses here: https://www.youtube.com/watch?v=D6peN9LiTWA while we might hope to do whatever we can to stop death, and while we might have that as our end goal, that does not justify a belief that we will succeed.

Comment by Liam Goddard on Chapter 21: Rationalization · 2019-08-20T23:02:40.273Z · LW · GW

On hpmor.com, where HPMOR was separated into six PDFs, this was the final chapter of Book One... is it supposed to be that way or not?

Comment by Liam Goddard on Dark Arts of Rationality · 2019-07-02T23:08:38.187Z · LW · GW

How are you supposed to do this? I know that it could be useful in many situations, but after reading the Sequences and looking at CFAR resources, I'm not able to doublethink. If I find that a fact is true, I can refuse to think about its truth, I can act as if it weren't true to a certain degree, but I can't actually bring myself to change my beliefs without evidence even when it's better to believe a lie. How are we supposed to use the Dark Arts?

Comment by Liam Goddard on Zombies! Zombies? · 2019-06-26T23:47:32.493Z · LW · GW

If the zombies are writing these consciousness papers, then they would have to have our beliefs, and they would strongly believe that THEY were conscious. So how do we know that we’re conscious? If we weren’t, we would still think we were, so there’s really no way to determine if we’re actually the zombies.

Comment by Liam Goddard on My Wild and Reckless Youth · 2019-06-23T01:13:15.463Z · LW · GW

While the guess that seems to have the highest probability is the most important to test, anything with a moderately high probability should also be tested, as long as it doesn't take up too many resources. This is particularly important when experiments take a long time- if Hypothesis A is more likely than Hypothesis B, but testing either would take 3 years, you don't want to test only A and risk wasting 3 years when you could test both at the same time and determine whether either was correct.

Comment by Liam Goddard on The Right to be Wrong · 2019-06-22T23:44:02.747Z · LW · GW

It's probably best not to update based on expertise. Even though updating toward the experts would usually improve accuracy- experts are more likely to be right than chance, or than most people's opinions- it stops anyone from forming anti-expert opinions. Accuracy isn't as important as discovery, and the only way anyone can discover anything new is to find things that seem probable despite disagreeing with the experts; if you update too much just because of who believes something, you'll very rarely make any scientific progress.

Comment by Liam Goddard on The LessWrong Team · 2019-06-15T21:24:48.343Z · LW · GW

What about Eliezer? He founded Less Wrong- why isn't he part of the team anymore?

Comment by Liam Goddard on Welcome and Open Thread June 2019 · 2019-06-11T22:10:22.348Z · LW · GW

I was wondering- what happened on June 16, 2017? Most of the users on Less Wrong, including Eliezer, seemed to have "joined" at that point, but Less Wrong was created on February 1, 2009, and I've seen posts from before 2017.

Comment by Liam Goddard on 2017 LessWrong Survey · 2019-06-03T18:13:05.279Z · LW · GW

Is there a 2018 or 2019 survey anywhere? I tried to find it, and I've seen some things from both you and Yvain, but I can't find any surveys past this one.

Comment by Liam Goddard on Five Planets In Search Of A Sci-Fi Story · 2019-06-02T01:04:08.807Z · LW · GW

Zyzzx Prime could always do either:

1. No rulers; every single member votes on every issue

or

2. Select scientists (not leading scientists, of course, just average ones) and have them work on genetic engineering. No one can know who they are, and they work at minimum wage. (Of course, it could be hard to convince them to do this.)

Comment by Liam Goddard on [deleted post] 2019-05-28T01:59:44.740Z

From what I've seen, most people seem to argue two-box, and the one-boxers usually just say that Omega needs to think you'll be a one-boxer, so precommit even if it later seems irrational... I haven't seen this exact argument yet, but I might have just not read enough.

Comment by Liam Goddard on [deleted post] 2019-05-26T20:25:59.918Z

.