Comments

Comment by Olivier Faure (olivier-faure) on Looking back on my alignment PhD · 2022-07-13T09:14:42.128Z · LW · GW

I really appreciate you for writing how despite verbally agreeing that modesty is unproductive, you nevertheless never judged high-status people as dumb. That's totally the kind of noticing/Law I imagine we need more of. And I also imagine this is the sort of mindset Eliezer is looking for - the mindset where you figure those things out unprompted, without an Eliezer there to correct you.

I find this comment kind of aggravating.

I'll claim that the very mindset you mention starts with not taking Eliezer at face value when he half-implies he's the only person producing useful alignment research on earth, and that his ability to write an angry rant about how hopeless it all is proves that everyone else is a follower drone because they didn't write the rant first.

Like, I think Eliezer deserves a lot of respect, and I'm aware I'm caricaturing him a bit, but... not that much?

I don't even think I disagree with you in substance. The mindset of thinking for yourself is useful, etc. But part of that mindset is to not unironically quote everything Eliezer says about how smart he is.

Comment by Olivier Faure (olivier-faure) on Looking back on my alignment PhD · 2022-07-13T09:05:39.643Z · LW · GW

But my guess is that studying applied math and CS would have been better for me per hour than studying science, and the reason I spent that time learning science was largely because I think it's exciting and cool rather than because I endorse it as a direct path to knowing things that are useful for doing alignment research

Strong upvote for this.

Doing things you find fun is extremely efficient. Studying things you don't like is inefficient, no matter how useful these things may turn out to be for alignment or x-risk.

Comment by Olivier Faure (olivier-faure) on Looking back on my alignment PhD · 2022-07-13T08:55:28.342Z · LW · GW

I know we're not supposed to optimize for not sounding like a cult, but holy crap, the cult vibe is strong with this remark.

(And yes, I understand that "dignity" is meant to be a shorthand for "behaving in a way that improves humanity's long term chances of survival". It's still a sentence that implies unhealthy social dynamics, even with that framing.)

Comment by Olivier Faure (olivier-faure) on What DALL-E 2 can and cannot do · 2022-05-16T07:50:54.246Z · LW · GW

Regarding text, if the problem comes from the encoding, does that mean the model does better with individual letters and digits? E.g. (see the sketch after these examples):

"The letter A"

"The letters X Y and Z"

"Number 8"

"A 3D rendering of the number 5"
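
For what it's worth, here's a rough sketch of how one might actually run that test, assuming API access to the model. The `openai` client, the "dall-e-2" model name, and the sample counts are my assumptions, not anything from the post:

```python
# Hypothetical test harness: generate a few images per single-glyph
# prompt and eyeball whether isolated letters/digits render more
# cleanly than full words. Model name and client are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "The letter A",
    "The letters X Y and Z",
    "Number 8",
    "A 3D rendering of the number 5",
]

for prompt in prompts:
    result = client.images.generate(
        model="dall-e-2",   # assumed model identifier
        prompt=prompt,
        n=4,                # a few samples per prompt, to judge consistency
        size="256x256",
    )
    for image in result.data:
        print(prompt, "->", image.url)  # inspect the renders by hand
```

A handful of samples per prompt should be enough to see whether single glyphs survive the encoding better than whole words do.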

Comment by Olivier Faure (olivier-faure) on GPT-3 and concept extrapolation · 2022-04-20T19:55:11.684Z · LW · GW

It feels like a "gotcha" rebuke, but it honestly doesn't seem like it really addresses the article's point. Unless you think GPT-3 would perform better if given more time to work on it?

Comment by Olivier Faure (olivier-faure) on More GPT-3 and symbol grounding · 2022-04-20T19:42:15.576Z · LW · GW

For that prompt "she went to work at the office" was still the most common completion. But it only happened about  of the time. Alternatively, GPT-3 sometimes found the completion "she was found dead". Kudos, GPT-3, you understand the prompt after all! That completion came up about  of the time.

Does it really understand, though? If you replace the beginning of the prompt with "She died on Sunday the 7th", does it change the probability that the model outputs "she was found dead"?
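
A minimal sketch of that comparison, assuming API access to a GPT-3-style completions model (the prompt wordings and model name below are my placeholders, not the article's actual prompt):

```python
# Hypothetical control experiment: sample many completions for two
# prompt variants and compare how often "found dead" shows up.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

N = 50  # completions sampled per prompt variant
prompts = {
    # Placeholder stand-in for the article's prompt (death only implied):
    "original": "She had a terrible accident on Sunday. On Monday,",
    # The suggested variant, with the death made explicit:
    "control": "She died on Sunday the 7th. On Monday,",
}

for label, prompt in prompts.items():
    response = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in for the original davinci model
        prompt=prompt,
        n=N,
        max_tokens=12,
        temperature=1.0,
    )
    hits = sum("found dead" in choice.text.lower() for choice in response.choices)
    print(f"{label}: 'found dead' in {hits}/{N} completions")
```

If the rate barely moves between variants, the "she was found dead" completions look more like a base-rate artifact of the phrasing than evidence that the model is tracking who is alive.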

Comment by Olivier Faure (olivier-faure) on Self-Integrity and the Drowning Child · 2021-10-31T09:14:36.170Z · LW · GW

From previous posts about this setting, the background assumption is that the child almost certainly won't permanently die if it takes 15 seconds longer to reach them.

Sure, whatever.

Honestly, that answer makes me want to engage with the article even less. If the idea is that you're supposed to know about an entire fanfiction-of-a-fanfiction canon to talk about this thought experiment, then I don't see what it's doing in the Curated feed.

Comment by Olivier Faure (olivier-faure) on Self-Integrity and the Drowning Child · 2021-10-28T09:13:31.723Z · LW · GW

I reject the parable/dilemma for another reason: in the majority of cases, I don't think it's ethical to spend so much money on a suit that you would legitimately hesitate to save a drowning child if it put the suit at risk?

If you're so rich that you can buy tailor-made suits, then sure, go save the child and buy another suit. If you're not... then why are you buying super-expensive tailor-made suits? I see extremely few situations where keeping the ability to play status games slightly better would be worth more than saving a child's life.

(And yes, there are arguments about "near vs far" and how you could spend your money saving children in poor countries instead, but other commenters have already pointed out why one might still value a nearby child more. Also, even under that lens, the framework "don't spend more money than you can afford on status signals" still holds.)

Comment by Olivier Faure (olivier-faure) on The Best Software For Every Need · 2021-09-20T09:28:22.754Z · LW · GW

As a counterpoint, here's my experience with NixOS: https://poignardazur.github.io/2021/09/08/nixos-post-mortem/

Comment by Olivier Faure (olivier-faure) on I'm from a parallel Earth with much higher coordination: AMA · 2021-04-08T13:13:35.007Z · LW · GW

For example I think many Muslim countries have a lot of success at preventing pornography

Citation needed.

My default assumption for any claims of that sort is "they had a lot of success at concealing the pornography that existed in such a way that officials can pretend it doesn't exist".

Comment by Olivier Faure (olivier-faure) on I'm from a parallel Earth with much higher coordination: AMA · 2021-04-08T03:07:17.539Z · LW · GW

This was fun to read, but also a little awkward. This feels less like "The world if everyone were an economist" and more like "The world if everyone agreed with Eliezer Yudkowsky about everything".

Some thoughts:

  • I don't care how strong your social norms are, you're not enforcing that pornography ban. Forget computers, it's unworkable as long as people have paper.

  • Same thing with sad people not reproducing. People would go "fuck social norms" and have kids anyway. People who respect the norms would be pushed out of the gene pool. I don't see how you could enforce those norms without totalitarian violence.

  • I don't see how you could have both a self-repairing culture of transparency and also a completely secret conspiracy that suppresses technological development (in a free market with its own evolutionary pressures) without anyone realizing it. The company that makes the fastest computers drives everyone else out of business. You can only stop Moore's law if everyone coordinates to not build better computers, but that's not subtle.

I'm not sure EY missed that (the guy is usually really good with this stuff), so maybe the joke is that an AGI already took over their world or something.