My Recollection of How This All Got Started

post by Gordon Seidoh Worley (gworley) · 2022-04-06T03:22:48.988Z · LW · GW · 12 comments


I've told this story to various folks one-on-one. They usually want to know something like "how did you get into AI safety" or "how did you get into EA". And although I expect to keep telling it one-off, I'll write it down for those of you I'll never get to meet.

Why should you read it? Because my story is partially the story of how all this got started: LessWrong, AI safety, EA, and so on. I'm not saying it's the whole story; I'm just saying I've been hanging around what I think of as this community for over 20 years now, so my story is one facet of how we got here.


My story starts in the late 90s. I was hanging around this mailserv called "extropians". How I ended up there I don't recall, but my best guess is I wandered over directly or indirectly from something I found on Slashdot. I'm pretty sure nanobots or cryonics were involved.

This guy Eli-something-or-other wrote some posts about how most people are unable to think coherently about future tech that's too many "shock levels" above what they already know about. He started a mailing list that split off from extropians to talk coherently about the most shocking stuff: the Shock Level 4 stuff.

Thus the community began to come into existence with the creation of the SL4 mailserv.

We talked about all kinds of wild ideas on there, but the big one was AGI. An important topic in those days was figuring out if AGI was default safe or dangerous. Eliezer said default dangerous and made lots of arguments that this was the case. I was eventually convinced. Some remain unconvinced to this day.

Big names I remember from this time include Hal Finney, Christine Peterson, Max More, Eric Drexler, Ben Goertzel, and Robin Hanson. I'm not sure who was on the list and who wasn't. I also remember some other names, but we weren't among the big players.

The list started to die down after a couple of years, but around this time we started hanging out on IRC. It was a lot of fun, but a huge time suck. This helped bring the community together in real time, but everyone was still physically spread out. Somewhere along the way the Singularity Institute started.

Around this time Eliezer started to get really into heuristics and biases and Bayes' Theorem, claiming it was the secret of the universe or something. After I studied a bunch of information theory and thermodynamics I basically believed it, although I still prefer to think in the cybernetic terms I picked up from my engineering education. We also all got interested in quantum physics and evolutionary psychology and some other stuff.

Eliezer was really on about building Friendly AI and had been since about the start of the SL4 mailing list. What that meant got clearer over time. What also got clearer is that we were all too stupid to help even though many of us were convinced AGI was going to be dangerous (I remember a particular exchange where Eliezer got so frustrated at my inability to do some basic Bayesian reasoning that he ended up writing a whole guide to Bayes' Theorem). Part of the problem seemed to be not that we lacked intelligence, but that we just didn't know how to think very good. Our epistemics were garbage and it wasn't very clear why.

Eliezer went off into the wilderness and stopped hanging out on IRC. The channel kind of died down and went out with a whimper. I got busy doing other stuff but tried to keep track of what was happening. The community felt like it was in a lull.

Then Overcoming Bias started! Eliezer and Robin posted lots of great stuff! There was an AI foom debate. Sequences of posts were posted. It was fun!

Then at some point LessWrong started. Things really picked up. New people emerged in the community. I found myself busy and sidelined at this time, so it's kind of hazy what happened. I remember a lot of great posts, and especially eagerly awaiting new entries in what would eventually become the Sequences.

I think CFAR started a bit after. Again, timeline is fuzzy for me here. I remember volunteering to be in a study to see if CFAR worked. I ended up in the control group. I'm happy to report that after all these years I still am, having managed to avoid attending a single CFAR workshop to make sure we get the longitudinal study results we deserve!

Some Yavin guy posted some good stuff. Then I found out he had his own blog. That was pretty good. I think this is around the time I moved to Berkeley, California. It turned out I could hang with rationalists in person. I moved in with SL4 long-timer Michael Anissimov at his group house.

I was busy working at a startup but on the weekends I sometimes got invited to cool parties. Seemed like stuff was coming together. Singularity Institute became MIRI somewhere in there. Then there were some conferences and a certain book that made people start to talk about AI safety seriously.

Me and the rationalists were hanging out. I found out most rationalists were actually the walking wounded, as folks like to say. I was surprised how hard many of these so-called rationalists sucked at rationality. They said the right things, but the vibe was way off. Luckily I ran into a couple of other folks who felt similarly, and we vibed on things like developmental psychology. This felt like the start of post-rationality proper. Yes, there were people talking about the ideas before, but this is roughly when it hit the Berkeley rationality scene, and not long after you all had to start dealing with us posting stuff on LessWrong.

Soon after, I started hearing about EA. Seemed cool, but it didn't really feel like it was for me. Then EA started to eat AI safety, and after about a year I was like "yeah sure, I'm an EA I guess". EA Global was really great.

LessWrong 2.0 made the community a lot stronger. LessWrong 1.0 was basically just link posts and a few people trying to keep the lights on. I tried to do my own revived version of LessWrong on Medium, but it didn't take off. Version 2 brought LessWrong back to life and has been going great ever since.

I finally got my ideas together, along with enough personal development to be on top of my shit, and started posting in earnest about a bunch of weird ideas I'll lump under post-rationality, but also about AI safety and my unique approach to the problem. I'm definitely not in the mainstream of research, and I've basically run the course on my AI safety ideas for now, but I see signs that I've had some impact on some of you all, so that feels nice.

The pandemic put a damper on parties, but things are starting to come back. We all found other ways to stay connected during the long isolation. And that about brings us up to today. The community became a thing somewhere along the way, solid in a way that I don't much worry about it disappearing, though I expect it to continue to transform.


Okay, so that's my story. I probably forgot a lot of stuff. If I remember things later I might add them. I've definitely written about bits and pieces of this in other places. If there are particular things you'd like to know about the history of the community, post them and I (and maybe other old-timers?) can try to answer.

12 comments

Comments sorted by top scores.

comment by Gunnar_Zarncke · 2022-04-06T20:34:33.694Z · LW(p) · GW(p)

What are your thoughts on the diaspora, when everybody seemed to build local meetups and much less was posted on LW 1.0? Which way do you see the causality pointing?

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-04-06T22:55:42.129Z · LW(p) · GW(p)

Seems like it was an important time for building community resilience. The diaspora period is really when we stopped depending on the founder and had to find our own ways to build community. I think much of what makes the community great today exists because we could no longer lean on Eliezer to be the leader and we had to start leading ourselves, first locally, then globally through things like LessWrong 2.0.

comment by rank-biserial · 2022-04-06T04:45:38.668Z · LW(p) · GW(p)

Some Yavin guy posted some good stuff. Then I found out he had his own blog. That was pretty good.

Is "Yavin" supposed to be "Yarvin" or "Yvain"? Both are quite plausible.

Replies from: gworley, shminux
comment by Gordon Seidoh Worley (gworley) · 2022-04-06T04:57:49.396Z · LW(p) · GW(p)

Yes ☺️

Replies from: rank-biserial
comment by rank-biserial · 2022-04-06T05:03:05.210Z · LW(p) · GW(p)

Well, "Yarvin" is closer in Levenshtein distance, plus you're acting coy, so I'm updating towards "Yarvin" 😎

Replies from: Pattern, Raemon
comment by Pattern · 2022-04-06T20:19:48.139Z · LW(p) · GW(p)

Don't they have the same Hamming distance?

Replies from: rank-biserial
comment by rank-biserial · 2022-04-06T23:15:16.872Z · LW(p) · GW(p)

Hamming distance assumes that the strings you're comparing are the same length, but "Yavin" is shorter than "Yarvin". Levenshtein distance is the smallest number of insertions, deletions, or character substitutions required to get from string A to string B, while Hamming distance only counts char-substitutions.
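
A minimal Python sketch of both distances makes the comparison concrete (this is just the textbook Wagner-Fischer dynamic program for Levenshtein; the function names are for illustration only):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum insertions, deletions, or substitutions to turn a into b."""
    # Wagner-Fischer dynamic programming, keeping one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute (free if chars match)
            ))
        prev = curr
    return prev[-1]

def hamming(a: str, b: str) -> int:
    """Count positions where the strings differ; equal lengths only."""
    if len(a) != len(b):
        raise ValueError("Hamming distance is undefined for unequal lengths")
    return sum(ca != cb for ca, cb in zip(a, b))

print(levenshtein("Yavin", "Yarvin"))  # 1: insert the 'r'
print(levenshtein("Yavin", "Yvain"))   # 2: substitute a->v and v->a
print(hamming("Yavin", "Yvain"))       # 2
# hamming("Yavin", "Yarvin") raises ValueError (5 vs. 6 characters)
```

So by Levenshtein distance "Yarvin" (1 edit) really is closer than "Yvain" (2 edits), and the Hamming comparison is only even defined against "Yvain".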

Replies from: Pattern
comment by Pattern · 2022-04-08T18:21:59.353Z · LW(p) · GW(p)

Yavin

Yvain


I thought the a↔v switch was more natural than the other edit. I wouldn't call it a common typo, but...

I can see why, if you count a switch as the same as adding a letter, they'd be the same.
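
Counting an adjacent swap as a single edit gives the "optimal string alignment" variant of Damerau-Levenshtein distance, under which the two candidates do tie. A minimal sketch, assuming the standard OSA recurrence:

```python
def osa_distance(a: str, b: str) -> int:
    """Levenshtein distance where swapping two adjacent characters
    also counts as a single edit (optimal string alignment variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i  # delete everything
    for j in range(len(b) + 1):
        d[0][j] = j  # insert everything
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = int(a[i - 1] != b[j - 1])
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # Adjacent transposition: ...xy... vs ...yx...
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[len(a)][len(b)]

print(osa_distance("Yavin", "Yvain"))   # 1: swap the adjacent 'a' and 'v'
print(osa_distance("Yavin", "Yarvin"))  # 1: insert the 'r'
```

With transpositions allowed, both names sit one edit from "Yavin", which is presumably the tie being pointed at.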

comment by Raemon · 2022-04-06T05:14:41.488Z · LW(p) · GW(p)

I think he just literally means both

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2022-04-06T05:29:07.627Z · LW(p) · GW(p)

I was actually just thinking about Scott and didn't realize the overlap with Curtis, but it's not really wrong so I'm sticking with it.

comment by Shmi (shminux) · 2022-04-06T05:26:23.615Z · LW(p) · GW(p)

Imagine a cross between the two.

comment by Eli Tyre (elityre) · 2024-05-29T04:20:49.722Z · LW(p) · GW(p)

(I remember a particular exchange where Eliezer got so frustrated at my inability to do some basic Bayesian reasoning that he ended up writing a whole guide to Bayes' Theorem).

Wow. This made me laugh out loud.

Thank you for your service to the intellectual commons!