Open & Welcome Thread — February 2023

post by Ben Pace (Benito) · 2023-02-15T19:58:00.435Z · LW · GW · 36 comments


If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

36 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2023-02-15T20:08:35.724Z · LW(p) · GW(p)

To give an update about what the Lightcone Infrastructure team has been working on: we recently decided to close down a big project we'd been running for the last 1.5 years, an office space in Berkeley for people working on x-risk/EA/rationalist things that we opened August 2021.

At some point I hope we publish a postmortem, but for now here's a copy of the announcement of its closure that I wrote in the office Slack 2-3 weeks ago.

Hello there everyone,

Sadly, I'm here to write that we've decided to close down the Lightcone Offices by the end of March. While we initially intended to transplant the office to the Rose Garden Inn, Oliver has decided (and I am on the same page about this decision) to make a clean break going forward to allow us to step back and renegotiate our relationship to the entire EA/longtermist ecosystem, as well as change what products and services we build.

Below I'll give context on the decision and other details, but the main practical information is that the office will no longer be open after Friday March 24th. (There will be a goodbye party on that day.)

I asked Oli to briefly state his reasoning for this decision, here's what he says:

An explicit part of my impact model for the Lightcone Offices has been that its value was substantially dependent on the existing EA/AI Alignment/Rationality ecosystem being roughly on track to solve the world's most important problems, and that while there are issues, pouring gas into this existing engine, and ironing out its bugs and problems, is one of the most valuable things to do in the world.

I had been doubting this assumption of our strategy for a while, even before FTX. Over the past year (with a substantial boost from the FTX collapse) my actual trust in this ecosystem and interest in pouring gas into this existing engine has greatly declined, and I now stand before what I have helped build with great doubts about whether it all will be or has been good for the world.

I respect many of the people working here, I am glad about the overall effect of Lightcone on this ecosystem we have built, and I am excited about many of the individuals in the space; in many, maybe even most, future worlds I will probably come back with new conviction to invest in and build out this community for which I have been building infrastructure for almost a full decade. But right now, I think both I and the rest of Lightcone need some space to reconsider our relationship to this whole ecosystem, and I currently assign enough probability to building things in the space being harmful for the world that I can't really justify the level of effort, energy, and money that Lightcone has been investing into doing things that pretty indiscriminately grow and accelerate the things around us.

(To Oli's points I'll add that the office is also an ongoing cost in terms of time, effort, and stress, and in terms of a lack of organizational focus on the other ideas and projects we'd like to pursue.)

Oli, myself, and the rest of the Lightcone team will be available to discuss this further in the channel #closing-office-reasoning, where I invite any and all of you who wish to discuss this with me, the rest of the Lightcone team, and each other.

In the last few weeks I sat down and interviewed the people leading the 3 orgs whose primary office is here (FAR, AI Impacts, and Encultured) and 13 other individual contributors. I asked how this would affect them, how we could ease the change, and generally how they feel about how the ecosystem is working out. These conversations lasted 45 minutes each on average, and it was very interesting to hear people's thoughts about this, and also their suggestions about other things Lightcone could work on.

These conversations also left me feeling more hopeful about building related community infrastructure in the future, as I learned of a number of positive effects that I wasn't aware of. These conversations all felt pretty real; I respect all the people involved more, and I hope to talk to many more of you at length before we close.

From the check-ins I've done with people, this seems to me to be enough time to not disrupt any SERI MATS mentorships, and to give the orgs here a comfortable enough amount of time to make new plans, but if this does put you in a tight spot, please talk to us and we'll see how we can help.

The campus team (me, Oli, Jacob, Rafe) will be in the office for lunch tomorrow (Friday at 1pm) to discuss any and all of this with you. We'd like to know how this is affecting you, and I'd really like to know about costs this has for you that I'm not aware of. Please feel free (and encouraged) to just chat with us in your Lightcone channels (or in any of the public office channels too).

Otherwise, a few notes:

  • The Lighthouse system is going away when the leases end. Lighthouse 1 has closed, and Lighthouse 2 will continue to be open for a few more months.
  • If you would like to start renting your room yourself from WeWork, I can introduce you to our point of contact, who I think would be glad to continue to rent the offices. Offices cost between $1k and $6k a month depending on how many desks are in them.
  • Here's a form to give the Lightcone team anonymous feedback about this decision (or anything).
  • To talk with people about future plans starting now and after the offices close, whether to propose plans or just to let others know what you'll be doing, I've made the #future-plans channel and added you all to it.

It's been a thrilling experience to work alongside and get to know so many people dedicated to preventing an existential catastrophe, and I've made many new friends working here (thank you). But I think the Lightcone team and I need space to reflect and to build something better if Earth is going to have a shot at aligning the AGIs we build.

https://www.youtube.com/watch?v=8Fow61Zsn2s

Replies from: lc, Milli
comment by lc · 2023-02-16T12:20:23.605Z · LW(p) · GW(p)

jesus o.o

comment by Milli | Martin (Milli) · 2023-02-16T08:28:11.262Z · LW(p) · GW(p)

Here's a form to give the Lightcone team anonymous feedback about this decision (or anything).

The link seems to be missing.

Also: Looking forward to the postmortem.

Replies from: gwern
comment by gwern · 2023-02-16T16:51:07.837Z · LW(p) · GW(p)

The link seems to be missing.

As it should be, because the anonymous survey about the SF offices is not for you. It's for the people who were using the offices in question and thus have access to the original Slack channel posting with the link intact. (Obviously, you can't filter out rando Internet submissions of 'feedback' if it's anonymous.)

comment by Marta (Kitku) · 2023-02-26T10:02:48.500Z · LW(p) · GW(p)

Hi all, 

nice to meet you! I've been silently reading LW for the past 4ish years (really enjoyed it), even went to CFAR workshops (really enjoyed them), and I regard myself as a rationalist overall (usually enjoy it; sometimes, as expected, it's a bit hard), but so far I've felt too shy to comment. But now I have a question, so, well, it leaves me with very little choice ;).

Is it possible to export LW series to PDF? I don't like reading on my computer, accessing the forum from my e-reader each time is a pain, and I really would like to read, for example, the whole "2022 MIRI alignment discussion". Any advice here?

Replies from: niplav
comment by niplav · 2023-02-27T16:31:34.209Z · LW(p) · GW(p)

Yes, for example here (announcement post [LW · GW]).

Many more [LW · GW] exist.

Replies from: Kitku
comment by Marta (Kitku) · 2023-02-27T16:58:12.349Z · LW(p) · GW(p)

Thank you! 

comment by Milli | Martin (Milli) · 2023-02-16T08:23:52.409Z · LW(p) · GW(p)

LessWrong open thread, I love it. I hope they become as lively as the ACX Open Threads (if they have the same intention).

I'm reading the sequences this year (1/day, motivated by this post [EA · GW]) and am enjoying it so far. Lmk if I'm wasting my time by not "just" reading the highlights.

PS: In case you or someone you know is looking for a software engineer, here's my profile: https://cv.martinmilbradt.de/. Preferably freelance, but I'm open to employment if the project is impactful or innovative.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-02-17T21:25:54.709Z · LW(p) · GW(p)

Not a waste of time, generally good idea :P

comment by niplav · 2023-02-24T11:29:48.465Z · LW(p) · GW(p)

I remember seeing a short fiction story about a EURISKO-type AI system taking over the world in the 1980s (?) here on LessWrong, but I can't find it via search engines, to the point that I wonder whether I hallucinated this. Does anyone have an idea where to find this story?

comment by trevor (TrevorWiesinger) · 2023-02-20T21:55:05.207Z · LW(p) · GW(p)

I've been on lesswrong every day for almost a year now, and I'm really interested in intelligence amplification/heavy rationality boosting. 

I have a complicated but solid plan to read the sequences and implement the CFAR handbook over the next few months (important since you can only read them for the first time once).

I need a third thing to do simultaneously with the sequences and the CFAR handbook. It's gotta be three. What is the best thing I can do for heavy intelligence/rationality amplification? Is it possible to ask a CFAR employee/alumni without being a bother? (I do AI policy, not technical alignment)

Replies from: ChristianKl, TrevorWiesinger
comment by ChristianKl · 2023-02-21T20:19:56.374Z · LW(p) · GW(p)

A third thing might be forecasting. Go to Metaculus or GJOpen and train to make predictions. 

You could also make predictions relevant to your work. Will XY attend the meeting? Will this meeting result in outcome X?

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-02-21T23:48:45.906Z · LW(p) · GW(p)

This is a good idea and I should have done it anyway a long time ago. But in order for that to work for this particular plan, I'd need a pretty concentrated dose, at least at the beginning. Do you have anything to recommend for a quick, intense dose of forecasting education?

Replies from: ChristianKl
comment by ChristianKl · 2023-02-21T23:55:47.879Z · LW(p) · GW(p)

I think Tetlock's Superforecasting book is great.

comment by trevor (TrevorWiesinger) · 2023-02-21T23:57:38.088Z · LW(p) · GW(p)

I'm still looking for things! The categories of things that could work are broad (even meditation counts), so long as rapid intelligence amplification is the result. The point is to do all of them at once (with plenty of sleep and exercise and breaks, but no medications, not even caffeine). If I'm on to something, then it could be an extremely valuable finding.

Replies from: Leviad
comment by Drake Morrison (Leviad) · 2023-02-28T21:02:14.763Z · LW(p) · GW(p)

This sequence [? · GW] has been a favorite of mine for finding little drills or exercises to practice overcoming biases.

comment by Olli Järviniemi (jarviniemi) · 2023-02-17T10:01:51.746Z · LW(p) · GW(p)

Feature suggestion: Allow one to sort a user's comments by the number of votes.

Context: I saw a comment [LW(p) · GW(p)] by Paul Christiano, and realized that probably a significant portion of the views expressed by a person lie in comments, not top-level posts. However, many people (such as Christiano) have written a lot of comments, so sorting them would allow one to find more valuable comments more easily.

Replies from: TrevorWiesinger, SomeoneYouOnceKnew
comment by trevor (TrevorWiesinger) · 2023-02-20T21:39:49.443Z · LW(p) · GW(p)

Replies from: niplav
comment by niplav · 2023-02-23T11:59:54.419Z · LW(p) · GW(p)

Hm, you can already browse comments by a user, though. I don't think high-voted comments being more easily accessible would make things worse (especially since high-voted comments are probably less likely to contain politically sensitive statements).

comment by SomeoneYouOnceKnew · 2023-02-20T22:04:18.308Z · LW(p) · GW(p)

I don't agree, but for a separate reason from trevor.

Highly-upvoted posts are a signal of what the community agrees with or disagrees with, and I think being able to more easily track down karma would cause reddit-style internet-points seeking. How many people are hooked on Twitter likes/view counts?

Or "ratio'd".

Making it easier to track these stats would be counterproductive, imo.

comment by MSRayne · 2023-02-25T15:40:36.855Z · LW(p) · GW(p)

Is there anyone who'd be willing to intensively (that is, over a period of possibly months, at least one hour a day) help someone with terminal spaghetti brain organize a mountain of ideas about how to design a human hive mind (that is: system for maximizing collective human intelligence and coordination ability) operating without BCIs, using only factored cognition, prediction markets, behaviorist psychology, and LLMs?

I cannot pay you except with the amusement inherent to my companionship - I literally have no money and live with my parents lol - but one of the things for which I've been using my prodigious free time the past few years is inventing-in-my-head a system which at this point is too complicated for me to be able to figure out how to explain clearly in one go, although it feels to me as if it's rooted ultimately in simple principles.

I've never been able to organize my thoughts, in general, on any topic, better than a totally unordered ramble that I cannot afterward figure out any way to revise - and I mean, I am pathologically bad at this, as a personality trait which has remained immune to practice, which is why I rarely write top level posts here - so I actually need someone who is as good at it as I am bad and can afford to put a lot of time and effort into helping me translate my rambles into a coherent, actionable outline or better yet, wiki full of pseudocode and diagrams.

Obviously I will find some way to reciprocate, but probably in another way; I doubt I can do much good towards organizing your stuff if I can't organize my own lol. But who knows, maybe it's my closeness to the topic that makes me unable to do so? Anyway, thanks in advance if you decide to take this on.

comment by Anna Eplin (Skoobeton) · 2023-02-24T17:55:56.329Z · LW(p) · GW(p)

Hello, I'm new here and still working on reading the Sequences and the other amazing content on here; hopefully then I'll feel more able to join in some of the discussions and things. For now, I have what I'm sure is an embarrassingly basic question, but I can't find an answer on here anywhere and it keeps distracting me from focusing on the content: would someone please tell me what's the deal with the little degree symbols after some links but not others?

Thank you in advance, and warm regards to you all.

Replies from: niplav
comment by niplav · 2023-02-24T17:58:53.624Z · LW(p) · GW(p)

AFAIU, the symbols signify links to other content on LessWrong, showing that you can hover over the link and see a pop-up of the content.

Replies from: Skoobeton
comment by Anna Eplin (Skoobeton) · 2023-02-28T19:44:21.239Z · LW(p) · GW(p)

Oh, I see. Thank you very much.

comment by DragonGod · 2023-02-22T00:58:37.935Z · LW(p) · GW(p)

LW/AF did something very recently that broke/hindered the text-to-speech software I use for a lot of my LW reading.

This is a considerable inconvenience.

Replies from: Raemon
comment by Raemon · 2023-02-22T01:04:08.737Z · LW(p) · GW(p)

Which software?

Replies from: DragonGod
comment by DragonGod · 2023-02-22T01:06:55.691Z · LW(p) · GW(p)

Google Assistant and/or Microsoft Edge (on mobile).

  • Microsoft Edge flat out cannot narrate LW posts anymore.
  • Google Assistant sometimes(?) fails to fetch the entire text of the main post.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2023-02-21T17:34:53.526Z · LW(p) · GW(p)

Feature request: An option to sort comments by agreement-karma.

Replies from: Taleuntum
comment by Taleuntum · 2023-02-21T19:04:43.837Z · LW(p) · GW(p)

As someone who doesn't know web development, I'm curious what the obstacle would be to letting users write their own custom comment-sorting algorithm. I'm assuming comment sorting is done on the client machine, so it would not be an extra burden on the server. I would like to sort top-level comments lexicographically: first by whether they were written by friends (or at least people whose posts I'm subscribed to) or have descendant replies written by friends, then by whether they were written in the last 24h, then by total karma. Lower-level comments I would sort lexicographically first by whether they were written by friends or have descendant replies written by friends, then by submission time (older first). In spite of the numerous people craving this exact sorting algorithm, I doubt the LessWrong team will implement it any time soon, so it would be cool if I could.
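For concreteness, here's a minimal sketch of what such a client-side comparator could look like, assuming a simplified comment shape; the field names, the hard-coded friend list, and the `CommentNode` type are illustrative assumptions, not the actual LessWrong data model:

```typescript
// Illustrative only: the shape below is an assumption, not LessWrong's real schema.
interface CommentNode {
  author: string;
  karma: number;
  postedAt: Date;
  children: CommentNode[];
}

// People I'm subscribed to (hypothetical usernames).
const friends = new Set(["alice", "bob"]);

// True if the comment, or any descendant reply, was written by a friend.
function involvesFriend(c: CommentNode): boolean {
  return friends.has(c.author) || c.children.some(involvesFriend);
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Top-level ordering: friend-involving first, then last-24h comments, then total karma.
function compareTopLevel(a: CommentNode, b: CommentNode): number {
  const friendDiff = Number(involvesFriend(b)) - Number(involvesFriend(a));
  if (friendDiff !== 0) return friendDiff;
  const now = Date.now();
  const recentDiff =
    Number(now - b.postedAt.getTime() < DAY_MS) -
    Number(now - a.postedAt.getTime() < DAY_MS);
  if (recentDiff !== 0) return recentDiff;
  return b.karma - a.karma;
}

// Lower-level ordering: friend-involving first, then older comments first.
function compareReply(a: CommentNode, b: CommentNode): number {
  const friendDiff = Number(involvesFriend(b)) - Number(involvesFriend(a));
  if (friendDiff !== 0) return friendDiff;
  return a.postedAt.getTime() - b.postedAt.getTime();
}

// A userscript could then re-sort the rendered tree, e.g.:
//   topLevelComments.sort(compareTopLevel);
//   eachComment.children.sort(compareReply);
```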

comment by MondSemmel · 2023-02-19T12:21:38.406Z · LW(p) · GW(p)

I have a vague preference to switch from my nickname here to my real name. Are there any unexpected upsides or downsides? (To the extent that matters, I'm no-one important, probably nobody knows me, and in any case it's probably already easy to find out my real name.)

Plus in case I go through with it, are there any recommendations for how to ease the transition? (I've seen some users temporarily use nicknames like <Real Name, formerly MondSemmel> or something, though preferably as a shorter version.)

Replies from: steve2152, Benito
comment by Steven Byrnes (steve2152) · 2023-02-20T20:24:50.177Z · LW(p) · GW(p)

Another option: if memory serves, the mods said somewhere that they're happy for people to have two accounts, one pseudonymous and one real-named, as long as you avoid voting twice on the same posts / comments.

comment by Ben Pace (Benito) · 2023-02-19T18:40:09.785Z · LW(p) · GW(p)

In the past I've encouraged people to use their real names so that they stand more fully behind their writing; nowadays I feel more like encouraging people to use pseudonyms, so that they feel less personal social cost for their writing.

comment by Lost Futures (aeviternity1) · 2023-02-16T22:40:31.754Z · LW(p) · GW(p)

Question for people working in AI Safety: Why are researchers generally dismissive of the notion that a subhuman-level AI could pose an existential risk? I see a lot of attention paid to the risks a superintelligence would pose, but what prevents, say, an AI model capable of producing biological weapons from also being an existential threat, particularly if the model is operated by a person with malicious or misguided intentions?

Replies from: ChristianKl, alex-rozenshteyn
comment by ChristianKl · 2023-02-17T20:59:12.157Z · LW(p) · GW(p)

I think in the standard X-risk models that would be a biosafety X-risk. It's a problem but it has little to do with the alignment problems on which AI Safety researchers focus. 

comment by rpglover64 (alex-rozenshteyn) · 2023-02-17T15:28:21.535Z · LW(p) · GW(p)

Some thoughts:

  • Those who expect fast takeoffs would see the sub-human phase as a blip on the radar on the way to super-human
  • The model you describe is presumably a specialist model (if it were generalist and capable of super-human biology, it would plausibly count as super-human; if it were not capable of super-human biology, it would not be very useful for the purpose you describe). In this case, the source of the risk is better thought of as the actors operating the model and the weapons produced; the AI is just a tool
  • Super-human AI is a particularly salient risk because unlike others, there is reason to expect it to be unintentional; most people don't want to destroy the world
  • The actions for how to reduce xrisk from sub-human AI and from super-human AI are likely to be very different, with the former being mostly focused on the uses of the AI and the latter being on solving relatively novel technical and social problems
comment by Mary Chernyshenko (mary-chernyshenko) · 2023-02-26T18:00:50.608Z · LW(p) · GW(p)

"Games we play": civilians helping troops from different sides of conflict. As in, Army I entered the village; N sold his neighbors, the neighbors died horribly. Then Army II chased away Army I. Would N be reported as a collaborationist? Commonly, no. But everybody knows that everybody knows. And everybody knows who knows what everybody knows, which means N is probably going to sell a lot of people if another opportunity arises.

comment by [deleted] · 2023-02-25T19:24:18.520Z · LW(p) · GW(p)