Open and Welcome Thread – August 2021

post by habryka (habryka4) · 2021-08-15T05:59:05.270Z · LW · GW · 20 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [LW · GW] section of the LessWrong FAQ [LW · GW]. If you want to orient to the content on the site, you can also check out the new Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

20 comments


comment by John D. Bell · 2022-04-04T00:55:06.296Z · LW(p) · GW(p)

New member here.
I happen (outside of this community) to already be friends with Eric Raymond (who posts here on occasion), and I've met Scott Alexander once. I expect to make some more new online (and hopefully F2F) friends.
I have spent most of my adult life trying to think more clearly, more 'rationally'. Hope being here helps!

comment by Yoav Ravid · 2021-08-24T09:53:06.295Z · LW(p) · GW(p)

Being able to comment on our own posts before they're published (i.e., on drafts) would be nice. Sometimes I want to add a note in a comment but can't do that until the post is published.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-08-31T14:06:38.365Z · LW(p) · GW(p)

[Edit: I was misunderstanding the parent comment, sorry; see reply.] I (and I think a lot of people) generally write, revise, and solicit comments in Google Docs, and then copy-paste to LessWrong at the end. Copy-paste from Google Docs into the LessWrong editor works great; it preserves the formatting almost perfectly. There are just a couple of little things you need to do manually after copy-pasting.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-08-31T17:33:46.708Z · LW(p) · GW(p)

This isn't what I meant here. I meant being able to have a comment of ours on the post before it's published -- a bit like what YouTubers do.

Replies from: steve2152
comment by Steven Byrnes (steve2152) · 2021-08-31T18:16:59.271Z · LW(p) · GW(p)

oh, oops, sorry :-P

In that case, I agree, that's a reasonable suggestion.

comment by Rafael Harth (sil-ver) · 2021-08-23T21:12:32.262Z · LW(p) · GW(p)

Our World in Data offers a free download of their big Covid-19 dataset. It has data on lots of things, including cases, deaths, and vaccines (full list of columns here), all by country and date -- i.e., each row corresponds to one (country, date) pair, with dates ranging from 2020-02-24 to 2021-08-20 for each country, with a step size of one day.

Is there any not-ultra-complicated way to demonstrate vaccine effectiveness from this dataset? I.e., is there any way to measure the effect such that you would be confident predicting the direction ahead of time? (E.g., something like: for date Z, plot all countries by vaccination rate and death rate and measure the correlation -- but you can make it reasonably more complicated than this by controlling for a handful of variables or something.)
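For concreteness, here's a minimal sketch of the naive single-date version (assuming pandas and the column names from the OWID codebook -- worth verifying -- and controlling for nothing):

```python
import pandas as pd

# Public OWID dataset; column names assumed from their codebook.
URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"
df = pd.read_csv(URL, parse_dates=["date"])

# One point per country on a fixed date Z: share fully vaccinated vs.
# smoothed new deaths per million. Vaccination figures are reported
# intermittently, so countries without a report that day drop out.
z = df[df["date"] == "2021-08-01"].dropna(
    subset=["people_fully_vaccinated_per_hundred",
            "new_deaths_smoothed_per_million"])

corr = z["people_fully_vaccinated_per_hundred"].corr(
    z["new_deaths_smoothed_per_million"])
print(f"Correlation across {len(z)} countries: {corr:+.2f}")
```

Even the sign of this isn't guaranteed ahead of time, since vaccination rates also correlate with wave timing, age structure, testing, and so on.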

Replies from: rossry
comment by rossry · 2021-08-24T17:45:55.232Z · LW(p) · GW(p)

What do you mean by "demonstrate vaccine effectiveness"? My instinct is that it's going to be ~impossible to prove a causal result in a principled way just from this data. (This is different from how hard it will be to extract Bayesian evidence from the data.)

For intuition, consider the hypothesis that countries can (at some point after February 2020) unlock Blue Science, which decreases cases and deaths by a lot. If the time to develop and deploy Blue Science is sufficiently correlated with the time to develop and deploy vaccines (and the common component can't be measured well), it won't be possible to distinguish the causal effectiveness of vaccines from the causal effectiveness of Blue Science.

(A Bayesian would draw some update even from an uncontrolled correlation, so if you want the Bayesian answer, the real question is "how much of an update do you want to demonstrate (and assuming what prior)?")

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-08-24T18:08:23.049Z · LW(p) · GW(p)

I mean something like "a result that would constitute a sizeable Bayesian update for a perfectly rational but uninformed agent". Think of someone who has never heard much about those vaccine thingies going from 50/50 to 75/25 -- that range.
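(For reference, going from 50/50 to 75/25 corresponds to a likelihood ratio of 3, since posterior odds are prior odds times the likelihood ratio: 0.75/0.25 = (0.5/0.5) × 3.)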

comment by Viliam · 2021-08-16T19:36:31.506Z · LW(p) · GW(p)

Aren't Open Threads made obsolete by Shortforms?

One advantage of an Open Thread over a Shortform is the periodic reset: an Open Thread may accumulate hundreds of comments (as used to happen in the past), but then a new one is created and the debate starts anew.

When some Shortforms get hundreds of comments, using them may become inconvenient. Maybe at some point a mechanism to reset Shortforms will be needed -- basically, just create a new one, and make it so that the "new shortform" button writes into the new one. This could happen automatically, e.g. when the total number of comments exceeds some predefined threshold (a toy sketch below).
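A toy sketch of that rollover rule (purely illustrative; none of these names come from the actual LessWrong codebase):

```python
from dataclasses import dataclass, field
from typing import List

COMMENT_THRESHOLD = 500  # arbitrary example cutoff


@dataclass
class ShortformPost:
    comment_count: int = 0


@dataclass
class User:
    shortforms: List[ShortformPost] = field(default_factory=list)

    def target_shortform(self) -> ShortformPost:
        """Return the shortform post that new comments should attach to,
        rolling over to a fresh one once the current one fills up."""
        if (not self.shortforms
                or self.shortforms[-1].comment_count >= COMMENT_THRESHOLD):
            self.shortforms.append(ShortformPost())
        return self.shortforms[-1]
```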

But... returning to my meta point... I also could have written this in my Shortform. Is there any advantage of writing it here?

Replies from: ChristianKl, habryka4, Pattern
comment by ChristianKl · 2021-08-17T03:39:59.334Z · LW(p) · GW(p)

I doubt that the introduction posts that regularly get written in this thread would be written at all if the thread didn't exist.

comment by habryka (habryka4) · 2021-08-16T19:48:12.376Z · LW(p) · GW(p)

Shortform posts currently feel more like they are your personal space with your own norms, that someone could visit. The Open Thread feels more like a town plaza where you are acting on shared norms, where you expect things to get a bit more visibility, but also are generally acting in a more shared social and intellectual context. 

It's not obvious that this distinction justifies having both, but both features continue to get usage, and I personally often have the sense that a specific comment/post is more Shortform-shaped or more Open Thread-shaped.

Replies from: Sefirosu
comment by exmateriae (Sefirosu) · 2021-08-17T12:52:09.877Z · LW(p) · GW(p)

Maybe a stupid question, but how do I access other people's shortforms? This is the first time I'm hearing of them.

Replies from: Ruby, habryka4
comment by Ruby · 2021-08-29T14:30:12.517Z · LW(p) · GW(p)

If you go to their profiles, you might see their "X's shortform post". Alternatively, go to www.lesswrong.com/shortform

comment by habryka (habryka4) · 2021-08-19T00:12:42.568Z · LW(p) · GW(p)

Shortform posts show up on the frontpage in the recent discussion section, and can be visited from people's profiles if they've created at least one shortform post. All of a person's shortform posts are listed under just one post in their post list.

They are also visible in the All-Posts page.

comment by Pattern · 2021-08-21T05:49:37.234Z · LW(p) · GW(p)

"Is there any advantage"

You're probably the main reader of your own shortform, so it might get more attention as a comment here.

comment by Ana Rubianes (banana-1) · 2023-04-17T16:38:40.225Z · LW(p) · GW(p)

I'm new here. I wanted to ask: are there any specific proposed regulations for AI governance, or any other kinds of proposed solutions?

comment by niplav · 2021-08-16T19:49:15.035Z · LW(p) · GW(p)

Is there a good case for the usefulness (or uselessness) of brain-computer interfaces (à la Neuralink etc.) in AI alignment? I've searched around a bit, but there seems to be no write-up of a path to making AI go well using BCIs.

Edit: Post about this is up [LW · GW].

Replies from: gilch, Pattern, habryka4
comment by gilch · 2021-08-22T04:54:29.054Z · LW(p) · GW(p)

Maybe if we could give a human more (emulated) cortical columns without also making him insane in the process, we'd end up with a limited superintelligence who maybe isn't completely Friendly, but also isn't completely alien to human values. If we just start with the computer, all bets are off. Such a hybrid might still go insane later, though. Arms race scenarios are still a concern: reckless approaches might produce hybrid intelligence sooner, but it would also be less stable. The end result of most unfriendly AIs is that all the humans are dead. It takes a perverse kind of near-miss to get to the hellish, worse-than-death scenarios: an unFriendly AI that doesn't just kill us. A crazy hybrid might be exactly that.

If the smartest humans could be made just a little smarter, maybe we could solve the alignment problem before AI goes FOOM. Otherwise, the next best approach seems to involve somehow getting the AI to solve the problem for us, without it killing everyone (or worse) in the meantime. Of course, that only works if they're working on alignment, and not just on improving AI.

If the Borg Collective becomes the next Facebook, then at least we're not all dead. Unfortunately, an AI trying to FOOM on a pure machine substrate would still outcompete us poor meat brains.

comment by Pattern · 2021-08-21T05:52:21.853Z · LW(p) · GW(p)

Well, it might make it easier for someone to steal your credit card info if you're wearing one of these headsets.

comment by habryka (habryka4) · 2021-08-16T19:58:20.938Z · LW(p) · GW(p)

I don't know of any write-up, and I do think it would be great for someone to make one. I've definitely discussed this for many hours with people over the years.

Some related tags might have some of this space covered, but overall it doesn't look like there are any posts that really cover the AI Alignment angle.