Open Thread – Winter 2023/2024

post by habryka (habryka4) · 2023-12-04T22:59:49.957Z · LW · GW · 160 comments

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library [? · GW], checking recent Curated posts [? · GW], seeing if there are any meetups in your area [? · GW], and checking out the Getting Started [? · GW] section of the LessWrong FAQ [? · GW]. If you want to orient to the content on the site, you can also check out the Concepts section [? · GW].

The Open Thread tag is here [? · GW]. The Open Thread sequence is here [? · GW].

Comments sorted by top scores.

comment by Oliver Ridge (oliver-ridge) · 2023-12-05T01:29:44.894Z · LW(p) · GW(p)

Hello! I've mostly been lurking around on LessWrong for a little while and have found it to be a good source of AI news and other stuff. I like these open threads - commenting sometimes feels somewhat intimidating in other parts of the site. I hope to be commenting more on LessWrong in the future!

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-05T01:30:54.753Z · LW(p) · GW(p)

Welcome! Glad to have you here and am looking forward to reading your comments!

comment by trevor (TrevorWiesinger) · 2023-12-05T14:35:49.568Z · LW(p) · GW(p)

Confession: I've sometimes been getting LessWrong users mixed up, in a very status-damaging way for them.

Before messaging with lc, I mixed his writings and accomplishments up with lsusr's (e.g. I thought the same person wrote Luna Lovegood and the Chamber of Secrets [LW · GW] and What an actually pessimistic containment strategy looks like [LW · GW]).

I thought that JenniferRM did demon research and used to work at MIRI, but I had mixed her up with Jessicata.

And, worst of all, I mixed up Thane Ruthenis with Thoth Hermes, causing me to think that Thane Ruthenis wrote Thoth's downvoted post The truth about false [LW · GW].

Has this happened to other people? The main thing is that I just didn't notice the mixup at all until ~a week after we first exchanged messages. It was just a funny manifestation of me not really paying much attention to some new names, and it's an easy fix on my end, but the consequences are pretty serious if this happens in general.

Replies from: lsusr, Yoav Ravid, Benito, lc, Charlie Steiner, habryka4, nikolaisalreadytaken, nathan-helm-burger, ChristianKl
comment by lsusr · 2024-01-10T01:21:16.674Z · LW(p) · GW(p)

That's funny. When I read lc's username I think "that username looks similar to 'lsusr'" too.

comment by Yoav Ravid · 2023-12-06T09:53:17.824Z · LW(p) · GW(p)

Yep, happened to me too. I like the LW aesthetic so I wouldn't want profile pics, but I think personal notes on users (like Discord has) would be great.

comment by Ben Pace (Benito) · 2024-02-22T00:09:25.105Z · LW(p) · GW(p)

Someone told me that they like my story The Redaction Machine [LW · GW].

Replies from: lsusr
comment by lsusr · 2024-02-23T00:44:54.743Z · LW(p) · GW(p)

The secret is out. Ben's secret identity is Ben Pace.

comment by lc · 2024-02-23T02:08:42.787Z · LW(p) · GW(p)

Well, that's cause I'm his alt

comment by Charlie Steiner · 2023-12-06T15:25:27.563Z · LW(p) · GW(p)

There are at least two Steves, and also at least two Evans. But I don't know if anything embarrassing happened, I just mixed some people up.

comment by habryka (habryka4) · 2023-12-23T22:04:15.833Z · LW(p) · GW(p)

This happens to me too. IMO it's one of the best arguments for something like profile pictures. Not enough entropy in name space.

Replies from: alex-rozenshteyn
comment by rpglover64 (alex-rozenshteyn) · 2024-03-02T20:44:29.041Z · LW(p) · GW(p)

This is also mitigated by automatic images like Gravatar or the SSH key visualization. I wonder if they can be made small enough to just add to usernames everywhere while maintaining enough distinguishable representations.

comment by nikola (nikolaisalreadytaken) · 2023-12-28T16:48:09.717Z · LW(p) · GW(p)

I often accidentally mix you up with the Trevor from Open Phil! More differentiation would be great, especially in the case where people share the same first name.

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-23T21:26:54.628Z · LW(p) · GW(p)

I have been around a long while, so the names are mostly familiar to me. I did make a minor embarrassing mistake a few months ago, thinking that Max H (on the East Coast) was the account of my friend Max H (on the West Coast). East Coast Max H added a note to his profile to disambiguate. Do you read people's profiles before first messaging them?

comment by ChristianKl · 2023-12-06T00:35:08.372Z · LW(p) · GW(p)

Yes, it happened before for me as well. I think it would be good to have profile pictures to make it easier to recognize users.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-23T21:28:40.148Z · LW(p) · GW(p)

Maybe to make it uniform and non-distracting it could just be small grayscale pattern icons next to names based on a hash of the name.
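
A minimal sketch of that idea (the hash choice, grid size, and rendering below are illustrative assumptions, not an actual LessWrong feature): hash the username and use a few bytes of the digest to fill a small mirrored grayscale grid, identicon-style.

```typescript
import { createHash } from "crypto";

// Turn a username into a tiny 5x5 grayscale "identicon" as an SVG string.
// Mirroring left-right makes patterns easier to tell apart at a glance.
function nameIcon(username: string, cell = 3): string {
  const bytes = createHash("sha256").update(username).digest();
  const size = 5;
  const rects: string[] = [];
  for (let y = 0; y < size; y++) {
    for (let x = 0; x < Math.ceil(size / 2); x++) {
      const shade = 255 - (bytes[y * 3 + x] % 4) * 60; // four distinct gray levels
      const fill = `rgb(${shade},${shade},${shade})`;
      const cols = x === size - 1 - x ? [x] : [x, size - 1 - x];
      for (const col of cols) {
        rects.push(`<rect x="${col * cell}" y="${y * cell}" width="${cell}" height="${cell}" fill="${fill}"/>`);
      }
    }
  }
  return `<svg xmlns="http://www.w3.org/2000/svg" width="${size * cell}" height="${size * cell}">${rects.join("")}</svg>`;
}

// e.g. nameIcon("habryka") and nameIcon("lsusr") render as visibly different 15x15px squares.
```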

comment by ojorgensen · 2023-12-11T12:54:25.200Z · LW(p) · GW(p)

It would save me a fair amount of time if all LessWrong posts had an "export BibTeX citation" button, exactly like the feature on arXiv. This would be particularly useful for Alignment Forum posts!
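
For illustration, the exported entry might look something like this hypothetical one (the citation key, fields, and URL are placeholders; LessWrong has no such export format today):

```bibtex
@misc{author2023example,
  title        = {Example Post Title},
  author       = {Author, Example},
  year         = {2023},
  howpublished = {LessWrong},
  url          = {https://www.lesswrong.com/posts/<post-id>/<post-slug>},
  note         = {Accessed 2023-12-11}
}
```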

comment by Оuroboros (0ur0b0r0s) · 2023-12-12T22:43:32.160Z · LW(p) · GW(p)

Hello everyone!

After several years seeing (and reading) links to LessWrong posts scattered in other areas of the internet, I decided to sign up for an account today myself and see if I can't find a new community to contribute to here :)

I look forward to reading, writing, and thinking with you all in the future!

comment by lsusr · 2024-02-15T06:11:33.715Z · LW(p) · GW(p)

I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.

comment by jeffreycaruso · 2024-01-17T02:43:21.383Z · LW(p) · GW(p)

Hello, I came across this forum while reading an AI research paper where the authors quoted from Yudkowsky's "Hidden Complexity of Wishes." The linked source brought me here, and I've been reading some really exceptional articles ever since. 

By way of introduction, I'm working on the third edition of my book "Inside Cyber Warfare," and I've spent the last few months buried in AI research, specifically in the areas of safety and security. I view AGI as a serious threat to our future for two reasons. One, neither safety nor security has ever been prioritized over profits by corporations, dating all the way back to the start of the Industrial Revolution. And two, regulation has only ever come to an industry after a catastrophe or a significant loss of life has occurred, not before.

I look forward to reading more of the content here, and engaging in what I hope will be many fruitful and enriching discussions with LessWrong's members. 

Replies from: TrevorWiesinger, habryka4
comment by trevor (TrevorWiesinger) · 2024-01-25T04:00:22.399Z · LW(p) · GW(p)

Hi Jeffrey! Glad to see more cybersecurity people taking the issue seriously. 

Just so you know, the best way I know of to introduce laymen to AGI risk is to have them read Scott Alexander's Superintelligence FAQ [LW · GW]. This will come in handy down the line.

Replies from: jeffreycaruso
comment by jeffreycaruso · 2024-02-11T02:49:38.210Z · LW(p) · GW(p)

Thanks, Trevor. I've bookmarked that link. Just yesterday I started creating a short list of terms for my readers so that link will come in handy. 

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-02-18T00:19:08.824Z · LW(p) · GW(p)

@Raemon [LW · GW], is the Superintelligence FAQ helpful as a short list of terms for Caruso's readers?

comment by habryka (habryka4) · 2024-01-17T07:13:08.132Z · LW(p) · GW(p)

Welcome! Hope you have a good time!

comment by niplav · 2023-12-05T00:08:54.815Z · LW(p) · GW(p)

I notice I am confused.

I have written what I think is a really cool post [LW · GW]: Announcing that I will be using prediction markets in practice in useful ways, and asking for a little bit of help with that (mainly people betting on the markets). But apparently the internet/LessWrong doesn't feel that way. (Compare to this comment [LW(p) · GW(p)] of mine which got ~4.5 times the upvotes, and is basically a gimmick—in general I'm really confused about what'll get upvoted here and what will be ignored/downvoted, even after half a decade on this site).

I'm not, like, complaining about this, but I'd like to understand why this wasn't better received. Is it:

  • The post is confusingly written, with too much exposition in the beginning (starts with a long quote)
  • The post promises to do something without having done it, so people judge it as a pipe dream
  • The idea isn't actually that interesting: We've had replication markets, and the idea of using prediction markets to select experiments was proposed at least as early as 2013, and the idea is straightforward, as is the execution
  • The title makes it sound like just another proposal and nothing that will actually be executed in practice
  • Stuff like this doesn't matter because TAI soon

I disagree with some of these, but that's not the point; the point is that my prediction of the post's reception was wrong. So: why isn't this kinda cool and worth participating in?

Replies from: habryka4, 1a3orn, ChristianKl, papetoast, papetoast, jarviniemi, nathan-helm-burger, Viliam
comment by habryka (habryka4) · 2023-12-05T00:42:52.287Z · LW(p) · GW(p)

Feedback from me: I started reading the post, but it had a bunch of huge blockquotes and I couldn't really figure out what the post was about from the title, so I navigated back to the frontpage without engaging. In particular I didn't understand the opening quote, which didn't have a source, or how it was related to the rest of the post (in like the 10 seconds I spent on the page).

An opening paragraph that states a clear thesis or makes an interesting point or generally welcomes me into what's going on would have helped a lot.

Replies from: niplav
comment by niplav · 2023-12-05T00:45:12.982Z · LW(p) · GW(p)

Okay, thanks for the feedback! So a more informative title would be better. I've been using quotes to denote abstracts (or opening paragraph), but maybe that's a bit confusing.

I've changed the title now, and changed the abstract from a quote to bolded.

Replies from: papetoast
comment by papetoast · 2023-12-05T01:19:35.399Z · LW(p) · GW(p)
  • The actual quote was also so long that I would have stopped reading if I weren't trying to analyse your post.
  • The quote is also out of context, in that I am very confused about what the author was trying to say from the first paragraph. Because I was skimming, I didn't really understand the quote until the market section.

Fortunately, there's a good (and well-known) alternative [alternative to what?], which is to randomize decisions sometimes, at random [yeah that makes sense, but how does randomization relate to prediction markets?]. You tell people: "I will roll a 20-sided die. If it comes up 1-19, everyone gets their money back and I do what I want [what is "I do what I want"?]. If it comes up 20, the bets activate and I decide what to do using a coinflip." [ok so this is about a bet, but then why coin flip??]

Replies from: niplav
comment by niplav · 2023-12-05T01:23:41.668Z · LW(p) · GW(p)

Okay, lesson learned: Don't start a blogpost with a long-ass quote from another post out of context. Put it later after the reader is in flow (apparently the abstract isn't enough). Don't do what's done here.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-05T01:28:12.583Z · LW(p) · GW(p)

To be clear, I totally didn't parse the opening blockquote as an abstract. I parsed it as a quote from a different post, I just couldn't figure out from where.

comment by 1a3orn · 2023-12-05T01:32:27.712Z · LW(p) · GW(p)

FWIW I was going to start betting on Manifold, but I have no idea how to deal with meditative absorption as an end-state.

Like there are worlds where -- for instance -- Vit D maybe helps this, or Vit D maybe hurts, and it might depend on you, or it depends on what kind of meditation really works for you. So it takes what is already a pretty hard bet for me -- just calling whether nicotine is actually likely to help in some way -- and makes it harder -- is nicotine going to help meditation? I just have no idea.

Replies from: niplav
comment by niplav · 2023-12-05T02:08:27.090Z · LW(p) · GW(p)

Yeah, that makes sense. (I think I saw you bet on one of the markets! (And then maybe sell your stake?))

Thanks for trying anyway. Maybe the non-meditation related markets are easier to predict?

I'd like to encourage best-guess-betting, but I understand that there are better opportunities out there.

comment by ChristianKl · 2023-12-06T00:33:54.986Z · LW(p) · GW(p)

It's good to use prediction markets in practice, but most people who read the post likely don't get that much value from it.

Larry McEnerney is good at explaining that good writing isn't writing that's cool or interesting but simply writing that provides value to the reader. 

As far as the actual execution goes, it might have been better to create fewer markets and focus on fewer experiments, so that each one gets more attention.

comment by papetoast · 2023-12-05T03:30:55.736Z · LW(p) · GW(p)

Why isn't this kinda cool and worth participating in?

I wrote two comments about why people don't read your post, but as I was betting I realized another two problems about the markets:

  1. (Not your fault) The Manifold betting integration kind of sucks. Clicking "See 2 more answers" does nothing, and the options are ordered by percentage.
  2. There isn't enough liquidity in your markets. It makes betting difficult because even M5 increments change the probability too much. idk, maybe buy some mana to subsidize your markets? It would also make people who see your market on Manifold more interested in betting, as they'd have more to gain from their predictions.
Replies from: niplav
comment by niplav · 2023-12-05T11:06:53.822Z · LW(p) · GW(p)

Both make sense. I spent ~all my mana on creating the markets, and as more mana rolls in from other bets I am subsidizing them.

comment by papetoast · 2023-12-05T03:19:06.772Z · LW(p) · GW(p)

The title doesn't set a good expectation of the contents. If I am a person interested in "Please Bet On My Quantified Self Decision Markets", I want to bet. I won't expect to (and shouldn't be expected to) read all your lengthy experimental details. It took a while for me to find the markets.

Replies from: niplav
comment by niplav · 2023-12-05T11:08:19.394Z · LW(p) · GW(p)

That's funny, I've already changed the title from "Using Prediction Platforms To Select Quantified Self Experiments". I guess the problem is really the block quote, which I'll move somewhere later in the post.

comment by Olli Järviniemi (jarviniemi) · 2023-12-26T14:34:33.376Z · LW(p) · GW(p)

I looked at your post and bounced off the first time. To give a concrete reason, there were a few terms I wasn't familiar with (e.g. L-Theanine, CBD Oil, L-Phenylalanine, Bupropion, THC oil), but I think overall it was a sense of "there's an inferential distance here which makes the post heavy for me". What also made the post heavy was that there were lots of markets - which I understand makes conceptual sense, but makes it heavy nevertheless.

I did later come back to the post and did trade on most of the markets, as I am a big fan of prediction markets and also appreciate people doing self-experiments. I wouldn't have normally done that, as I don't think I know basically anything about what to expect there - e.g. my understanding of Cohen's d is just "it's effect size, 1 d basically meaning one standard deviation", and I haven't even played with real numerical examples.
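
(For reference, the standard pooled-standard-deviation definition, which matches that reading: d = 1 means the group means differ by about one pooled standard deviation.)

```latex
d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```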

(I have had this "this assumes a bit too much statistics for me / is heavy" problem when quickly looking at your self-experiment posts. And I do have a mathematical background, though not from statistics.)

I'd guess that you believe that the statistics part is really important, and I don't disagree with that. For exposition I think it would still be better to start with something lighter. And if one could have a reasonable prediction market on something more understandable (to laypeople), I'd guess that would result in more attention and still possibly useful information. (It is unfortunate that attention is very dependent on the "attractiveness" of the market instead of "quality of operationalization".)

Replies from: niplav
comment by niplav · 2024-01-04T16:35:51.321Z · LW(p) · GW(p)

Thank you so much for trading on the markets!

I guess I should've just said "effect size", and clarified in a footnote that I mean Cohen's d.

And if the nootropics post was too statistics-heavy for someone with a math background, I probably need to tone it down/move it to an appendix. I think I can keep the quality of operationalization if I'm willing to be sloppy in the general presentation (as people probably don't care as much whether I use Cohen's d or Hedges' g or whatever).

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-23T21:33:17.877Z · LW(p) · GW(p)

The opening was off-putting to me. I think a shorter post with the details placed in a linked separate post marked as appendix would get more engagement. Also, bold and caps text is off-putting. But as of the time I checked it had 32 upvotes, which is pretty good. I usually think of less than 10 meaning nobody cares, but 10 - 20 is ok, and more than 20 is pretty good. Only really popular posts are above 50 generally.

Replies from: niplav
comment by niplav · 2023-12-24T02:19:55.792Z · LW(p) · GW(p)

Yeah, maybe I'll amend my comment above—after some help from the Manifold team I've gotten enough interest/engagement on my markets that I'm not as worried anymore—except maybe the LSD microdosing one, which is honestly a steal. (At the time my markets were pretty barren in terms of engagement, which was my main optimization target).

I dunno about the upvote count though, two [LW · GW] posts [LW · GW] about the results of self-experiments have been pretty popular (if anyone wants a way to farm LW karma, that'd be a way to do it…)

I think this endeavour is much cooler than ~most of my past posts, and not particularly complicated (I understand why the posts in this sequence [? · GW] aren't very upvoted, since people justifiably just want to upvote what they've read and evaluated), so I was confused.

comment by Viliam · 2023-12-05T16:02:12.580Z · LW(p) · GW(p)

At this moment, the post has 25 karma, which is not bad.

From my perspective, positive karma is good, negative karma is bad, but 4x higher karma doesn't necessarily mean 4x better -- it could also mean that more people noticed it, more people were interested, it was short to read so more people voted, etc.

So I think that partially you are overthinking it, and partially you could have made the introduction shorter (basically to reduce the number of lines someone must read before they decide that they like it).

Replies from: niplav
comment by niplav · 2023-12-05T16:08:44.703Z · LW(p) · GW(p)

Yeah, when I posted the first comment in here, I think it had 14?

I was maybe just overly optimistic about the amount of trading that'd happen on the markets.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-23T21:37:08.782Z · LW(p) · GW(p)

The off-putting part about betting to me was the non-objective measure of meditative engagement. Gwern's n-back test was better for being objective and precise.

Replies from: niplav
comment by niplav · 2023-12-24T02:22:53.449Z · LW(p) · GW(p)

Hm, interesting. I think there's more α in investigating meditative performance, and thought the subjectivity wouldn't be as much of a problem because I randomize & blind. But I get why one would be skeptical of that.

I do test for a bunch of other stuff though, e.g. flashcard performance (more objective I think?), which was surprisingly unmoved in my past two experiments. But I don't resolve the markets based on that. I very briefly considered putting up markets on every affected variable and every combination of substance, but then decided that nobody was going to forecast that.

comment by arjunpi (arjun-p) · 2024-02-04T18:59:11.297Z · LW(p) · GW(p)

Hey, I've been reading stuff from this community since about 2017. I'm now in the SERI MATS program where I'm working with Vanessa Kosoy. Looking forward to contributing something back after lurking for so long :P

comment by MiguelDev (whitehatStoic) · 2023-12-06T13:28:03.085Z · LW(p) · GW(p)

I hope it's not too late to introduce myself, and I apologize if it is. I'm Miguel, a former accountant who decided to focus on researching/upskilling to help solve the AI alignment problem.

Sorry if I confused people here about what I was trying to do these past months with my posts exploring machine learning.

Replies from: TrevorWiesinger, Screwtape, Charlie Steiner
comment by trevor (TrevorWiesinger) · 2023-12-12T03:01:27.590Z · LW(p) · GW(p)

Can I interest you in working in AI policy if technical alignment doesn't work out? You'll want to visit DC and ask a ton of people there if you seem like a good fit (or ask them who can evaluate people). Or you can apply for advising on 80k, or use the LessWrong Intercom feature in the bottom-right corner.

I know that technical alignment is quant and AI policy is not, and accounting is quant, but my current understanding is that >50% of accountants can be extremely helpful in AI policy whereas <50% of accountants can do original technical alignment research. 

More ML background is a huge boost in both areas, not just alignment. People good at making original discoveries in alignment will be able to reskill back to alignment research during crunch time [LW · GW], but right now is already crunch time for AI policy.

Replies from: whitehatStoic
comment by MiguelDev (whitehatStoic) · 2023-12-12T07:13:25.061Z · LW(p) · GW(p)

Hi @trevor [LW · GW]! I appreciate the ideas you shared, and yeah, I agree that most accountants are probably better off helping via the AI policy route!

But to be clear, I'm doing some AI policy work back home in the Philippines as part of the newly formed Responsible AI committee, so I don't think I'm falling short on this end.

I have looked at the AI safety problem deeply, and my personal assessment is that it is difficult to create workable policies that route to the best outcomes, because we (as a society) lack an understanding of the mechanisms that make transformer tech work. My vision of AI policies that could work would somehow capture a deep level of the lab work being done by AI companies - like standardizing learning rates or the number of epochs allowed - tied, hopefully, to a robust and practical [LW · GW] alignment theory, something we do not have at the moment. Because of this view, I chose to help in the pursuit of solving the alignment problem instead. The theoretical angle [LW · GW] I am pursuing is significant enough to push me to learn machine learning, and so far I was able to create RLFC [LW · GW] and ATL [? · GW] through this process. But yeah, maybe an alternative scenario for me is doing 100% AI policy work - I'm open to it if it will produce better results in the grand scheme of things.



(Also, regarding the LessWrong Intercom feature in the bottom-right corner: I did have many discussions with the LW team there - something I wish had been available months ago, but I think one needs a certain level of karma to get access to this feature.)

comment by Screwtape · 2023-12-06T18:27:47.629Z · LW(p) · GW(p)

Welcome! Glad to have you here.

Replies from: whitehatStoic
comment by Charlie Steiner · 2023-12-06T15:30:30.741Z · LW(p) · GW(p)

Welcome!

Replies from: whitehatStoic
comment by Yoav Ravid · 2023-12-13T08:29:47.809Z · LW(p) · GW(p)

Feature suggestion: unexplained strong downvotes have been something that has bothered people for a long time, and requiring a comment to strongly downvote has been suggested several times before. I agree that this is too much to require, so I have a similar but different idea. When you strongvote (whether positive or negative), you'd get a popup with a few reasons to pick from for why you chose to strongly vote (a bit like the new reacts feature). For strong downvotes it may look like this:

  • This post is overrated, This post is hazardous, This post is false, This post is below standards.

And for strong upvotes it may look like

  • This post is underrated, this post is important, etc.

Choosing one of them will be required in order to strongvote (though both will have an 'other' option, so you don't have to pick a reason that isn't actually your reason). Those reasons will be shown anonymously either to the author or to everyone.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-13T08:48:39.522Z · LW(p) · GW(p)

Yeah, I do think that's not super crazy. I do think that it needs some kind of "other" option, since I definitely vote for lots of complicated reasons, and I also don't want to be too morally prescriptive about the reasons for why something is allowed to be downvoted or upvoted (like, I think if someone can think of a reason something should be downvoted that I didn't think of, I think they should still downvote, and not wait until I come around to seeing the world the way they see it). 

Seems worth an experiment, I think.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-13T08:56:20.333Z · LW(p) · GW(p)

Yep, the purpose is providing the author with information, without making it too burdensome to strongvote, and without restricting when a strongvote is allowed.

comment by SashaWu · 2024-02-22T17:00:53.232Z · LW(p) · GW(p)

I've been a lurker here for a long time. Why did I join?

I have a project I would like to share and discuss with the community. But first, I would like to hear from you guys. Will my project fit in here? Is there interest?

My project is: I wrote a book for my 6yo son. It is a bedtime-reading kind of book for a reasonably nerdy intelligent modern child.

Reading to young kids is known to be very beneficial to their development. There are tons of great books for any age and interests. My wife and I have read and enjoyed a lot of them with our boy.

However, I still wasn't satisfied. Most of what I could find was too stale, pedestrian, or just plain irrelevant to a child growing up in the modern world. The world is already vastly different from what the authors of these books had to experience. Right now, our world is on the verge of becoming even more unrecognizably different.

So, I wrote my own. I did read it to my boy, and he enjoyed it a lot. My wife enjoyed it too, and suggested it might be interesting to others.

Why Lesswrong? I think there are both cons and pros (in that order) to publishing it here.

Cons: This is not a book teaching young kids rational thinking as such. Its focus is more on psychological matters: overwhelm, emotions, attachments, obligations. One of the book's themes is cheating: with so much help available from increasingly sentient technology, when does using that help become cheating? Another core topic is happiness: what makes us happy? How to achieve and maintain that state? Can there be a cheated happiness?

Pros: It is a weird and rather complex book. Its subject matter includes virtual companions, space travel, and benevolent but still scary superintelligences. My boy is a bit of a language nerd, so there are elements of language-themed worldbuilding as well. Overall, even though it's a book for kids, I was trying to make it interesting for adults too - at least, for adults who are somewhat similar to me. LessWrong is one of the few places I know where such people gather.

More cons: Despite sounding vaguely sci-fi, it's not serious sci-fi. I don't pretend to describe the real, or even to any extent realistic, world. It's basically a fairy tale. At times, it chooses to be poetic at the expense of rationality. Also, at times it's quite idiosyncratic to our family, so it may sound confusing or even off-putting to others.

More pros: I do think it's reasonably well written and fun to read :)


 

Replies from: Yoav Ravid
comment by Yoav Ravid · 2024-02-24T05:42:56.353Z · LW(p) · GW(p)

Welcome to LessWrong! Your story sounds fitting to me. I'd love to read it :)

comment by Steven Byrnes (steve2152) · 2024-02-14T00:27:15.659Z · LW(p) · GW(p)

I'm not a fan of @Review Bot [LW · GW] because I think that when people are reading a discussion thread, they're thinking and talking about object-level stuff, i.e. the content of the post, and that's a good thing. Whereas the Review Bot comments draw attention away from that good thing and towards the less-desirable meta-level / social activity of pondering where a post sits on the axis from "yay" to "boo", and/or from "popular" to "unpopular".

(Just one guy's opinion, I don't feel super strongly about it.)

Replies from: habryka4, kave
comment by habryka (habryka4) · 2024-02-14T00:56:37.227Z · LW(p) · GW(p)

I think the bot is currently more noticeable than it will be once we have cleared out the 2023/2024 backlog. Usually the bot just makes a comment on a post when it reaches 100 karma, but since we are just starting it, it's leaving a lot of comments at the same time whenever older posts that don't yet have a market get voted on.

The key UI component I care about is actually not the comment (which was just the most natural place to put this information), but the way the post shows up in post-lists: 

The karma number gets a slightly different (golden-ish) color, and then you can see the likelihood that it ends up at the top of the review on hover as well as at the top of the post. 

The central goal is to both allow us to pull forward a bunch of the benefits of the review, and to create a more natural integration of the review into the everyday experience of the site.

comment by kave · 2024-02-14T00:52:04.782Z · LW(p) · GW(p)

That's plausible. The counter hope for the markets is that they are less "yay"/"boo" because the review is (hopefully) less "yay"/"boo".

Also, it will be less active in "Recent Discussion" soon; currently there's a bit of a backlog of eligible posts that it's getting triggered for.

comment by trevor (TrevorWiesinger) · 2023-12-07T20:46:10.152Z · LW(p) · GW(p)

Some thoughts about e/acc that weren't worthy of a post:

  • E/acc is similar to early 2010s social justice, in that it's little more than a war machine [LW(p) · GW(p)]; they decided that injustice was bad and that therefore they were going to fight it, and that anyone who criticized them was either weakening the coalition or opposing the coalition.
  • Likewise, E/acc decided that acceleration was good and anyone opposing them was evil luddites, and you had to use the full extent of your brain to "win" each confrontation as frequently as possible.
  • E/acc people like Beff Jezos collided with AI safety gradually, so they encountered a worldview that is logically developed and probably obviously correct, but they encountered it piece by piece.
  • As a result, they thought up justifications on every rare occasion that they encountered AI safety's logic debunking their takes. 
  • Over time, they progressively ended up twisted into a pro-extinction, pro-paperclip shape, and layered a war machine on top of that stance, which is now sufficient to defend against true scissor statements from AI safety (e.g. 85% of e/acc themselves don't want human extinction, but your leaders do).
Replies from: None
comment by [deleted] · 2023-12-12T02:39:58.478Z · LW(p) · GW(p)

The crazy thing is that e/acc, meme cult that it is, I feel is maybe a more realistic view of the world.

If you assume that there's no way you can dissuade others from building AI - and this includes wealthy corporations who can lobby with lots of money and demand the right to buy as many GPUs as they want, nuclear-armed smaller powers, and China - what do you do?

Imagine 2 simple scenarios.

World A: they built AI. Someone let it get out of hand. You have only pre-AI technology to defend yourself.

World B: you kept up in the arms race but did a better job on model security and quality. Some of the low-hanging fruit for AI includes things like self-replicating factories and gigascale surveillance.

Against a hostile superintelligence you may ultimately lose but do you want the ability to surveil and interpret a vast battlespace for enemy activity, and coordinate and manufacture millions or billions of automated weapons or not?

You absolutely can lose but in a future world of escalating threats your odds are better if your country is strapped with the latest weapons.

Do you agree or disagree? I am not saying e/acc is right, just that historically no arms agreement has ever really been successful. SALT wasn't disarmament and the treaty has effectively ended. Were Russia wealthier it would be back to another nuclear arsenal buildup.

What do you estimate the probability that a global multilateral AI pause could happen? Right now based on the frequentist view that such an event has never been seen in history, should it rationally be 0 or under 1 percent? (Note with this last sentence this isn't my opinion, imagine you are a robot using an algorithm. What would the current evidence support? If you think my statement that an international agreement to ban a promising strategic technology and all equivalent alternatives has never happened is false, how do you know?)
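
(For what it's worth, one standard way a "robot using an algorithm" could avoid answering exactly 0 is Laplace's rule of succession - shown here purely as an illustration, with n comparable historical cases and s observed successes:)

```latex
P(\text{success on the next trial}) = \frac{s + 1}{n + 2}
\qquad\text{which, with } s = 0 \text{ successes, gives } \frac{1}{n + 2}.
```

With a few dozen comparable historical cases and zero successes, that lands in the low single-digit percents rather than at exactly 0.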

comment by Andy Arditi (andy-arditi) · 2023-12-06T22:11:38.791Z · LW(p) · GW(p)

Hello! I'm Andy - I've recently become very interested in AI interpretability, and am looking forward to discussing ideas here!

comment by NicholasKross · 2023-12-05T01:45:09.122Z · LW(p) · GW(p)

Re-linking my feature request from yesterday [LW(p) · GW(p)].

Replies from: MondSemmel
comment by MondSemmel · 2023-12-06T12:24:55.009Z · LW(p) · GW(p)

Apparently the most reliable way to make sure feature requests are seen is to use the Intercom.

comment by trevor (TrevorWiesinger) · 2023-12-05T12:51:34.907Z · LW(p) · GW(p)

Feature proposal: Highlights from the Comments, similar to Scott Alexander's version

You make a post containing what you judge to be the best of other people's comments on a topic or an important period like the OpenAI incident. The comments' original karma isn't shown, but people can give them new votes, and the positive votes will still accrue to the writer instead of the poster.

This is because, like dialogues, writing lesswrong comments is good for prompting thought.

I don't know about highlighting other people's successful comments because they might not want them displayed (maybe some kind of button you click to request permission to use their comment in your own highlights post? they would only click a button to accept or reject).

Replies from: Screwtape, MondSemmel
comment by Screwtape · 2023-12-06T18:30:22.461Z · LW(p) · GW(p)

I'm tentatively tempted to start doing this in a shortform.

I notice I feel like it's fine to highlight someone's comment? They put it on the site, so it's not private. I'd be keeping it on the same site, not taking it somewhere else without attribution. I wouldn't generally like my contributions moved between places or attributed to me on other pseudonyms, and maybe there's a stronger argument here than I'm thinking.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2023-12-06T20:16:37.641Z · LW(p) · GW(p)

How do shortforms work? Doesn't virtually nobody see them?

Replies from: Screwtape
comment by Screwtape · 2023-12-06T20:25:08.456Z · LW(p) · GW(p)

My understanding is shortforms have next to no visibility unless people are already subscribed to a particular person's shortform feed. That seems about right for me? If I'm interested in what say, Scott thinks the best comments are but not interested in what Ray thinks the best comments are, then I subscribe to one but not the other.

I'm not saying this is the best possible UX, I'm just noting I'm tempted to try this with the affordances I have.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-06T20:30:30.057Z · LW(p) · GW(p)

As a quick note, I think it's pretty likely we will copy the EA Forum's Quick Takes section: https://forum.effectivealtruism.org/ 

I quite like how it works, and I think it gives about the right level of visibility to shortform posts.

Replies from: Screwtape, TrevorWiesinger
comment by Screwtape · 2023-12-06T23:50:35.561Z · LW(p) · GW(p)

Tangential question: I know how to view all the posts by karma or by other criteria. Is there a way to view all comments by karma or other criteria? It occurs to me that part of the reason I don't usually read comment threads except on my own posts is that I don't know where the good discussion is happening.

comment by trevor (TrevorWiesinger) · 2024-02-28T05:30:51.622Z · LW(p) · GW(p)

Oh boy, I can't wait for this.

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-28T05:32:21.514Z · LW(p) · GW(p)

It's done as of yesterday!

comment by MondSemmel · 2023-12-06T12:28:38.344Z · LW(p) · GW(p)

Apparently the most reliable way to make sure feature requests are seen is to use the Intercom.

Apart from that, I like the suggestion. There are many LW comments that warrant being turned into full posts, and this seems like a neat complementary suggestion.

If the feature was implemented, there would have to be a moderation policy requiring posters not to use this feature to pull comments you disagree with and turn them into top-level disagreements with individuals (if the original commenter wanted to do that, they could dialogue with you), nor to use it for witch hunts ("look at all the bad takes of this guy!").

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-24T01:51:20.270Z · LW(p) · GW(p)

Well, you can already visit the profile of someone you disagree with and just scroll through a list of all the comments they've made. So maybe if it's a public comment we don't need to worry about the privacy aspects? When I want to make private comments on a post, I private-message the author. Public comments are for everyone to read.

comment by K Rodrigo (kyle-iroe-pagarigan-rodrigo) · 2023-12-23T21:33:39.520Z · LW(p) · GW(p)

Hello! I'm a young accountant, studying to be a CPA. I've messed around in similar epistemic sandboxes all my life without knowing this community ever existed. This is a lovely place, reminds me of a short story Hemingway wrote called A Clean, Well-Lighted Place. 

I came from r/valueinvesting. I'm very much interested in applying LW's latticework of knowledge towards improving the accounting profession. If there are Sequences and articles you think are relevant to this, I would eat it up. Thank you! 

Replies from: gilch
comment by gilch · 2024-01-04T21:18:18.008Z · LW(p) · GW(p)

Maybe the series starting with You Need More Money [? · GW]

comment by lsusr · 2024-02-16T02:41:31.282Z · LW(p) · GW(p)

I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-16T03:09:23.587Z · LW(p) · GW(p)

Thank you! I also am very excited about it, though sadly adoption hasn't been amazing. Would love to see more people organically produce dialogues!

comment by UnplannedCauliflower · 2024-01-29T09:03:15.237Z · LW(p) · GW(p)

LWCW 2024 Save The Date

tl;dr: This year’s LWCW happens 13-16th September 2024. Applications open April/May. We’re expanding to 250 attendees and looking for people interested in assisting our Orga Team.

The main event info is here:

https://www.lesswrong.com/events/tBYRFJNgvKWLeE9ih/less-wrong-community-weekend-2024 [? · GW]

And fragments from that post:

Friday 13th September - Monday 16th September 2024 is the 11th annual LessWrong Community Weekend (LWCW) in Berlin. This is the world's largest rationalist social gathering, which brings together 250 aspiring rationalists from across Europe and beyond for four days of socialising, fun and intellectual exploration.

Here is an announcement post from last year [? · GW], so you can develop a sense of what the event is like if you've never been. For those who want to extend their stay in Berlin, EAGx is happening the same weekend, and the EA Summer Camp is the following weekend.

Details

When: Friday 13th September - Monday 16th September 2024. The event begins with an opening ceremony on Friday afternoon. People are free to leave on Sunday evening, but some activities continue until Monday.

Prices: All your money goes into paying for the venue, food, equipment and other expenses directly associated with the event. The exact prices will be announced when the applications open, for now we can estimate between 150€ and 250€. Owing to our generous donors, we can provide some financial aid for those who won’t be able to afford the price.

Contact: If you have any questions, post them in the comments below or email lwcw.europe[at]gmail.com.

Help Us Spread The Word

LWCW is volunteer organised with no marketing budget so we rely on word of mouth to get the message out. If you’re able to, please consider sharing this post on social media or sending the link to all your friends who might enjoy attending.


 

comment by Vanessa Kosoy (vanessa-kosoy) · 2024-01-28T11:06:54.208Z · LW(p) · GW(p)

I'm going to be in Berkeley February 8 - 25. If anyone wants to meet, hit me up!

comment by WalterL · 2024-01-19T18:41:24.559Z · LW(p) · GW(p)

If you watch the first episode of Hazbin Hotel (quick plot synopsis: Hell's princess argues for reform in the treatment of the damned to an unsympathetic audience), there's a musical number called 'Hell Is Forever' sung by a sneering maniac in the face of an earnest protagonist asking for basic, incremental fixes.

It isn't directly related to any of the causes this site usually champions, but if you've ever worked with the legal/incarceration system and had the temerity to question the way things operate, the vibe will be very familiar.

Hazbin Hotel Official Full Episode "OVERTURE" | Prime Video (youtube.com)

comment by Sherrinford · 2024-01-06T14:31:55.194Z · LW(p) · GW(p)

Almost all the blogs in the world seem to have switched to Substack, so I'm wondering if I'm the only one whose browser is very slow in loading and displaying comments from Substack blogs. Or is this a Firefox problem?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2024-01-06T15:18:23.902Z · LW(p) · GW(p)

No, it’s not just you, and it’s not just Firefox. Substack comments really are hideously slow to load. (That’s one of the reasons why they don’t all load at once—which really only makes them worse, UX-wise.)

Replies from: Sherrinford
comment by Sherrinford · 2024-01-07T17:34:59.418Z · LW(p) · GW(p)

I don't really understand why Substack became so popular, compared to eg WordPress. Is Substack writing easier to monetize?

Replies from: gwern, ChristianKl
comment by gwern · 2024-01-07T18:51:33.315Z · LW(p) · GW(p)

Yes. Substack has Stripe billing built in, and a user base which both accepts monetization culturally and is probably already subscribed to another Substack, so it's much easier to subscribe to a second.

comment by ChristianKl · 2024-01-07T23:13:51.158Z · LW(p) · GW(p)

Getting new updates via email matters a lot for user retention. Sending emails in bulk while getting around spam filters is not something WordPress can easily do out of the box.

comment by Adam Zerner (adamzerner) · 2023-12-17T08:58:20.085Z · LW(p) · GW(p)

Weird idea: an Uber Eats-like interface for EA-endorsed donations.

Imagine: You open the app. It looks just like Uber Eats. Except instead of seeing the option to spend $12 on a hamburger, you see the option to spend $12 to provide malaria medicine to a sick child.

I don't know if this is a good idea or not. I think evaluating the consequences of this sort of stuff is complicated. Like, maybe it ends up being a PR problem or something, which hurts EA as a movement, which has large negative consequences.

Replies from: Bohaska
comment by Bohaska · 2023-12-29T03:28:23.248Z · LW(p) · GW(p)

Would more people donate to charity if they could do so in one click? Maybe...

comment by gjm · 2023-12-07T23:42:55.813Z · LW(p) · GW(p)

I am confused by the dialogue system. I can't quite tell whether it's telling me the truth but being maddeningly vague about it, or whether it's lying to me, or whether I'm just misunderstanding something.

Every now and then I get a notification hanging off the "bell" icon at top right saying something like "New users interested in dialoguing with you".

On the face of it, this means: at least one specific person has specifically nominated me as someone they would like to have a dialogue with.

So I click on the thing and get taken to a page which shows me (if I'm understanding the text correctly) a list of users I've upvoted recently, divided up according to whether they're "recently active on dialogue matching".

I don't see any indication of the form "this user has nominated you as someone they would like to dialogue with" -- which is fair enough, since there's something to be said for making it possible to say "I'd like to have a dialogue with X" but not tell X that unless they also want to have a dialogue with you. But I also don't see anything of the form "N users have specifically nominated you as someone they would like to dialogue with".

So I have no way to tell whether there's actually someone who would specifically like to Do The Thing with me, or whether it's just that the LW machinery wants me to feel like there is, and the only real basis for this notification I've received is that there are some people who maybe kinda would be a good match, on the basis that I've upvoted them and they've upvoted me or something.

Maybe I'm too cynical, but my natural inclination is to think "if there were actually specific people, then there'd be something explicitly saying so, so probably there aren't". So the general impression I get from this system is (my apologies for the analogy) kinda like a slightly scammy dating app which tells you "73 hot singles are interested in your profile" or something, not because there are actually 73 people who have expressed an interest in dating you but because if they say that then you're more likely to use the app and pay their subscription or view their advertisements.

(Being Old by internet standards and married, I haven't actually used any scammy dating apps, or indeed any not-scammy dating apps. So my impression that that's a thing they sometimes do might be wrong.)

Is there some full explanation somewhere of how this stuff works and what the notifications mean? I do understand that e.g. if X looks at the dialogue-matching list, sees Y there, and checks the little box for Y, and if Y does the same for X, then both of them get told "you and this specific other person are interested in having a dialogue with one another". But I don't understand what "users interested in dialoguing with you" means.

(Another thing I don't know is what LW is telling other users about me on the dialogue-matching page. They get a list of topics I've written/commented on, but so far as I can see I don't have any way to see that list. And they get a list of things I've written that they've read, which of course varies from user to user and it's none of my business what it is for any given person. All this means that I have very little idea what some other user expects, if they have checked the box next to my name.)

Replies from: kave, jacobjacob
comment by kave · 2023-12-07T23:49:34.651Z · LW(p) · GW(p)

It does mean that there are real users who checked you. I think the notifications are plausibly too "scammy dating site" regardless, but they are not false.

Replies from: gjm, gjm
comment by gjm · 2023-12-08T00:08:31.461Z · LW(p) · GW(p)

I realise that there's another thing in this area that I'm possibly confused about. I think I'm not confused and it's just that there isn't a good way to present the relevant information.

So, if I get the notification, that means that at least one person wants to talk to me. So far, so good. And then I go to the dialogue page and see a list of users. But it's not necessarily true that at least one of them wants to talk to me, right?

(Because the list I see is filtered by my having upvoted things they wrote, but AIUI not symmetrically by their having upvoted things I wrote. So maybe user X liked things I wrote, went to the dialogue page, saw my name, and checked the checkbox, causing me to get notified ... but I haven't read what X wrote, or happened not to upvote it -- I don't vote all that much, either up or down -- and so X is not on the list I see. So poor X will be waiting for ever for my response, since I never get presented with the option to suggest dialogue with X.)

This could be "fixed" by including people on the list I see if they've checked my box, but that's no good because then in some cases I can tell that someone's checked my box without ever having to check theirs. (I'm not sure this mechanic actually makes sense for dialogues in the way it maybe does for dating, but it's obviously a very deliberate decision.) Or it could be "fixed" by including people on the list I see if they've upvoted things I wrote, but that's also no good because that leaks information about who's upvoted me. Or it could be "fixed" by including people on the list both at random and if they've checked my box, or both at random and if they've upvoted me, or something, but that's probably no good either because it still leaks some information and many ways of doing it leak way too much information, and because it clutters up the list of potential dialogue partners, and clutters it worse the less information it leaks.

None of these "fixes" seems at all attractive. But the alternative is that in some (many?) cases X will check the box for Y and there will be no way for Y to reciprocate, even if in fact Y would be very interested in dialogue with X.

Replies from: habryka4, kave
comment by habryka (habryka4) · 2023-12-08T00:38:45.684Z · LW(p) · GW(p)

So, if I get the notification, that means that at least one person wants to talk to me. So far, so good. And then I go to the dialogue page and see a list of users. But it's not necessarily true that at least one of them wants to talk to me, right?

No, they will appear on the list somewhere, because the last section on the dialogue matching page is "Recently active on dialogue matching", which shows all users who have checked boxes within some recent time interval. So if they don't appear in any of the previous lists, they will appear there.

Replies from: gjm
comment by gjm · 2023-12-08T01:20:48.785Z · LW(p) · GW(p)

Ah, so it is. Thanks.

comment by kave · 2023-12-08T00:14:42.222Z · LW(p) · GW(p)

Yeah, it does seem like a tricky design problem. Some discussion of it in the thread here [LW(p) · GW(p)].

My current guess is that it would be better to have a casual-feeling non-anonymous "invite to dialogue" than the dating-style algorithm. I also guess it won't be implemented soon (for a combination of things like its marginal value given matching being smaller and how long I expect dialogues to be an organisational priority).

comment by gjm · 2023-12-07T23:57:55.892Z · LW(p) · GW(p)

Thanks for the clarification! I think there would be some value in either putting some message to that effect on the dialogue page, or else having a page linked from there that provides more explanation of what's going on and what everything means.

(The former might be tricky, since what it would be useful to see there might depend on what's in the user's notifications and maybe also on whether they got to the dialogue page by clicking on one of those notifications or by other means. Or maybe it would be bad for it to depend on that since then the contents of the page would change in not-so-predictable ways, which would be confusing in itself. But maybe a message along the lines of "At least one other user has checked the box to mark you as a user they would like to dialogue with. The most recent time this happened was about two days ago." Or something; I haven't really thought this through.)

Replies from: kave
comment by kave · 2023-12-08T00:04:14.238Z · LW(p) · GW(p)

That seems like a good idea! (I don't know exactly when we'll get to it).

(Also, sorry for the brevity of my messages; I am grateful for the details in yours)

Replies from: gjm
comment by gjm · 2023-12-08T00:09:02.421Z · LW(p) · GW(p)

Brevity is fine. I'm sure you have other things to do besides replying to my comments.

comment by jacobjacob · 2023-12-08T00:16:18.972Z · LW(p) · GW(p)

They get a list of topics I've written/commented on, but so far as I can see I don't have any way to see that list

Yeah, users can't currently see that list for themselves (unless of course you create a new account, upvote yourself, and then look at the matching page through that account!). 

However, the SQL for this is actually open source, in the function getUserTopTags: https://github.com/ForumMagnum/ForumMagnum/blob/master/packages/lesswrong/server/repos/TagsRepo.ts

What we show is "The tags [? · GW] a user commented on in the last 3 years, sorted by comment count, and excluding a set of tags that I deemed as less interesting to show to other users, for example because they were too general (World Modeling, ...), too niche (Has Diagram, ...) or too political (Drama, LW Moderation, ...)."
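
For readers who don't want to dig through the repo, here's a rough sketch of what a query with that behavior could look like. The table and column names below are guesses for illustration only, not the actual ForumMagnum schema or the real getUserTopTags implementation.

```typescript
// Illustrative only: sketches the described behavior (tags a user commented on
// in the last 3 years, ordered by comment count, minus an exclusion list).
// A real query would be parameterized rather than interpolating userId directly.
const EXCLUDED_TAG_SLUGS = ["world-modeling", "has-diagram", "drama"]; // example slugs

const topTagsQuerySketch = (userId: string): string => `
  SELECT t.name, COUNT(*) AS comment_count
  FROM "Comments" c
  JOIN "TagRels" tr ON tr."postId" = c."postId"
  JOIN "Tags" t ON t._id = tr."tagId"
  WHERE c."userId" = '${userId}'
    AND c."postedAt" > NOW() - INTERVAL '3 years'
    AND t.slug NOT IN (${EXCLUDED_TAG_SLUGS.map((s) => `'${s}'`).join(", ")})
  GROUP BY t.name
  ORDER BY comment_count DESC
  LIMIT 10;
`;
```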

Replies from: gjm
comment by gjm · 2023-12-08T01:22:12.265Z · LW(p) · GW(p)

Just out of curiosity, is the name "ForumMagnum" an anatomical pun?

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-08T01:29:12.441Z · LW(p) · GW(p)

Lol, no, but that is kind of hilarious. 

I think it's a reference to Francis Bacon's "Instauratio Magna" ("The Great Instauration"), though I am not sure why we would have chosen "Magnum" instead of "Magna" as the spelling.

Replies from: Rana Dexsin
comment by Rana Dexsin · 2023-12-08T11:57:24.415Z · LW(p) · GW(p)

The Latin noun “instauratio” is feminine, so “magna” uses the feminine “-a” ending to agree with it. “forum” in Latin is neuter, so “magnum” would be the corresponding form of the adjective. (All assuming nominative case.)

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-08T19:09:05.141Z · LW(p) · GW(p)

Huh, I learned something today about the name of my own Forum. Thank you!

comment by Vanessa Kosoy (vanessa-kosoy) · 2024-03-02T14:04:17.271Z · LW(p) · GW(p)

Did anyone around here try Relationship Hero and has opinions?

Replies from: habryka4, Liron
comment by habryka (habryka4) · 2024-03-02T19:23:17.279Z · LW(p) · GW(p)

Presumably @Liron [LW · GW] but he is of course biased :P 

comment by Liron · 2024-03-06T23:43:26.289Z · LW(p) · GW(p)

Founder here :) I'm biased now, but FWIW I was also saying the same thing before I started this company in 2017: a good dating/relationship coach is super helpful. At this point we've coached over 100,000 clients and racked up many good reviews.

I've personally used a dating coach and a couples counselor. IMO it helps twofold:

  1. Relevant insights and advice that the coach has that most people don't, e.g. in the domain of communication skills, common tactics that best improve a situation, pitfalls to avoid.
  2. A neutral party who's good at letting you (and potentially a partner) objectively review and analyze the situation.

Relationship Hero hires, measures and curates the best coaches, and streamlines matching you to the best coach based on your scenario. Here's a discount link for LW users to get $50 off.

comment by CstineSublime · 2024-02-20T11:43:12.642Z · LW(p) · GW(p)

Long time lurker introducing myself.

I'm a Music Video Maker who is hoping to use Instrumental Rationality towards accomplishing various creative-aesthetic goals and moving forward on my own personal Hamming Question. The Hammertime [? · GW] sequence has been something I've been very curious about but unsuccessful in implementing.

I'll be scribbling shortform notes which might document my grappling with goals. Most of them will be in some way related to motion picture production or creativity in general. "Questions" as a topic may creep in; it's one of my favorite topics. Having trained as a documentary filmmaker in a previous life, I have spent a lot of time interviewing people and loved it. I also curate a list of interesting or good questions. I get joy from being asked interesting questions.

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-21T19:29:32.079Z · LW(p) · GW(p)

Welcome! Hope you have a good time. Asking good questions is quite valuable, and I think a somewhat undersupplied good on the site, so am glad to have you around!

Replies from: CstineSublime
comment by CstineSublime · 2024-02-21T23:34:01.209Z · LW(p) · GW(p)

Thank you, then I will try to ask good questions when I feel I am in possession of one.

comment by Adam Zerner (adamzerner) · 2024-01-26T04:43:36.710Z · LW(p) · GW(p)

I don't like that when you disagree with someone, as in hitting the "x" for the agree/disagree voting [LW · GW], the "x" appears red. It makes me feel on some level like I am saying that the comment is bad when I merely intend to disagree with it.

comment by Yoav Ravid · 2023-12-31T16:23:23.063Z · LW(p) · GW(p)

The new comments outline feature is great! Thanks, LW team :)

Replies from: gwern
comment by gwern · 2023-12-31T17:02:58.770Z · LW(p) · GW(p)

One idea for improving the floating ToC comment tree: use LLMs to summarize them. Comments can be summarized into 1-3 emoji (GPT-3 was very good at this back in 2020), and each separate thread can be given a one-sentence summary. As it is, it's rather bare and you can get some idea of the structure of the tree and eg. who is bickering with whom, but nothing else.
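(A minimal sketch of what that could look like, assuming the official OpenAI Node SDK; the model name and prompt are placeholders, and a real implementation would obviously want caching and batching.)

```typescript
// Sketch only: summarize a comment into 1-3 emoji via an LLM.
// Assumes the official "openai" npm package and an OPENAI_API_KEY in the environment.
import OpenAI from "openai";

const client = new OpenAI();

async function emojiSummary(commentText: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder; any capable chat model would do
    messages: [
      { role: "system", content: "Summarize the user's comment as 1-3 emoji. Reply with emoji only." },
      { role: "user", content: commentText },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Usage: emojiSummary("I strongly disagree with the premise...").then(console.log);
```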

comment by Dano (El Dano) · 2024-03-21T17:58:24.562Z · LW(p) · GW(p)

Hello! I have been reading and lurking around the place for a long time. This place seemed different from other social media/forums because of the level of discussion held here. It was daunting to finally create an account, but I hope to start commenting/posting later.

Also, I find it funny to consider websites as "places", although it makes sense to think of them that way.

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-22T18:04:40.277Z · LW(p) · GW(p)

Hello and welcome! Looking forward to reading the things you write, and I hope you have a good time!

comment by a littoral wizard · 2024-03-08T00:02:16.583Z · LW(p) · GW(p)

Hey!

I'm an IT consultant who works very closely with an innovative AI-driven product.  Or, to cut the bullshit, I help deploy and integrate customer service platforms filled to the brim with those very chatbots that annoy endless customers daily.

But that's just my day job.  I'm working on a novel (or perhaps a piece of serialized fiction) that just might have silicon shoggoths in it.  That's the kind of content the local fauna enjoys, right?  It's a little too satirical to be entirely rational, but some recent twitter-chatter out of this community has me convinced that I might have tapped into the right zeitgeists.

Looking for some tips on posting fiction here.  Just one long nested thread of chapters?  Or is there a better way to do it?

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-08T02:17:07.844Z · LW(p) · GW(p)

Welcome! Hope you have a good time!

Most fiction posted here tends to be serial in structure, so that one unit of content is about the size of a blogpost. My guess is that's the best choice, but you can also try linking the whole thing.

comment by kave · 2024-01-20T17:31:09.701Z · LW(p) · GW(p)

Curious about people's guesses in this market: [embedded market widget]

comment by lesswronguser123 (fallcheetah7373) · 2024-01-17T18:02:36.418Z · LW(p) · GW(p)

Hey there! While reading Steven Pinker's book on rationality, I got curious about the "rationality community" he keeps referring to. Then I saw him mention trying to be "less wrong", searched it up, and stumbled upon this place. You guys read and write a lot. Just from browsing here, maybe I should focus on increasing my attention span even more.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2024-01-22T09:19:06.929Z · LW(p) · GW(p)

Welcome! I think you may be interested in a review of Steven Pinker's book on rationality [LW · GW].

comment by Davis_Kingsley · 2024-01-04T13:36:51.516Z · LW(p) · GW(p)

Whatever happened to AppliedDivinityStudies, anyway? Seemed to be a promising blog adjacent to the community but I just checked back to see what the more recent posts were and it looks to have stopped posting about a year ago?

comment by ville · 2024-01-01T11:17:13.662Z · LW(p) · GW(p)

Hi LessWrong! I am Ville. I have been reading LW / ACX and other rationalish content for a while and was thinking of joining the conversation. I have been writing on Medium previously, but have been struggling with the sheer amount of clickbait and low-effort content on the platform. I also don't really write frequently enough to justify a Substack or other dedicated personal blog.

However, as LW has a very high standard for content, I am unsure whether my writing would be something people here would enjoy. Most recently, I wrote a series of two fables about the risks of measures becoming targets, concerning search engine optimisation and biological clocks. These were an attempt to blend creative writing with non-fiction, as I've been a bit disillusioned with the typical, slightly dry, non-fiction writing. While the concepts are quite elementary, I feel the post on biological clocks especially might give some food for thought, as the question of how to measure the effectiveness of anti-ageing / life extension treatments will have to be solved at some point, and relying on biomarkers can backfire in all kinds of ways.

My question to more experienced users: do you think the blog posts below would be something the LW community would like to see posted and discussed on the platform?
- https://villekuosmanen.medium.com/the-fall-of-googlopolis-76140339c3fc
- https://villekuosmanen.medium.com/aldix-and-the-book-of-life-5da3f0001b6c

In any case, I am hoping to join some conversations here in the future! Also, I live in London so some of you may have seen me in the events at Newspeak House (such as the ACX meet-up in autumn) and I am planning on attending more events in the future.

Replies from: papetoast
comment by papetoast · 2024-01-01T16:03:29.177Z · LW(p) · GW(p)

I didn't read either link, but you can write whatever you want on LessWrong! Most posts you see are very high quality, but that's partly because there is a distinction between frontpage posts (promoted by mods) and personal blogposts (the default). See Site Guide: Personal Blogposts vs Frontpage Posts [LW · GW].

And yes some people do publish blogposts on LessWrong, jefftk [LW · GW] being one that I follow.

FAQ: What can I post on LessWrong? [? · GW]

Posts on practically any topic are welcomed on LessWrong. I (and others on the team) feel it is important that members are able to “bring their entire selves” to LessWrong and are able to share all their thoughts, ideas, and experiences without fearing whether they are “on topic” for LessWrong. Rationality is not restricted to only specific domains of one’s life and neither should LessWrong be. [...]

comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-24T01:43:32.918Z · LW(p) · GW(p)

I had a discussion recently where I gave feedback to Ben P. about the dialogue UI. This got my brain turning, and a few other recommendations for UI changes bubbled up to top of mind.

Vote display (for karma and agree/disagree)

Histogram of distribution of votes (tiny, like sparklines, next to the vote buttons). There should be four bars: strong negative vote count, negative vote count, positive vote count, strong positive vote count. The sum of all votes is less informative and interesting to me than the distribution. I want to know the difference between something being controversial versus meh. Or, for example, if something is strongly liked by a few, but mildly disliked by many.

Comment Optimization
(I'm not saying these things need to be mandatory, just having them as options would be sufficient. I do think they should be on-by-default though.)
Hide number of up/down votes, poster name, agree/disagree until you've voted on karma or agree/disagree respectively. 
Hide emojis & underlines until you've voted on karma and agree/disagree.

Vote buttons at bottom of comments, rather than top. (Optimizing for reading long comments where the top of the comment will be out of sight. Also, don't want someone to vote on the comment before they've read the whole thing!)

Agree/disagree voting on posts, not just comments. Maybe even on paragraphs? But others' votes not visible unless you place your own.

Less collapsing of comments. Long comments are a good thing! I rarely think long comments are empty rants. They almost always have logical structure and points that need the whole thing to understand. Collapsing/shortening should happen only on request.
The thing that should be collapsed is comment threads. Maybe don't show anything but the first message in a thread and the thread comment count until a vote has been registered for a comment. Discourage skimming, encourage careful reading.

The comment map is helpful. Even more helpful would be if it had indicators for where you'd interacted with those comments. So that you can either deliberately go back to a comment you agreed/disagreed with to find a useful quote, or avoid the comments you've interacted with in order to find novel comments to read.


Profile Optimization
The number of posts or comments displayed on the main page or a person's profile is way too small. I want to be able to set a preference for showing like... 50 or 100 or 300 before needing to click 'see more'.

And I'm usually looking for either a comment OR a post, and know ahead of time which type. So the sections should start collapsed entirely by default, or simply be links: a link to "posts", "comments", "short forms", "all writing combined". It would also help to be able to sort the viewed category by recency, karma, reactivity (total number of agree and disagree votes), or controversy (highest number of agree/disagree pairs, i.e. min(agree, disagree)/2).

If I have the full text of the user's most recent comments and posts in a list 300 items long, it's easy for me to text-search for keywords. This is a much better option than trying to use a search system built/maintained by the site.

Loading so many items at once may take some time. You could load in batches, displaying them as they get loaded (and showing a spinning loading symbol at the top while this happens.) This is far preferable to having to click for each batch!

Similarly, I always want the front page to display the top 100 or 200 posts, not like... 15 or whatever.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-26T18:14:56.478Z · LW(p) · GW(p)

Oh yeah, and the order of interacting with a post should be: read post, vote, comment. So why is the vote button at the top? We don't want to encourage people to vote before reading! So why have them read the post, scroll to the top, vote, scroll back to the bottom, comment....

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-26T18:23:13.363Z · LW(p) · GW(p)

Posts that have more than about 3 paragraphs of text also have vote buttons at the bottom. It's only on very short posts, where it would look really weird to have two vote sections right next to each other, that we omit one of them.

Replies from: nathan-helm-burger
comment by Nathan Helm-Burger (nathan-helm-burger) · 2023-12-27T06:28:40.727Z · LW(p) · GW(p)

Yes, I'm aware of that. I'm saying that they shouldn't have them at the top. Why let someone vote on a post if they haven't made it to the bottom?

comment by quetzal_rainbow · 2023-12-14T19:09:10.655Z · LW(p) · GW(p)

Dear LW team, I have found that I can upvote/agreement-vote deleted comments, and it gives karma to the author of the deleted comment. Is it supposed to work like this?

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-14T19:37:31.302Z · LW(p) · GW(p)

Seems kinda fine. Seems like a weird edge-case that doesn't really matter that much. I would consider it a bug, but not a very important one to fix.

comment by Adam Zerner (adamzerner) · 2024-02-26T22:17:04.664Z · LW(p) · GW(p)

I am seeing a new "Quick Takes" feature on LessWrong. However, I can't find any announcement or documentation for the feature. I tried searching for "quick takes" and looking in the FAQ [? · GW]. Can someone describe "Quick Takes"?

Replies from: Raemon
comment by Raemon · 2024-02-27T01:52:54.805Z · LW(p) · GW(p)

They are just a renaming of "shortform", with some new UI. "Quick Take" sort of conveyed what we were actually going for, which is more like "you wrote it down quickly" than "it was literally short".

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-27T03:14:36.656Z · LW(p) · GW(p)

The EA Forum came up with the name when they adopted the "shortform" feature, and it seemed like a better name to me, so we copied it.

comment by Sherrinford · 2024-01-23T17:23:52.015Z · LW(p) · GW(p)

By now there are several AI policy organizations. However, I am unsure what the typical AI safety policy is that any of them would enforce if they had unlimited power. Is there a summary of that?

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-01-25T04:12:30.259Z · LW(p) · GW(p)

Surprisingly enough, this question actually has a really good answer.

Given unlimited power, you create dath ilan [LW · GW] on Earth. That's the most optimal known strategy given the premise. 

Yudkowsky's model is far from perfect (other people like Duncan have thought about their own directions), but it's the one that's most fleshed out by far (particularly in projectlawful [LW · GW]), and it's an optimal state in that it allows people to work together and figure out for themselves how to make things better.

Replies from: Sherrinford
comment by Sherrinford · 2024-01-28T19:43:46.205Z · LW(p) · GW(p)

Okay, maybe I should rephrase my question: What is the typical AI safety policy they would enact if they could advise president, parliament and other real-world institutions?

Replies from: gilch
comment by gilch · 2024-02-04T20:28:35.303Z · LW(p) · GW(p)

Initial ask would be compute caps for training runs. In the short term, this means that labs can update their models to contain more up-to-date information but can't make them more powerful than they are now.

This need only apply to nations currently in the lead (mostly U.S.A.) for the time being but will eventually need to be a universal treaty backed by the threat of force. In the longer term, compute caps will have to be lowered over time to compensate for algorithmic improvements increasing training efficiency.

Unfortunately, as technology advances, enforcement would probably eventually become too draconian to be sustainable. This "pause" is only a stopgap intended to buy us more time to implement a more permanent solution. That would at least look like a lot more investment in alignment research, which unfortunately risks improving capabilities as well. Having already spent a solid decade on this, Yudkowsky seems pessimistic that this approach can work in time and has proposed researching human intelligence augmentation instead, because maybe then the enhanced humans could solve alignment for us.

Also in the short term, there are steps that could be taken to reduce lesser harms, such as scamming. AI developers should have strict liability for harms caused by their AIs. This would discourage the publishing of the weights of the most powerful models. Instead, they would have to be accessed through an API. The servers could at least be shut down or updated if they start causing problems. Images/videos could be steganographically watermarked so abusers could be traced. This isn't feasible for text (especially short text), but servers could at least save their transcripts, which could be later subpoenaed.

Replies from: Sherrinford
comment by Sherrinford · 2024-02-04T21:52:54.741Z · LW(p) · GW(p)

Thank you very much. Why would liability for harms caused by AIs discourage the publishing of the weights of the most powerful models?

comment by Jacob G-W (g-w1) · 2023-12-05T02:19:04.508Z · LW(p) · GW(p)

It should probably say 2023 review instead of 2022 at the top of LessWrong.

Replies from: habryka4
comment by habryka (habryka4) · 2023-12-05T03:14:20.872Z · LW(p) · GW(p)

It is terribly confusing, but it should not. Each year we review the posts that are at least one year old; as such, at the end of 2023 we review all posts from 2022, hence "2022 Review".

Replies from: TrevorWiesinger, g-w1
comment by trevor (TrevorWiesinger) · 2023-12-05T14:43:09.342Z · LW(p) · GW(p)

For the voting system's point cost, what was the function that outputs the point costs (1, 10, 45) from the vote counts (1, 4, 9), which are basically just the squares of (1, 2, 3)?

Replies from: MondSemmel
comment by MondSemmel · 2023-12-06T12:36:43.991Z · LW(p) · GW(p)

The first option costs 1, the second costs sum(1..4), and the third costs sum(1..9). So the idea is that every vote costs 1 more vote point than the previous one, and the cost for n votes is simply $\frac{n(n+1)}{2}$. I don't know where the formula comes from, however.

Replies from: habryka4, TrevorWiesinger
comment by habryka (habryka4) · 2023-12-07T20:52:15.497Z · LW(p) · GW(p)

It's quadratic voting: https://vitalik.eth.limo/general/2019/12/07/quadratic.html 

Replies from: MondSemmel
comment by MondSemmel · 2023-12-07T23:19:03.809Z · LW(p) · GW(p)

My thought process on writing that comment was roughly: "This is quadratic voting, right? Let me check the Wikipedia page. Huh, that page suggests a formula where vote cost scales quadratically with vote number. Maybe I misremembered what quadratic voting is? Let me just comment with what I do remember."

So the problem was that I'd only glanced at the Wikipedia article, and didn't realize that the simplified formula there, $\text{cost} = n^2$, is either an oversimplification or an outright editing error where they drop a factor of $\frac{1}{2}$. The actual approximation of the quadratic voting formula (as explained in the linked Vitalik essay, which I'd apparently also read years ago but had mostly forgotten since) is $\text{cost} \approx \frac{n^2}{2}$.

And @trevor [LW · GW], here's a quote from that essay on the motivation for this formula:

But what do we actually want? Ultimately, we want a scheme where how much influence you "buy" is proportional to how much you care...

So how do we match these two up? The answer is clever: your n'th unit of influence costs you $n.
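Concretely, with the numbers from earlier in this thread (nothing new here, just the arithmetic spelled out): if the n'th vote costs n points, then

$$\text{cost}(n) = \sum_{k=1}^{n} k = \frac{n(n+1)}{2},$$

so $\text{cost}(1) = 1$, $\text{cost}(4) = 10$, and $\text{cost}(9) = 45$, matching the $(1, 10, 45)$ point costs for $(1, 4, 9)$ votes, and approaching $\frac{n^2}{2}$ for larger $n$.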

comment by trevor (TrevorWiesinger) · 2023-12-06T16:26:16.073Z · LW(p) · GW(p)

That is a surprisingly satisfying answer, thank you.

comment by Jacob G-W (g-w1) · 2023-12-05T12:34:53.589Z · LW(p) · GW(p)

Ah, sorry for the confusion. Thanks!

comment by niplav · 2023-12-05T00:17:11.972Z · LW(p) · GW(p)

I remember a Slate Star Codex post about a thought experiment that goes approximately like this:

  • In the past, AI systems have taken over the universe and colonized it completely
  • Those AI systems are in extremely strong multipolar competition with one another, and the competitive dynamics are incredibly complex
  • In fact, those dynamics are so complex and inviolable that they constitute whole new physical laws
  • So in fact we are just living "on" those competitive AI systems as a substrate, similar to how ecosystems have competing and cooperating cells as a substrate
  • This might've already happened multiple times

Does anyone know which post I'm remembering? I've tried googling but Google is not very helpful.

comment by Cheops (cheops-steller) · 2024-03-28T00:26:54.805Z · LW(p) · GW(p)

Hello there. This seems to be a quirky corner of the internet that I should've discovered and started using years ago. Looking forward to reading these productive conversations! I am particularly interested in information, computation, complex systems and intelligence.

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-28T04:13:13.968Z · LW(p) · GW(p)

Hey Cheops!

Good to have you around; you'll definitely not be alone here with these interests. And always feel free to complain about any problems you run into, either in these Open Threads or via the Intercom chat in the bottom right corner.

comment by charlieoneill (kingchucky211) · 2024-03-22T22:01:25.522Z · LW(p) · GW(p)

@Ruby [LW · GW] @Raemon [LW · GW] @RobertM [LW · GW] I've had a post waiting to be approved for almost two weeks now (https://www.lesswrong.com/posts/gSfPk8ZPoHe2PJADv/can-quantised-autoencoders-find-and-interpret-circuits-in, username: charlieoneill). Is this normal? Cheers!

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-24T20:51:47.280Z · LW(p) · GW(p)

Huh, definitely not normal, and I don't remember seeing anything in the queue. It seems to be approved now.

comment by Anand Baburajan (anand-baburajan) · 2024-03-12T18:47:29.471Z · LW(p) · GW(p)

Hello! I'm building a tool with a one-of-a-kind UI for LessWrong-style deep, rational discussions. I've always loved how writing forces a deeper clarity of thinking and focuses on getting to the right answer. The tool is called CQ2. It has a sliding-panes design with quote-level threads. There's a concept of "posts" for more serious discussions with many people and "chat" for less serious ones, but both have a UI crafted for deep discussions. It's open source as well.

I simulated some LessWrong discussions there – they turned out to be more organised and easy to follow. However, it is a bit inconvenient – there's horizontal scrolling and one needs to click to open new threads. Since forums need to prioritize convenience, I think CQ2's design isn't good for LessWrong. But I think the inconvenience is worth it for such discussions at writing-first teams, since it helps hyper-focus on one thing at a time and avoid losing context.

If you have such discussions at work, I would love to learn about your team, your frustrations with existing communication tools, and better understand how CQ2 can help! I would appreciate any feedback or leads! I think my comment might come off as an ad, but I (and CQ2) strongly share LessWrong's "improving our reasoning and decision-making" core belief.

I found LessWrong a few months back. It's a wonderful platform and I particularly love the clean design.

comment by rpglover64 (alex-rozenshteyn) · 2024-03-02T20:50:48.181Z · LW(p) · GW(p)

@Habryka [LW · GW] @Raemon [LW · GW] I'm experiencing weird rendering behavior on Firefox on Android. Before voting, comments are sometimes rendered incorrectly in a way that gets fixed after I vote on them.

Is this a known issue?

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-02T23:50:04.865Z · LW(p) · GW(p)

I have not seen this! Could you post a screenshot?

Replies from: alex-rozenshteyn
comment by rpglover64 (alex-rozenshteyn) · 2024-03-03T16:32:15.136Z · LW(p) · GW(p)

before: [screenshot]

after: [screenshot]

Here the difference seems only to be spacing, but I've also seen bulleted lists appear. I think, but can't recall for sure, that I've seen something similar happen to top-level posts.

Replies from: habryka4
comment by habryka (habryka4) · 2024-03-04T02:22:11.279Z · LW(p) · GW(p)

Thank you! I will have someone look into this early next week, and hopefully fix it.

comment by PeterL (peter-loksa) · 2024-03-01T22:53:07.149Z · LW(p) · GW(p)

Hello, my name is Peter and recently I read Basics of Rationalist Discourse and iteratively checked/updated the current post based on the points stated in those basics:

I (possibly falsely) feel that moral (i.e. "what should be") theories should be reducible, because I see an analogy with the demand that "what is" theories be reducible due to Occam's razor. I admit that my feeling might be false (and I know an analogy might not be a sufficient reason), and I am ready to admit that it is. However, despite reading the whole of Mere Goodness from RAZ, I cannot remember any reasons (maybe they are there, and I don't blame the author (EY) if not), only many interesting statements (e.g. about becoming the pleasure brain center itself). And I remember there was a long dialog there that might have explained this to me, but I didn't comprehend its bottom line.
 

This post is not intended to have any conclusion more general than a report on my own state of mind, and if it comes across as implying more than that, I don't mean it to.

comment by Aech · 2024-02-19T22:20:48.674Z · LW(p) · GW(p)

Hello! First of many comments as I dive into the AI Alignment & Safety area to start contributing.  

Very new to this forum and to AI in general; about to start my AI Safety & Alignment course to get familiar. Many posts in this forum feel advanced to me, but I guess that's how it is at the beginning.

Replies from: habryka4
comment by habryka (habryka4) · 2024-02-20T01:36:53.534Z · LW(p) · GW(p)

Welcome! I hope you have a good time here, and if you run into any problems, feel free to ping the admin team on the Intercom chat in the bottom right corner.