Neil Warren's Shortform

post by Neil (neil-warren) · 2023-03-27T21:21:40.595Z · LW · GW · 23 comments


23 comments

Comments sorted by top scores.

comment by Neil (neil-warren) · 2024-04-02T22:26:32.442Z · LW(p) · GW(p)

A functionality I'd like to see on LessWrong: the ability to give quick feedback for a post in the same way you can react to comments (click for image). When you strong-upvote or strong-downvote a post, a little popup menu appears offering you some basic feedback options. The feedback is private and can only be seen by the author. 

I've often found myself drowning in downvotes or upvotes without knowing why. Karma is a one-dimensional measure, and writing public comments is a trivial inconvenience [LW · GW]: this is an attempt at a middle ground, and I expect it would make post reception clearer. 

See below my crude diagrams.

Replies from: Yoav Ravid, yanni, papetoast
comment by Yoav Ravid · 2024-04-03T06:22:29.004Z · LW(p) · GW(p)

I suggested something similar [LW(p) · GW(p)] a few months back as a requirement for casting strong votes.

comment by yanni kyriacos (yanni) · 2024-04-03T00:53:15.711Z · LW(p) · GW(p)

Strong upvote, but I won't tell you why.

comment by papetoast · 2024-04-06T02:32:25.429Z · LW(p) · GW(p)

Relatedly, in-line private feedback. I saw a really good design for alerting typos here.

Replies from: neil-warren
comment by Neil (neil-warren) · 2024-04-06T07:40:40.036Z · LW(p) · GW(p)

Yeah, that's an excellent idea. I often spot typos in posts, but refrain from writing a comment unless I collect like three. Thanks for sharing!

comment by Neil (neil-warren) · 2024-04-01T08:27:19.359Z · LW(p) · GW(p)

Bonus song in I have been a good Bing: "Claude's Anguish", a 3-minute death-metal song whose lyrics were written by Claude when prompted with "how does the AI feel?": https://app.suno.ai/song/40fb1218-18fa-434a-a708-1ce1e2051bc2/ (not for the faint of heart)

Replies from: jane-mccourt
comment by Heron (jane-mccourt) · 2024-04-01T19:18:09.962Z · LW(p) · GW(p)

I hate death metal. This is a great song!

comment by Neil (neil-warren) · 2024-04-17T22:55:43.881Z · LW(p) · GW(p)

FHI at Oxford
by Nick Bostrom (recently turned into song [LW · GW]):

the big creaky wheel
a thousand years to turn

thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent

yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit

like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes

and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue

comment by Neil (neil-warren) · 2024-04-01T07:53:02.066Z · LW(p) · GW(p)

I'm glad "thought that faster" is the slowest song of the album. Also where's the "Eliezer Yudkowsky" in the "ft. Eliezer Yudkowsky"? I didn't click on it just to see Eliezer's writing turned into song, I came to see Eliezer sing. Missed opportunity. 

comment by Neil (neil-warren) · 2024-04-25T07:56:24.691Z · LW(p) · GW(p)

Poetry and practicality

I was staring up at the moon a few days ago and thought about how deeply I loved my family, and wished to one day start my own (I'm just over 18 now). It was a nice moment.

Then I whipped out my laptop and felt compelled to get back to work: reading papers for my AI governance course, writing up LW posts, and trading emails with EA France. (These I believe to be my best shots at increasing everyone's odds of survival.)

It felt almost like sacrilege to wrench myself away from the moon and my wonder. Like I was ruining a moment of poetry and stillwatered peace by slamming against reality and its mundane things again.

But... The reason I wrenched myself away is directly downstream from the spirit that animated me in the first place. Whether I feel the poetry now that I felt then is irrelevant: it's still there, and its value and truth persist. Pulling away from the moon was evidence I cared about my musings enough to act on them.

The poetic is not a separate magisterium from the practical; rather, the practical is a particular facet of the poetic. Feeling "something to protect" in my bones naturally extends to acting it out. In other words, the poetry doesn't just stop. Feel no guilt in pulling away, because you're not really pulling away.

comment by Neil (neil-warren) · 2023-06-23T14:57:18.452Z · LW(p) · GW(p)

Is Superintelligence by Nick Bostrom outdated?

Quick question, because I don't have enough alignment knowledge to tell: is Superintelligence outdated? It was published nearly 10 years ago and a lot has happened since then. If it is outdated, a re-edition of the book might be wise, if only to make it more attractive. Because of the fast-moving nature of the field, I admit to not having read the book: the release date made me hesitate (I figured online resources would be more up-to-date). 

comment by Neil (neil-warren) · 2024-04-20T23:06:05.730Z · LW(p) · GW(p)

Can we have a black banner for the FHI? Not a person, still seems appropriate imo.

comment by Neil (neil-warren) · 2023-03-27T21:21:40.811Z · LW(p) · GW(p)

What is magic?

Presumably we call whatever we can't explain "magic" until we understand it, at which point it becomes simply part of the natural world. This is what many fantasy novels fail to account for: if we actually had magic, we wouldn't call it magic. Thousands of things in the modern world would easily meet the criteria for magic of a person living in the 13th century. 

So we do have magic; but why doesn't it feel like magic? I think the answer lies in how evenly distributed our magic is. Almost everyone in the world benefits from the magic that is electricity; it's so common that it isn't considered magic, any more than an eye or an opposable thumb is. In fantasy novels, by contrast, magic tends to be concentrated in a single caste of people. 

Point being: if everyone were a wizard, we wouldn't call ourselves wizards, because wizards are more magical than the average person by definition. 


Entropy dictates that everything will be more or less evenly distributed, and so the worlds of fantasy books are very unlikely to appear in our universe. Magic as I've loosely defined it here does not exist, and is freakishly unlikely to ever exist. We can dream, though.  

Replies from: Richard_Kennaway, sharmake-farah
comment by Richard_Kennaway · 2023-03-28T16:16:42.683Z · LW(p) · GW(p)

Related:

"If You Demand Magic, Magic Won't Help" [LW · GW]

"Excluding the Supernatural" [LW · GW]

"Joy in the Merely Real" [LW · GW]

"Mundane Magic" [LW · GW]

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-03-28T21:19:32.372Z · LW(p) · GW(p)

Eliezer Yudkowsky is kind of a god around here, isn't he? 

Would you happen to know what percentage of total upvotes on this website are attributed to his posts? It's impressive how many sheer good ideas, written in clear form, he had to come up with to reach that level. Cool and everything, but isn't it ultimately proof that LessWrong is still in its fledgling stage (which it may never leave), as it depends so much on the ideas of its founder? I'm not sure how one goes about this, but expanding the LessWrong repertoire in a consequential way seems like a good next step. Perhaps that includes changing the posts in the Library... I don't know. 

Anyhow thanks for this comment, it was great reading!

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2023-03-28T21:28:13.976Z · LW(p) · GW(p)

Eliezer Yudkowsky is kind of a god around here, isn't he?

The Creator God, in fact. LessWrong was founded by him.

All of the Sequences [? · GW] are worth reading.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-03-28T21:36:44.871Z · LW(p) · GW(p)

Right, but if LessWrong is to become larger, it might be a good idea to stop leaving his posts as the default (the Library, the ones being recommended in the front page, etc.) I don't doubt that his writing is worth reading and I'll get to it, I'm just offering an outsider's view on this whole situation, which seems a little stagnant to me in a way. 

That last reply of mine, a reply to a reply to a Shortform post I made, can be found after just a little scrolling on the main page of LessWrong. I should be a nobody to the algorithm, yet I'm not. My only point is that LessWrong seems big because it has a lot of posts, but it isn't growing as much as it should be. That may be because the site is too focused on a single set of ideas, which shoos some people away. I think it's far from being an echo chamber, but it's not as lively as I would think it should be.

As I've noted, though, I'm a humble outsider and have no idea what I'm talking about. I'm only writing this because outsider advice is often valuable, as there's no chance of it being trapped in echo thinking. 

comment by Noosphere89 (sharmake-farah) · 2023-03-27T21:59:47.969Z · LW(p) · GW(p)

I think there is another reason it doesn't feel like magic, and to find it we have to look at the element that changed the least: the human body and brain weren't affected by the industrial revolution, and humans are the most important part of any societal shift.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-03-27T22:15:33.794Z · LW(p) · GW(p)

What do you mean? What I read is: magic is subjective, and since the human brain hasn't changed in 200,000 years nothing will ever feel like magic. I'm not sure that's what you meant though, could you explain? 

Replies from: sharmake-farah
comment by Noosphere89 (sharmake-farah) · 2023-03-27T22:57:40.137Z · LW(p) · GW(p)

I'll admit, I didn't actually think all that well here.

Replies from: neil-warren
comment by Neil (neil-warren) · 2023-03-28T12:02:02.917Z · LW(p) · GW(p)

I'm still new to this, but I can say I love a culture where there is a button for retracting statements without deleting them. I will most likely have to use it a lot as I progress around here.

comment by Neil (neil-warren) · 2024-05-10T18:10:23.820Z · LW(p) · GW(p)

I'm working on a non-trivial.org project meant to assess the risk of genome sequences by comparing them to a public list of the most dangerous pathogens we know of. This would be used to assess the risk from both experimental results in e.g. BSL-4 labs and the output of e.g. protein folding models. The benchmarking would be carried out by an in-house ML model of ours. Two questions to LessWrong: 

1. Is there any other project of this kind out there? Do BSL-4 labs/AlphaFold already have models for this? 

2. "Training a model on the most dangerous pathogens in existence" sounds like an idea that could backfire horribly. Can it backfire horribly? 
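For concreteness, here is a toy sketch of the kind of screening described above. It is purely illustrative: it uses a naive k-mer overlap (Jaccard) score against a hypothetical in-memory pathogen list, whereas a real biosecurity screening pipeline would use alignment tools and curated databases, and all names here are made up.

```python
def kmers(seq: str, k: int = 8) -> set[str]:
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(query: str, reference: str, k: int = 8) -> float:
    """Jaccard similarity between the k-mer sets of two sequences."""
    q, r = kmers(query, k), kmers(reference, k)
    return len(q & r) / len(q | r) if q | r else 0.0

def flag_risky(query: str, pathogen_db: dict[str, str],
               threshold: float = 0.5) -> list[str]:
    """Return the names of reference pathogens whose k-mer similarity
    to the query exceeds the threshold (toy screening step)."""
    return [name for name, ref in pathogen_db.items()
            if similarity(query, ref) >= threshold]

# Toy usage with a made-up reference entry:
db = {"toy_pathogen": "ACGTACGTACGT"}
print(flag_risky("ACGTACGTACGT", db))  # flags the identical sequence
```

Even this toy version hints at the dual-use worry in question 2: the screening database itself is a ranked list of what to worry about, so access control around it matters as much as the model.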

comment by Neil (neil-warren) · 2023-04-03T19:52:33.129Z · LW(p) · GW(p)

We can't negotiate with something smarter than us 

Superintelligence will outsmart us or it isn't superintelligence. As such, the kind of AI that would truly pose a threat to us is also an AI we cannot negotiate with.

No matter what arguments we make [LW · GW], superintelligence will have figured them out first. We're like ants trying to appeal to a human, and the human can understand pheromones but we can't understand human language. It's entirely up to the human and its own arguments whether we get squashed or not. 

Worth reminding yourself of this from time to time, even if it's obvious. 

Counterpoints: 

  1. It may not take a true superintelligence to kill us all, meaning we could perhaps negotiate with a pre-AGI machine
  2. The "we cannot negotiate" part does not take into account the fact that we are the Simulators and thus technically have ultimate power over it