Posts

Consequentialism is a compass, not a judge 2024-04-13T10:47:44.980Z
Privacy and writing 2024-04-06T08:20:07.162Z
How does it feel to switch from earn-to-give? 2024-03-31T16:27:22.860Z
Politics are not serious by default 2024-03-28T23:36:32.279Z
Evolution is an observation, not a process 2024-02-06T14:49:31.021Z
You can rack up massive amounts of data quickly by asking questions to all your friends 2024-01-21T01:27:21.701Z
Detachment vs attachment [AI risk and mental health] 2024-01-15T00:41:04.935Z
AI as a natural disaster 2024-01-10T00:42:39.616Z
The Sequences on YouTube 2024-01-07T01:44:39.663Z
Taboo "procrastination" 2023-12-12T21:33:08.700Z
Liv Boeree Ted Talk Moloch & AI 2023-11-10T14:04:58.742Z
The smallest possible button (or: moth traps!) 2023-09-02T15:24:20.453Z
Some rules for life (v.0,0) 2023-08-17T00:43:57.913Z
You don't get to have cool flaws 2023-07-28T05:37:31.414Z
Puffer-pope reality check 2023-07-05T09:27:11.200Z
"Natural is better" is a valuable heuristic 2023-06-20T22:25:49.841Z
Cartography, blowing one's mind, the illusion of separation and other general musings 2023-06-16T19:19:00.725Z
If you are too stressed, walk away from the front lines 2023-06-12T14:26:44.030Z
Superintelligence will outsmart us or it isn't superintelligence 2023-04-03T15:01:00.900Z
Neil Warren's Shortform 2023-03-27T21:21:40.595Z

Comments

Comment by neil-warren on [deleted post] 2024-04-26T15:57:04.616Z

I'm taking this post down; it existed to set up an archive.org link as requested by Bostrom, and it no longer serves that purpose. Sorry, this was meant to be discreet.

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-25T07:56:24.691Z · LW · GW

Poetry and practicality

I was staring up at the moon a few days ago and thought about how deeply I loved my family, and wished to one day start my own (I'm just over 18 now). It was a nice moment.

Then, I whipped out my laptop and felt constrained to get back to work; i.e. read papers for my AI governance course, write up LW posts, and trade emails with EA France. (These I believe to be my best shots at increasing everyone's odds of survival).

It felt almost like sacrilege to wrench myself away from the moon and my wonder. Like I was ruining a moment of poetry and stillwatered peace by slamming against reality and its mundane things again.

But... The reason I wrenched myself away is directly downstream from the spirit that animated me in the first place. Whether I feel the poetry now that I felt then is irrelevant: it's still there, and its value and truth persist. Pulling away from the moon was evidence I cared about my musings enough to act on them.

The poetic is not a separate magisterium from the practical; rather, the practical is a particular facet of the poetic. Feeling "something to protect" in my bones naturally extends to acting it out. In other words, poetry doesn't just stop. Feel no guilt in pulling away, because you're not really pulling away.

Comment by Neil (neil-warren) on "You're the most beautiful girl in the world" and Wittgensteinian Language Games · 2024-04-21T12:27:03.113Z · LW · GW

Too obvious imo, though I didn't downvote. This also might not be an actual rationalist failure mode; in my experience at least, rationalists have about the same intuition as everyone else about when something should be taken literally or not.

As for why the comment section has gone berserk, no idea, but it's hilarious and we can all use some fun.

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-20T23:06:05.730Z · LW · GW

Can we have a black banner for the FHI? It's not a person, but it still seems appropriate imo.

Comment by Neil (neil-warren) on "You're the most beautiful girl in the world" and Wittgensteinian Language Games · 2024-04-20T19:52:29.738Z · LW · GW

See also Alicorn's Expressive Vocabulary

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-17T22:55:43.881Z · LW · GW

FHI at Oxford
by Nick Bostrom (recently turned into song):

the big creaky wheel
a thousand years to turn

thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent

yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit

like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes

and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue

Comment by Neil (neil-warren) on Consequentialism is a compass, not a judge · 2024-04-13T18:55:56.648Z · LW · GW

I've come to think that isn't actually the case. E.g. while I disagree with Being nicer than clippy, it quite precisely nails how consequentialism isn't essentially flawless:

I haven't read that post, but I broadly agree with the excerpt. On green did a good job imo in showing how weirdly imprecise optimal human values are. 

It's true that when you stare at something with enough focus, it often loses that bit of "sacredness" which I attribute to green. As in, you might zoom in enough on the human emotion of love and discover that it's just an endless tiling of Schrödinger's equation. 

If we discover one day that "human values" are eg 23.6% love, 15.21% adventure and 3% embezzling funds for yachts, and decide to tile the universe in exactly those proportions...[1] I don't know, my gut doesn't like it. Somehow, breaking it all into numbers turns humans into sock puppets reflecting the 23.6% like mindless drones. 

The target "human values" seems to be incredibly small, which I guess encapsulates the entire alignment problem. So I can see how you could easily build an intuition from this along the lines of "optimizing maximally for any particular thing always goes horribly wrong". But I'm not sure that's correct or useful. Human values are clearly complicated, but so long as we haven't hit a wall in deciphering them, I wouldn't put my hands up in the air and act as if they're indecipherable. 

Unbounded utility maximization aspires to optimize the entire world. This is pretty funky for just about any optimization criterion people can come up with, even if people are perfectly flawless in how well they follow it. There's a bunch of attempts to patch this, but none have really worked so far, and it doesn't seem like any will ever work.

I'm going to read your post and see the alternative you suggest. 

  1. ^

    Sounds like a Douglas Adams plot

Comment by Neil (neil-warren) on Consequentialism is a compass, not a judge · 2024-04-13T13:40:57.279Z · LW · GW

Interesting! Seems like you put a lot of effort into that 9,000-word post. May I suggest publishing it in little chunks instead of one giant post? You only got 3 karma for it, so I assume those who started reading it didn't find it worth the effort to finish. The problem is, that's not useful feedback for you, because you don't know which of those 9,000 words readers actually took issue with. If I were building a version of utilitarianism, I would publish it in little bursts of 2-minute posts. You could do that right now with a single section of your original post. Clearly you have tons of ideas. Good luck! 

Comment by Neil (neil-warren) on Consequentialism is a compass, not a judge · 2024-04-13T13:30:58.755Z · LW · GW

You know, I considered "Bob embezzled the funds to buy malaria nets" because I KNEW someone in the comments would complain about the orphanage. Please don't change. 

Actually, the orphanage being a cached thought is precisely why I used it. The writer-pov lesson that comes with "don't fight the hypothetical" is "don't make your hypothetical needlessly distracting". But maybe I miscalculated and malaria nets would be less distracting to LWers. 

Anyway, I'm of course not endorsing fund-embezzling, and I think Bob is stupid. You're right in that failure modes associated with Bob's ambitions (eg human extinction) might be a lot worse than those of your typical fund-embezzler (eg the opportunity cost of buying yachts). I imagined Bob as being kind-hearted and stupid, but in your mind he might be some cold-blooded brooding "the price must be paid" type consequentialist. I didn't give details either way, so that's fair. 

If you go around saying "the ends justify the means" you're likely to make major mistakes, just like if you walk around saying "lying is okay sometimes". The true lesson here is "don't trust your own calculations, so don't try being clever and blowing up TSMC", not "consequentialism has inherent failure modes". The ideal of consequentialism is essentially flawless; it's when you hand it to sex-obsessed murder monkeys as an excuse to do things that shit hits the fan.

In my mind then, Bob was a good guy running on flawed hardware. Eliezer calls patching your consequentialism by making it bounded "consequentialism, one meta-level up". For him, refusing to embezzle funds for a good cause because the plan could obviously turn sour is just another form of consequentialism. It's like belief in intelligence, but flipped; you don't know exactly how it'll go wrong, but there's a good chance you're unfathomably stupid and you'll make everything worse by acting on "the ends justify the means". 

From a practical standpoint though, we both agree and nothing changes: both the cold-hearted Bob and the kind Bob must be stopped. (And both are indeed more likely to make ethically dubious decisions because "the ends justify the means".) 

Post-scriptum:

Honestly the one who embezzles funds for unbounded consequentialist purposes sounds much more intellectually interesting

Yeah, this kind of story makes for good movies. When I wrote Bob I was thinking of The Wonderful Story of Henry Sugar, by Roald Dahl, adapted by Wes Anderson for Netflix. It's at least vaguely EA-spirited, and is kind of in that line (although the story is wholesome, as the name indicates, and isn't meant to warn against the dangers of boundless consequentialism at all).[1]

 

  1. ^

    Let's wait for the SBF movie on that one

Comment by Neil (neil-warren) on Martín Soto's Shortform · 2024-04-12T09:31:19.836Z · LW · GW

Link is broken

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-04-08T15:34:53.872Z · LW · GW

Re: sociology. I found a meme you might enjoy, which would certainly drive your teacher through the roof: https://twitter.com/captgouda24/status/1777013044976980114 

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-06T07:40:40.036Z · LW · GW

Yeah, that's an excellent idea. I often spot typos in posts, but refrain from writing a comment unless I collect like three. Thanks for sharing!

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-02T22:26:32.442Z · LW · GW

A functionality I'd like to see on LessWrong: the ability to give quick feedback for a post in the same way you can react to comments (click for image). When you strong-upvote or strong-downvote a post, a little popup menu appears offering you some basic feedback options. The feedback is private and can only be seen by the author. 

I've often found myself drowning in downvotes or upvotes without knowing why. Karma is a one-dimensional measure, and writing public comments is a trivial inconvenience: this is an attempt at a middle ground, and I expect it would make post reception clearer. 

See below my crude diagrams.
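To make the idea concrete, here is a minimal TypeScript sketch of what a reader's click in that popup might produce. This is purely illustrative; none of these type or function names come from the actual LessWrong codebase.

```typescript
// Hypothetical sketch of the private post-feedback feature described above.
// All names here are made up for illustration.

type StrongVote = "strong-upvote" | "strong-downvote";

// Basic feedback options the popup could offer, mirroring comment reacts.
type FeedbackOption =
  | "unclear"
  | "too long"
  | "changed my mind"
  | "well written"
  | "seems wrong";

interface PrivateFeedback {
  postId: string;
  vote: StrongVote;
  option: FeedbackOption;
  // Only the post's author ever sees this record.
  visibleTo: "author-only";
}

// Called when a reader picks an option from the popup that appears
// after a strong vote; the vote itself stays anonymous as usual.
function submitPrivateFeedback(
  postId: string,
  vote: StrongVote,
  option: FeedbackOption
): PrivateFeedback {
  const feedback: PrivateFeedback = { postId, vote, option, visibleTo: "author-only" };
  // A real implementation would send this to the server;
  // here we just return the record.
  return feedback;
}

// Example: a reader strong-downvotes a post and flags it as unclear.
console.log(submitPrivateFeedback("post-123", "strong-downvote", "unclear"));
```

The point is just that the record is tied to the vote and marked author-only, so the author gets signal without the reader paying the cost of a public comment.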

Comment by Neil (neil-warren) on NickH's Shortform · 2024-04-02T18:04:16.478Z · LW · GW

I'm not clear on what you're calling the "problem of superhuman AI"?

Comment by neil-warren on [deleted post] 2024-04-01T17:25:34.126Z

I was given clear instructions from a math phd about how to dump random lean files into the repository I created to confuse lesswrongers for at least a few minutes. But then I got confused while attempting to follow the instructions. There’s only so much my circuits can handle. I’m running most of my code on a Chromebook! Fear me.

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-01T08:27:19.359Z · LW · GW

Bonus song in I have been a good Bing: "Claude's Anguish", a 3-minute death-metal song whose lyrics were written by Claude when prompted with "how does the AI feel?": https://app.suno.ai/song/40fb1218-18fa-434a-a708-1ce1e2051bc2/ (not for the faint of heart)

Comment by Neil (neil-warren) on Neil Warren's Shortform · 2024-04-01T07:53:02.066Z · LW · GW

I'm glad "thought that faster" is the slowest song of the album. Also where's the "Eliezer Yudkowsky" in the "ft. Eliezer Yudkowsky"? I didn't click on it just to see Eliezer's writing turned into song, I came to see Eliezer sing. Missed opportunity. 

Comment by Neil (neil-warren) on Apply to be a Safety Engineer at Lockheed Martin! · 2024-03-31T22:16:49.742Z · LW · GW

I'm not convinced. I felt the training video was incomplete, and the deadline too short.

Comment by Neil (neil-warren) on Will no one rid me of this turbulent pest? · 2024-03-31T16:45:30.904Z · LW · GW

"Debug" the solution

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-03-31T14:36:16.427Z · LW · GW

I think that's fair. Public transport is a lot more important in France than in the US, for example, and is usually the first casualty in political upheavals. During the retirement-age debacle a few months ago, railway and bus operators (along with other public services like garbage collection and school administration) went on mass strike. It's easier to take big, daring political action here than in the US, where eg cars are the default mode of transport. 

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-03-30T09:38:50.222Z · LW · GW

This is all great news

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-03-29T23:39:35.935Z · LW · GW

Certainly I would expect people to grow up relatively normal, even in a crazy climate. What I see for religion, I expect to see here. Beyond the natural "immunity" I think my peers will develop over time, I imagine that whatever revolutionary fervor they get from youth will fade as well. My communist friend is going to be a high school philosophy teacher soon enough; by then his "glorious revolution" won't stretch much further than a few academic dissertations (read by literally no one). 

That story with the sociology teacher is certainly crazy. I think I've learned the relevant lesson though: avoid anything with "sociology" written on it like the plague. You may correct me, but it seems like a generally icky and imprecise discipline, built on a mountain of rationalization to the point that teachers have to explode into desperate fits in a hopeless attempt to recover some semblance of a connection to reality. 

Parcoursup admissions close in a few days, and I've applied to Sciences Po as well. If I get in, I plan to start a rationality association as well as an existential risk one. However chaotic and facepalmingly, pointlessly political the campus might be, I hear the associations are great, so hopefully that will work out all right. 

I've already started working on the project: tinyurl.com/biais-cognitifs

Comment by Neil (neil-warren) on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-03-29T22:52:48.775Z · LW · GW

More of... whatever this is on LessWrong, please! Great humor! Imma go open sheets now and optimally estimate turtle weights (as one does on a good Friday night). 

Edit: hot damn, you've got a whole sequence of this stuff!

Comment by Neil (neil-warren) on yanni's Shortform · 2024-03-29T13:55:30.606Z · LW · GW

They took it down real quick for some reason.

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-03-29T11:25:52.091Z · LW · GW

Concept creep is a bastard. >:(

Comment by Neil (neil-warren) on yanni's Shortform · 2024-03-28T23:49:08.070Z · LW · GW

This reminds me of when Charlie Munger died at 99, and many said of him "he was just a child". Less of a nod to transhumanist aspirations, and more to how he retained his sparkling energy and curiosity up until death. There are quite a few good reasons to write "dead far too young". 

Comment by Neil (neil-warren) on Politics are not serious by default · 2024-03-28T23:45:23.795Z · LW · GW

More French stories: So, at some point, the French decided what kind of political climate they wanted. What actions would reflect on their cause well? Dumping manure onto the city center using tractors? Sure! Lining up a hundred stationary taxi cabs in every main artery of the city? You bet! What about burning down the city hall's door, which is a work of art older than the United States? Mais évidemment!

"Politics" evokes all that in the mind of your average Frenchman. No, not sensible strategies that get your goals done, but the first shiny thing the protesters thought about. It'd be more entertaining to me, except for the fact that I had to skip class at some point because I accidentally biked headfirst into a burgeoning cloud of tear gas (which the cops had detonated in an attempt to ward off the tractors). There are flagpoles in front of the government building those tractors dumped the manure on. They weren't entirely clean, and you can still see the manure level, about 10 meters high. 

Comment by Neil (neil-warren) on Voting Results for the 2022 Review · 2024-02-29T11:00:00.412Z · LW · GW

The new designs are cool, I'd just be worried about venturing too far into insight porn. You don't want people reading the posts just because they like how they look (although reading them superficially is probably better than not reading them at all). Clicking on the posts and seeing a giant image that bleeds color into the otherwise sober text format is distracting. 

I guess if I don't like it there's always GreaterWrong.

Comment by Neil (neil-warren) on Evolution is an observation, not a process · 2024-02-08T22:24:36.000Z · LW · GW

Yeah I think I'm wrong about this. Thanks to all of you commenters for feedback. I'm updating.

Comment by Neil (neil-warren) on Evolution is an observation, not a process · 2024-02-07T19:21:09.639Z · LW · GW

Got it. Thank you.

Comment by Neil (neil-warren) on Evolution is an observation, not a process · 2024-02-07T17:41:44.258Z · LW · GW

What specifically? I don't need a long explanation (you can get on with your life), just a pointer.

Comment by Neil (neil-warren) on Evolution is an observation, not a process · 2024-02-06T19:22:39.000Z · LW · GW

"Process" and "wants to" are in the map, not the territory. I don't think anyone needs any justification for pointing that discrepancy out. Even if "process" and "wants to" are useful heuristics, I would not be miffed if LW posts resurfaced from time to time to remind everyone that we are not living in the territory here. I explain this in more detail in my response to Razied's comment. 

Comment by Neil (neil-warren) on Evolution is an observation, not a process · 2024-02-06T19:16:50.222Z · LW · GW

Fair enough. I certainly didn't try to mince words. My goal was to violently shave off any idea of "agency" my friend was giving to evolution. He was walking around satisfied with his explanation that evolution selects for the fittest and is therefore optimizing for the fittest.[1]  The point of the dialogue format was to point out that you can call it an optimization process, but when you taboo that word you figure out it's hard to pinpoint exactly what is being optimized for. If you're going to call something an optimization process, you'd better tell me exactly what is being optimized for. If you can't, you are probably using that word as a curiosity stopper or something. 

I think we'll be able to pinpoint what evolution optimizes for, someday. [2] Gravity as a force optimizes for the creation of stars: enough so that loose clouds of hydrogen are pretty much guaranteed to form stars. You could say "gravity optimizes for the creation of stars from hydrogen clouds" and anticipate experience with seamless accuracy. Evolution is like this except it's so much more complex that in order to explain it as an optimization process you'll have to resort to the dreaded word "emergence". 

I think there's also something to be said for reminding people from time to time that "optimization pressure" and "emergence" are in the map, not the territory; the territory is a different beast. I think you could reasonably take on the "true" way of seeing things for an hour or two after reading this post, and then go back to your business believing in the heuristic that evolution is an optimization process (once you've finished with your partial transfiguration). 

  1. ^

    Note the verb "optimized", which implies that something active is going on.

  2. ^

In fact, most of the work has probably been done by Dawkins and others, and there's a mountain of math out there that explains exactly what evolution is optimizing for. If that's the case, I definitely want to understand it someday, and I find all of this very exciting. But neither I nor my friend is in a position to explain what evolution is optimizing toward, at least in a way that would let us accurately anticipate experience.

Comment by Neil (neil-warren) on You can rack up massive amounts of data quickly by asking questions to all your friends · 2024-01-21T15:35:26.071Z · LW · GW

There are versions of the thought experiment where if Omega predicts you will choose to use a randomizer, it won't put the money in box B. But in just the default experiment, this seems like an entertaining outcome!

Comment by Neil (neil-warren) on Detachment vs attachment [AI risk and mental health] · 2024-01-16T07:07:22.354Z · LW · GW

Haha, well I changed the title and it has 10 more karma than it had yesterday, so there was something up with the title. Thank you again! 

Comment by Neil (neil-warren) on Detachment vs attachment [AI risk and mental health] · 2024-01-15T20:42:32.946Z · LW · GW

Yeah I was wondering what made this post fail and an unclear name might be part of it. Thanks for the feedback! 

Comment by Neil (neil-warren) on Taboo "procrastination" · 2024-01-15T16:18:08.461Z · LW · GW

These are all sub-types of procrastination. In my experience, thinking about this as "procrastination" is less helpful than ignoring that word entirely and finding the specific reason why I'm procrastinating instead. I'm not trying to redefine procrastination, only saying that you may want to taboo it. 

Comment by Neil (neil-warren) on Gentleness and the artificial Other · 2024-01-09T23:04:08.415Z · LW · GW

Have you read Children of Time, by Adrian Tchaikovsky? It's a beautiful and relatively recent science fiction book, winner of the Arthur C. Clarke Award. It approaches the themes of other beings, artificial consciousness, emergent consciousness, empathy, and quite a few other things. That description doesn't entirely cut it, but to me the book seems to speak directly to your post.

Without spoiling too much: it follows an event in which engineered retroviruses, designed to make apes intelligent and hurled onto a newly terraformed planet by human colonists, accidentally make the Portia genus of jumping spider more intelligent instead. The book launches into imaginative and precise evolutionary worldbuilding, tracing the rise of an entire civilisation swarming with what are to us alien minds. (Portia are more on the level of octopuses than bears as far as otherness is concerned.)

Portia are a type of jumping spider native to rainforests in the East Indies. Despite having only 200,000 neurons or so, they are considered some of the most intelligent critters at their scale. They sustain themselves entirely off eating other types of spiders, spending hours calculating the best angle of attack and silently crawling around the enemy web before striking all at once. They seem to be capable of thinking up plans, and are masters of spatial reasoning (they non-coincidentally have particularly good eyes for their size).

The word "arachnid" might send a shiver down your spine right now, but by the end of this book, I swear, your arachnophobia will be cured (perhaps not at the instinctual level, but at the intellectual do-I-like-spiders-and-think-they-are-cool level). What would you expect a civilisation of intelligent Portia to be like? What threats do you think they would face? Well, jot down your predictions and go find out! https://www.amazon.com/Children-Time-Adrian-Tchaikovsky/dp/1447273303 

Comment by Neil (neil-warren) on The Sequences on YouTube · 2024-01-07T14:16:32.587Z · LW · GW

Yep, "floating head" videos as they're called are actually quite popular. They are also by far the easiest way to start a channel and starting a channel is better than not having one at all. 

I would also like to add animations, diagrams and visualizations, and find some other ways to make content much more engaging. Up until now most of my efforts were directed toward having the courage to publish this in the first place. Now, I can move on to actually getting down to the logistics! 

Comment by Neil (neil-warren) on Trading off Lives · 2024-01-03T09:21:31.271Z · LW · GW

Related: https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks 

Comment by Neil (neil-warren) on 2023 Unofficial LessWrong Census/Survey · 2023-12-30T23:30:33.205Z · LW · GW

I have completed the survey! Woohoo!

Comment by Neil (neil-warren) on Explain/Worship/Ignore? · 2023-12-28T16:54:12.434Z · LW · GW

This is so well written it's insane. Crisp, clear, and crazily simple, despite being foundational. I have lots to learn.

Comment by Neil (neil-warren) on Changing main content font to Valkyrie? · 2023-12-15T14:23:27.854Z · LW · GW

(Probably not interesting to 90% of people, but would love to get input from our local typography nerds)

 

I like how this implies ~10% of LW can be considered typography nerds. 

Comment by Neil (neil-warren) on Taboo "procrastination" · 2023-12-13T12:12:25.747Z · LW · GW

That's a good one to add, yes, thanks.

Comment by Neil (neil-warren) on Liv Boeree Ted Talk Moloch & AI · 2023-11-10T14:59:06.762Z · LW · GW

Yep, boy, thanks for that!

Comment by Neil (neil-warren) on Some rules for life (v.0,0) · 2023-10-31T18:35:30.612Z · LW · GW

It's a really, really useful exercise to lay out whatever you consider wisdom. You should try publishing yours! I published my own horridly cringeworthy list (apparently I've changed in two months) for feedback!

Comment by Neil (neil-warren) on Some rules for life (v.0,0) · 2023-10-31T13:44:44.938Z · LW · GW

I'm doomed already. I've narrowly escaped a close encounter after reading a few chapters from at least one of them. I may be cornered, but I shan't admit defeat. I've used a site blocker (haha!). This is what an actually pessimistic containment strategy looks like. Check for the time being, minions of wildbow!

Comment by Neil (neil-warren) on Some rules for life (v.0,0) · 2023-10-31T13:17:37.689Z · LW · GW

[Update!] I have now finished Worm. I'm kind of just really relieved that my ambient thought is no longer obsessed with finding new ways to munchkin superpowers. I'm free! 

Given that Ward is even longer than Worm, I'm going to wait a while before I let myself fall back into obsession. 

Comment by Neil (neil-warren) on The AI apocalypse myth. · 2023-09-08T18:38:15.521Z · LW · GW

This is probably the best short resource to read to understand, in your gut, the concept of "something literally thousands of times smarter than us" (it also exists as a great RA video). Unfortunately, stopping an AGI--a true AGI, once we get there--is a little more difficult than throwing a bucket of water onto the servers. That would be hugely underestimating the sheer power of being able to think better. 

I wrote a post recently on how horrifyingly effective moth traps are. Thanks to the power of intelligence, humans are able to find the smallest possible button in reality that they need to press to achieve a given goal. AGI would do this, only much, much better. Moth traps leverage an entire dimension beyond what a moth can understand; the same could be said of AGI. This is something that I, at least, have found difficult to internalize. You cannot, by definition, model something smarter than you. So, to understand AGI's danger, you must resort to finding things stupider than you, and try to backpedal from there to figure out what being more intelligent lets you do. 

I hope this comment helped you understand why your post currently has negative karma. Don't be discouraged though! Ideas on LW get ruthlessly pummeled, pilloried, and thoroughly chewed upon. But we separate ideas from people, so don't let the results from this post discourage you at all! Please, find some ideas worth writing about and publish! Hope you have a great day. 

Comment by Neil (neil-warren) on Find Hot French Food Near Me: A Follow-up · 2023-09-06T16:12:58.066Z · LW · GW

I'm French. Pétard is a very minor swear word, on par with "great Scott!" 

It's not meant as an insult at all. The most common French swear word is probably "putain" (used like "fuck" is), and pétard is used as an attenuated version (like saying "fudge"). 

(As a Frenchman, I also admit to the existence of a writhing snake inside my gut telling me to downvote this heretical post which dares! compare French cuisine with German cuisine. Luckily, I have learned enough rationality to override my primal instincts.)