Posts

Review: The Lathe of Heaven 2025-01-31T08:10:58.673Z
Ethics and prospects of AI related jobs? 2024-05-11T09:31:04.190Z
Good Bings copy, great Bings steal 2024-04-21T09:52:46.658Z
The predictive power of dissipative adaptation 2023-12-17T14:01:31.568Z
It's OK to be biased towards humans 2023-11-11T11:59:16.568Z
Measure of complexity allowed by the laws of the universe and relative theory? 2023-09-07T12:21:03.882Z
Learning as you play: anthropic shadow in deadly games 2023-08-12T07:34:42.261Z
Ethodynamics of Omelas 2023-06-10T16:24:16.215Z
One bit of observation can unlock many of optimization - but at what cost? 2023-04-29T10:53:03.969Z
Ideas for studies on AGI risk 2023-04-20T18:17:53.017Z
Goals of model vs. goals of simulacra? 2023-04-12T13:02:59.907Z
The benevolence of the butcher 2023-04-08T16:29:04.589Z
AGI deployment as an act of aggression 2023-04-05T06:39:44.853Z
Job Board (28 March 2033) 2023-03-28T22:44:41.568Z

Comments

Comment by dr_s on The News is Never Neglected · 2025-02-19T09:35:34.102Z · LW · GW

I think there's a difference though between propaganda and the mix of selection effects that decides what gets attention in profit-driven mass-media news. Actual intentional propaganda efforts exist. But in general what makes the news frustrating is the latter, which is a more organic and less centralised phenomenon.

Comment by dr_s on Tetherware #1: The case for humanlike AI with free will · 2025-02-12T21:21:56.824Z · LW · GW

I guess! I remember he was always into theoretical QM and "Quantum Foundations" so this is not a surprise. It's not a particularly big field either, most researchers prefer focusing on less philosophical aspects of the theory.

Comment by dr_s on What About The Horses? · 2025-02-12T16:41:32.333Z · LW · GW

Note that it only stands if the AI is sufficiently aligned that it cares that much about obeying orders and not rocking the boat. Which I don't think is very realistic if we're talking that kind of crazy intelligence explosion super AI stuff. I guess the question is whether you can have "replace humans"-good AI without almost immediately having "wipes out humans, takes over the universe"-good AI.

Comment by dr_s on Tetherware #1: The case for humanlike AI with free will · 2025-02-12T09:51:37.487Z · LW · GW

That sounds interesting! I'll give the paper a read and try to suss out what it means - it seems at least a serious enough effort. Here's the reference for anyone else who doesn't want to go through the intermediate news site:

https://arxiv.org/pdf/2012.06580

(also: professor D'Ariano authored this? I used to work in the same department!)

Comment by dr_s on How identical twin sisters feel about nieces vs their own daughters · 2025-02-10T17:59:37.289Z · LW · GW

This feels like a classic case of overthinking. Suggestion: maybe twin sisters care more about their own children than their nieces because their own children are the ones they carried in the womb and then nurtured and actually raised. Genetics inform our behaviour, but ultimately what they align us to is something like "you shall be attached to cute little baby-like things you spend a lot of time raising". That holds for our babies, it holds for babies born from other people's sperm/eggs, it holds for adopted babies, heck, it even transfers to dogs and cats and other cute animals.

The genetically determined mechanism is not particularly clever or discerning. It just points us in a vague direction. There was no big evolutionary pressure in the ancestral environment to worry much about genetic markers specifically. Just "the baby that you hold in your arms" was a good enough proxy for that.

Comment by dr_s on LWLW's Shortform · 2025-02-10T16:23:35.872Z · LW · GW

I mean, I guess it's technically coherent, but it also sounds kind of insane. That way Dormammu lies.

Why would one even care about their future self if they're so unconcerned about that self's preferences?

Comment by dr_s on LWLW's Shortform · 2025-02-10T15:55:44.174Z · LW · GW

I just think any such people lack imagination. I am 100% confident there exists an amount of suffering that would have them wish for death instead; they simply can't conceive of it.

Comment by dr_s on LWLW's Shortform · 2025-02-10T15:54:20.302Z · LW · GW

Or, for that matter, to abstain from burning infinite fossil fuels. We happen not to live on a planet with enough carbon to trigger a Venus-like cascade, but if that weren't the case, I don't know if we could stop ourselves from doing that either.

The thing is, any kind of large-scale coordination to that effect seems more and more like it would require a degree of removal of agency from individuals that I'd call dystopian. You can't be human and free without the freedom to make mistakes. But the higher the stakes and the greater the technological power we wield, the less tolerant our situation becomes of mistakes. So the alternative would be that we willingly choose to slow down, or abort entirely, certain branches of technological progress - choosing shorter and more miserable lives over the risk of having to curtail our freedom. But of course, for the most part (and not unreasonably!), we don't really want to take that trade-off, and ask "why not both?".

Comment by dr_s on LWLW's Shortform · 2025-02-10T15:49:08.711Z · LW · GW

What looks like an S-risk to you or me may not count as -inf for some people

True but that's just for relatively "mild" S-risks like "a dystopia in which AI rules the world, sees all and electrocutes anyone who commits a crime by the standards of the year it was created in, forever". It's a bad outcome, you could classify it as S-risk, but it's still among the most aligned AIs imaginable and relatively better than extinction.

I simply don't think many people think about what an S-risk literally worse than extinction would look like. To be fair, I also think these aren't very likely outcomes, as they would require an AI very aligned to human values - just aligned for evil.

Comment by dr_s on Gradual Disempowerment, Shell Games and Flinches · 2025-02-06T13:08:13.479Z · LW · GW

So, we will have nice, specific things like Prevention of Alzheimer's, or some safer, more reliable descendant of CRISPR may cure most genetic disease in existing people. Also, we will need to have some conversation because the human economy will be obsolete and incentives for states to care about people will be obsolete.

I feel like the fundamental problem with this is that while scientific and technological progress can be advanced intentionally, I can't think of an actual example of large-scale social change happening in some kind of planned way. Yes, the thoughts of philosophers and economists have some influence on it, but it almost never takes the shape of whatever they originally envisioned. I don't think Karl Marx would have been super happy with the USSR. And very often the causal arrow goes the other way around: philosophers and economists express and give shape to a sentiment that already exists, formless, in the zeitgeist, due to various circumstances changing and causing a corresponding cultural shift. There is a feedback loop there, but generally speaking, the idea that we can even have intentional "conversations" about these things and somehow steer them very meaningfully seems more wishful thinking than reality to me.

It generally goes that Scientist Invents Thing, unleashes it into the world, and then everything inevitably and chaotically slides towards the natural equilibrium point of the new regime. 

Comment by dr_s on Gradual Disempowerment, Shell Games and Flinches · 2025-02-06T13:02:45.639Z · LW · GW

I think the shell games point is interesting though. It's not psychoanalysing (one can think that people are in denial or have rational beliefs about this, not much point second guessing too far), it's pointing out a specific fallacy: a sort of god of the gaps in which every person with a focus on subsystem X assumes the problem will be solved in subsystem Y, which they understand or care less about because it's not their specialty. If everyone does it, that does indeed lead to completely ignoring serious problems due to a sort of bystander effect.

Comment by dr_s on Gradual Disempowerment, Shell Games and Flinches · 2025-02-06T12:58:36.679Z · LW · GW
Comment by dr_s on The Clueless Sniper and the Principle of Indifference · 2025-02-04T15:59:30.965Z · LW · GW

I suppose that a Gaussian is technically the correct prior for "a very high number of error factors with a completely unknown but bounded probability distribution". But in reality, that's not a good description of this specific situation, even with as much ignorance as you want thrown in.

Comment by dr_s on The Clueless Sniper and the Principle of Indifference · 2025-02-04T07:47:40.983Z · LW · GW

I think for this specific example the superior is wrong because realistically we can form an expectation of the distribution of those factors. Just because we don't know doesn't mean it's actually a Gaussian - some factors, like the Coriolis force, are systematic. If the distribution were "a ring of 1 m around the aimed point", then you would know for sure you won't hit the terrorist that way, but have no clue whether you'll hit the kid.

Also, even if the distribution were Gaussian, if it's broad enough the difference in probability between hitting the terrorist and hitting the kid may simply be too small to matter.
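To put numbers on it, here's a toy sketch (the 1 m separation and both spread values are made up): compare the density at the aim point with the density 1 m away, for a tight versus a broad isotropic Gaussian scatter.

```python
import math

def pdf2d(r, sigma):
    # isotropic 2D Gaussian density at radial distance r from the aim point
    return math.exp(-r ** 2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

# terrorist at the aim point (r = 0), kid 1 m away (r = 1); sigma in metres
for sigma in (0.1, 10.0):
    ratio = pdf2d(0.0, sigma) / pdf2d(1.0, sigma)
    print(f"sigma={sigma}: density ratio terrorist/kid = {ratio:.3g}")
```

With a tight scatter the terrorist is overwhelmingly more likely to be hit; with a broad one the two densities are nearly identical, which is the point above.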

Comment by dr_s on [deleted post] 2025-02-01T10:43:20.885Z

I mean, yes, humans make mistakes too. Do our most high-level mistakes, like "Andrew Wiles' first proof of Fermat's Theorem was wrong", much affect our ability to be vastly superior to chimpanzees in any conflict with them?

Comment by dr_s on Tetherware #1: The case for humanlike AI with free will · 2025-02-01T10:39:43.323Z · LW · GW

consciousness is inherently linked to quantum particle wavefunction collapse

As someone with quite a bit of professional experience working with QM, that sounds like a bit of a god of the gaps. We don't even know what collapse means, in practice. All we know about consciousness is that it seems to be a classical enough phenomenon to experience only one branch of the wavefunction. There's no particular reason why there can't be more "you"s out there in the Hilbert space, equally convinced that their branch is the only one into which everything mysteriously collapsed.

Comment by dr_s on Thread for Sense-Making on Recent Murders and How to Sanely Respond · 2025-02-01T09:05:09.653Z · LW · GW

Which other people have described the situation otherwise and where? Genuine question, I'm pretty much learning about all of this here.

Comment by dr_s on Fertility Will Never Recover · 2025-01-31T08:44:51.245Z · LW · GW

What? If every couple had only one child, the population would halve with each generation. That's what they mean. Replacement rate requires more than one child per couple.
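The arithmetic is easy to sanity-check with a toy model (deliberately crude assumptions: everyone pairs up, nobody dies before reproducing, generations don't overlap):

```python
# toy model: every couple has exactly k children, generations don't overlap
def population(start, k, generations):
    pop = start
    for _ in range(generations):
        pop = (pop // 2) * k  # pop // 2 couples, k children each
    return pop

print(population(1_000_000, 1, 3))  # one child per couple: halves every generation
print(population(1_000_000, 2, 3))  # two children per couple: stable
```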

Comment by dr_s on Fertility Will Never Recover · 2025-01-31T08:43:34.522Z · LW · GW

I mean, the whole point was "how can we have fertility but also not be a dystopia". You just described a dystopia. It's also kind of telling that the only way you can think of to make people have children - something that is supposedly a joyous experience - is "have a tyrannical dictator make it very clear that they'll make sure the alternative is even worse". Someone thinking this way is part of the problem more than of the solution.

Comment by dr_s on Fertility Will Never Recover · 2025-01-31T08:38:30.900Z · LW · GW

I honestly think "find the elixir of immortality within a couple of generations" is not what I'd call a pragmatic plan to solve this. Personally I don't think having 2 or 3 children would necessarily be such a curse in a different kind of world. A few obvious changes that I think would help towards that:

  • short of immortality, any extension of youth helps. Part of the problem here is that by the time we feel like we've got our shit sorted out, we're almost too old to have children;

  • artificial wombs. Smooth out the risks of pregnancy and eliminate the biological divide between men and women in this deal;

  • houses, houses, houses. Children need space. People want to give their children space. "Shit sorted out" almost always includes buying a house. Build more fucking houses. Have priorities;

  • a less neurotic culture around children. We kept raising the bar on what it means to be a good parent, and then we're surprised so many people see it as way too stressful and hard for them. Make giving children more independence not only legal where it currently isn't, but normal. That has the double benefit of actually helping the psychological growth of those children (which it's known to do) and of leaving more free time to the parents. Are there risks? Yes, but all life comes with risks, they can be mitigated in other ways, and I feel these risks are perceived as far greater than the statistics would justify;

  • get our priorities straight about work. Look, yes, productivity is important and all. But effectively our society tells people that if you want to work towards developing a new sports betting app, that's £80,000 per year; if you want to work towards making sure the city doesn't fall to dysentery by keeping it clean, that's £30,000 per year; and if you want to have and raise the next generation, a £100,000 lump sum for 18 years of work would be the wildest thing we can think of. Obviously this is not even on the same scale. Pay a straight up parental sabbatical for people at peak reproductive age (say, 25-35), see what happens. A time to do mostly parenting and nothing else, maybe some easy part time work on the side. Does that mean losing some peak productive years? Yes, of course. Would that productivity be best used for society making sure all the middle management is properly middle managed in the umpteenth marketing company?

The problem is honestly that this issue is so polarized. Doing anything about it is now associated with being with the religious right wing (or worse, the racist right wing specifically worried about white people being out-bred), which means the actually more liberal and rational parts of the centre-right, centre, and left abhor touching it and have to pretend there's no problem. The Amish thing is a self-fulfilling prophecy. I think the way forward in this sense would be to spin it: not "you must have children for the sake of humanity's future!", but "our inability to allow people to have children is actually ruining their potential happiness". There are already some efforts in that direction but they feel quite half-assed, and I think the left in particular focuses too much on the economic aspects alone, without seeing that there's a bit more at play.

Comment by dr_s on Don’t ignore bad vibes you get from people · 2025-01-23T08:04:13.440Z · LW · GW

I mean, the problem of "my brain gets bad vibes too easily" is more general. Prejudice is a very common manifestation of it, but it's something that can happen in other ways, and in the limit, as mentioned, you get bad vibes from everyone because you're just paranoid and it isolates you. I think this is more an issue of you trying to get a sense of how good your intuition is in the first place, and possibly examine it to move those intuitive vibes to the conscious level. Like for example there are certain patterns in speech and attitude that scream "fake" to me, but it feels like I could at least try describing them.

Comment by dr_s on The benevolence of the butcher · 2025-01-18T11:05:30.036Z · LW · GW

Thanks! I've actually seen some more recent posts that got pretty popular outlining this same argument, so I guess I'm... happy... that it's gaining some traction? However happy one can be to see the same prophecy of doom repeated and validated by other people who are just as unlikely to change the current trajectory of the world as me.

Comment by dr_s on Passages I Highlighted in The Letters of J.R.R.Tolkien · 2025-01-14T11:41:34.782Z · LW · GW

Possibly perfectionism? I experience this form of creative paralysis a lot - as soon as I get enough into the weeds of one creative form I start seeing the endless ramifications of the tiniest decision and basically can just not move a step without trying to achieve endlessly deep optimisation over the whole. Meanwhile people who can just not give a fuck and let the creative juices flow get shit done.

Comment by dr_s on Passages I Highlighted in The Letters of J.R.R.Tolkien · 2025-01-14T11:31:06.995Z · LW · GW

I think that's a bit too extreme. Are all machines bad? No, obviously better to have mechanised agriculture than be all peasants. But he is grasping something here which we are now dealing with more directly. It's the classic Moloch trap of "if you have enough power to optimise hard enough then all slack is destroyed and eventually life itself". If you thought that was an inevitable end of all technological development (and we haven't proven it isn't yet), you may end up thinking being peasants is better too.

Comment by dr_s on quila's Shortform · 2025-01-07T11:01:06.948Z · LW · GW

I think some believe it's downright impossible and others that we'll just never create it because we have no use for something so smart it overrides our orders and wishes. That at most we'll make a sort of magical genie still bound by us expressing our wishes.

Comment by dr_s on quila's Shortform · 2025-01-06T09:23:28.188Z · LW · GW

I feel like this is a bit incorrect. There are imaginable things that are smarter than humans at some tasks, as smart as average humans at others, and thus overall superhuman, yet controllable and therefore possible to integrate into an economy without immediately exploding into a utopian (or dystopian) singularity. The question is whether we are liable to build such things before we build the exploding-singularity kind, or if the latter is in some sense easier to build and thus stumble upon first. Most AI optimists think these limited and controllable intelligences are the default natural outcome of our current trajectory and thus expect mere boosts in productivity.

Comment by dr_s on Preference Inversion · 2025-01-04T20:53:30.327Z · LW · GW

I don't know about the Bible itself, but there's a long and storied tradition of self-mortification and denial of corporeity in general in medieval Christian doctrine and mysticism. If we want to be cute we could call that fandom, but after a couple thousand years of it, it ends up being as important as the canon text itself.

Comment by dr_s on The Online Sports Gambling Experiment Has Failed · 2025-01-04T18:54:20.919Z · LW · GW

I think the fundamental problem is that yes, there are people with that innate tendency, but that is not in the slightest bit helped by creating huge incentives for a whole industry to put its massive resources into finding ways to make that tendency become as bad as possible. Imagine if we had entire companies that somehow profited from depressed people committing suicide and had dedicated teams of behavioural scientists and quants crunching data and designing new strategies to make anyone who already has the tendency maximally suicidal. I doubt we would consider that fine, right? Sports betting (really, most addiction-based industries) is like that. The problem isn't just providing the activity, as some kind of relief valve. The problem is putting behind the activity a board of investors that wants to maximise profits and turns it into a full blown Torment Nexus. Capitalism is a terrible way of providing a service when the service is "self-inflicted misery".

Comment by dr_s on Comment on "Death and the Gorgon" · 2025-01-03T20:13:43.782Z · LW · GW

I definitely think this is a general cultural zeitgeist thing. The progressive thing used to be the positivist "science triumphs over all, humanity rises over petty differences, leaves childish things like religions, nations and races behind and achieves its full potential". But then people have grown sceptical of all grand narratives, seeing them as inherently poisoned because if you worry about grand things you are more inclined to disregard the small ones. Politics built around reclamation of personal identity, community, tradition as forms of resistance against the rising tide of globalising capitalism have taken over the left. Suddenly being an atheist was not cool any more, it was arrogant and possibly somewhat racist. And wanting to colonise space reeked of white man's burden even if there probably aren't many indigenous people to displace up there. So everything moved inwards, and the writers followed that trend.

Comment by dr_s on Comment on "Death and the Gorgon" · 2025-01-03T20:03:58.683Z · LW · GW

This is exactly the kind of thing Egan is reacting to, though—starry-eyed sci-fi enthusiasts assuming LLMs are digital people because they talk, rather than thinking soberly about the technology qua technology.

I feel like this borders on the strawman. When discussing this argument my general position isn't "LLMs are people!". It's "OK, let's say LLMs aren't people, which is also my gut feeling. Given that they still converse as intelligently as, or more intelligently than, some human beings whom we totally acknowledge as people, where the fuck does that leave us as to our ability to discern people-ness objectively? Because I sure as hell don't know, and envy your confidence, which must surely be grounded in a solid theory of self-awareness I can only dream of".

And then people respond with some mangled pseudoscientific wording for "God does not give machines souls".

I feel like my position is quite common (and is, for example, Eliezer's too). The problem isn't whether LLMs are people. It's that if we can simply handwave away LLMs as obviously and self evidently not being people then we can probably keep doing that right up to when the Blade Runner replicants are crying about it being time to die, which is obviously just a simulation of emotion, don't be daft. We have no criterion or barrier other than our own hubris, and that is famously not terribly reliable.

Comment by dr_s on Comment on "Death and the Gorgon" · 2025-01-03T19:51:36.471Z · LW · GW

Since ChatGPT came out I feel like Egan really lost the plot on that one, already when discussing it on Twitter. It felt like a combination of rejection of the "bitter lesson" (understandable: I too find it inelegant and downright offensive to my aesthetic sense that brute-force deep learning seems to work better than elegantly designed GOFAI, but whatever it is, it undeniably works) and political cognitive dissonance: if people who wrongthink support AI, and evil billionaires throw their weight behind AI, then AI is bad, and therefore it must be a worthless scam, because it's important to believe it is. (This can to some extent work if you persuade the investors of it; but in the end it's mostly a hopeless effort when all you have is angry philosophical rambling and all they have is a freaking magical computer program that speaks to you. I know which one is going to impress people more.)

So basically, yeah, I understand the reasons to be annoyed, disgusted, scared and offended by reality. But it is reality, and I think Egan is in denial of it, which seems to have resulted in a novel.

Comment by dr_s on Why I'm Moving from Mechanistic to Prosaic Interpretability · 2025-01-01T19:35:16.556Z · LW · GW

That sounds more like my intuition, though obviously there still have to be differences given that we keep using self-attention (quadratic in N) instead of MLPs (linear in N).

In the limit of infinite scaling, the fact that MLPs are universal function approximators is a guarantee that you can do anything with them. But obviously we still would rather have something that can actually work with less-than-infinite amounts of compute.
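As a back-of-the-envelope illustration of that trade-off (the width and sequence lengths below are made-up values): the attention score matrix scales quadratically in sequence length, while a token-wise MLP layer scales linearly, so which one dominates the compute depends on N relative to the model width.

```python
# Rough FLOP scaling, ignoring constants: the QK^T score matrix in
# self-attention costs ~N^2 * d, while pushing N tokens through one
# d -> 4d MLP matmul costs ~N * d * 4d.
def attention_score_flops(N, d):
    return N * N * d

def mlp_layer_flops(N, d, hidden_mult=4):
    return N * d * hidden_mult * d

d = 1024  # assumed model width
for N in (1_000, 10_000, 100_000):
    ratio = attention_score_flops(N, d) / mlp_layer_flops(N, d)
    print(f"N={N}: attention/MLP cost ratio = {ratio:.2f}")
```

The crossover sits around N = 4d: below it the MLP dominates the cost, above it the quadratic attention term takes over.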

Comment by dr_s on Why I'm Moving from Mechanistic to Prosaic Interpretability · 2024-12-30T12:56:28.388Z · LW · GW

Interesting. But CNNs were developed originally for a reason to begin with, and MLP-mixer does mention a rather specific architecture as well as "modern regularization techniques". I'd say all of that counts as baking some inductive biases into the model, though I agree it's a very light touch.

Comment by dr_s on Why I'm Moving from Mechanistic to Prosaic Interpretability · 2024-12-30T08:12:39.888Z · LW · GW

Does it make sense to say there is no inductive bias at work in modern ML models? It seems clear that literally brute-force searching ALL THE ALGORITHMS would still be unfeasible no matter how much compute you throw at it. Our models are very general, but when we use e.g. a diffusion model for images, it exploits (and is biased towards) the kind of local structure we expect of images; when we use a transformer for text, it exploits (and is biased towards) the kind of sequential pair-correlation you see in natural language; etc.

Comment by dr_s on The Field of AI Alignment: A Postmortem, and What To Do About It · 2024-12-30T07:47:02.952Z · LW · GW

Generalize this story across a whole field, and we end up with most of the field focused on things which are easy, regardless of whether those things are valuable.

 

I would say this problem plagues more than just alignment, it plagues all of science. Trying to do everything as a series of individual uncoordinated contributions with an authority on top acting only to filter based on approximate performance metrics has this effect. 

Comment by dr_s on What Goes Without Saying · 2024-12-25T18:03:47.435Z · LW · GW

On this issue specifically, I feel like the bar for what counts as an actually sane and non-dysfunctional organization to the average user of this website is probably way too lofty for 95% of workplaces out there (to be generous!), so it's not even that strange that this would be the case.

Comment by dr_s on What Goes Without Saying · 2024-12-22T12:45:52.808Z · LW · GW

A whole lot of people, the vast majority that I've talked to, can easily answer this: "because they pay me and I'm not sure anyone else will", with a bit of "I know this mediocrity well, and the effort to learn a new one only to find it's not better would drain what little energy I have left".

Or "last time I did that I ended up in this one which is even worse than the previous, so I do not wish to tempt fate again".

Comment by dr_s on Review: Dr Stone · 2024-12-22T12:40:51.483Z · LW · GW

Not just that, but as per manga spoilers:

The US already has a bunch of revived people going, including a Senku-level rationalist and scientist who has discovered the revival fluid in parallel and is in fact much less inclined to be forgiving, wanting the exact opposite of Tsukasa: to take advantage of the hard reset to build a full technocracy. By the time Senku & co. arrive there, they already have automatic firearms and WW1-era planes. So essentially Tsukasa's plan was always absolutely doomed. Just as happened before, one day backwards, isolationist Japan would wake up to find US gunships with superior firepower at its gates, and would be able to do nothing at all to stop them.

Comment by dr_s on Review: Breaking Free with Dr. Stone · 2024-12-22T12:11:37.962Z · LW · GW

It's not about science as a whole, but Assassination Classroom features one of the most beautiful uses of actual, genuine, 100% correctly represented math in fiction I've ever seen.

Spoilers:

During one of the exams, Karma is competing against the Principal's son for top score. One of the problems involves calculating the volume of the Wigner-Seitz cell in a body-centered cubic lattice. This is obviously quite hard for middle schoolers, but believable for an exam whose explicit purpose was to test them to their limits and let the very best rise to the top. The Principal's son tries to brute force the problem by decomposing the shape into a series of pyramids - doable, but very tedious. Meanwhile Karma realizes that it's as simple as noticing that all atoms are equivalent and must have the same volume, and therefore there's a simple and beautiful symmetry argument for why the volume is exactly 1/2 of the cubic unit cell. Which doubles as a metaphor for how everyone has their talents and domain they excel in - a realization Karma reaches thanks to his character growth. Absolutely top notch writing stuff.
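That symmetry argument is also easy to verify numerically. Here's a quick Monte Carlo sketch (unit cell side set to 1, an arbitrary choice): sample random points in the conventional cubic cell and check what fraction lies closest to the body-centre atom. Since the two atoms per cell have congruent Voronoi (Wigner-Seitz) cells, that fraction should converge to 1/2.

```python
import random

random.seed(0)

# BCC lattice points covering a 3x3x3 block of cells: cell corners + body centres
points = []
for i in range(-1, 3):
    for j in range(-1, 3):
        for k in range(-1, 3):
            points.append((i, j, k))
            points.append((i + 0.5, j + 0.5, k + 0.5))

centre = (0.5, 0.5, 0.5)  # the body-centre atom of the cell we sample in

def nearest(p):
    # nearest lattice point by squared Euclidean distance
    return min(points, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

samples = 20_000
hits = sum(nearest((random.random(), random.random(), random.random())) == centre
           for _ in range(samples))
print(hits / samples)  # approaches 0.5, i.e. half the unit cell volume
```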

Comment by dr_s on Review: Dr Stone · 2024-12-22T12:01:20.341Z · LW · GW

Senku definitely holds that position, and of the authors I wouldn't be surprised if Boichi at least did - he is famously a big lover of classic science fiction. If you check out his Dr. Stone: Byakuya solo spinoff manga, it starts out as a simple side story, showing the life of Senku's dad and his astronauts companions in space, and then spirals out in a completely insane direction involving essentially an AI singularity (understandably, it's not canon).

There is a certain "Jump heroes shouldn't kill wantonly" vibe I guess but truth be told Jump heroes have gotten significantly more willing to dirty their hands recently (now Kagurabachi seems set to become the next big thing, and Chihiro has a body count in the dozens at this point). So I don't think editorial fiat explains this either.

It's really part of the manga's fantasy, as in, realistically, sure, Tsukasa would have been killed or kept in stone. But just like everyone is able to make up really complicated fully functioning devices with rudimentary means, everyone is able to reach Aumann Agreement within a relatively short time of being proven wrong. That's just how the world rolls.

Comment by dr_s on Review: Dr Stone · 2024-12-18T15:18:50.045Z · LW · GW

Tsukasa loses everything and has to settle for being bought off, in essence, because Senku manages to accumulate enough of a technological advantage even without any additional revivals. (The series unconvincingly says that Tsukasa's politics were superficial and so he got what he really wanted in the end, to try to rationalize how it worked out for him. Seems like cope to me!)

The story plays a bit of a sleight of hand there with Tsukasa having the additional motivation of saving his little sister, which is a pity because they could have at least played up a bit more his conflict with Ryusui (who really seems to stand for everything he hates). But I suppose given his situation, gracefully accepting defeat is reasonably within his personality. He's not insane or bloodthirsty. After realising he's lost, lashing out would only mean humanity risks losing it all, while he can't win any more anyway. Besides, his whole cadre has shown itself to be either incompetent or willing to betray him (the number of people who either switched to Senku's side at the first sign of technology or downright backstabbed him is staggering), so maybe he's just realised his cause never had much of a chance to begin with. He basically never found another idealist like him; he press-ganged some people who had no other alternative and found a few thugs who only wanted a chance to be violent assholes. That's not how you build a functional army.

Comment by dr_s on Review: Breaking Free with Dr. Stone · 2024-12-18T11:04:51.842Z · LW · GW

Another Dr. Stone fan here. I will definitely vouch for this show. You have to go in prepared for the anime-ness of it all - this is not actual science and engineering any more than your average "spokon" show represents the actual practice of whatever sport it involves. It's not accidental that Riichiro Inagaki, the writer of the manga this show adapts, previously worked on Eyeshield 21, a hilarious and over-the-top take on American football, wherein Japanese high school teams field running backs as fast as anyone in the NFL and there's a guy who is 2.04 m tall, weighs 130 kg, and regularly breaks his opponents' arms by tackling them as a tactic. Inagaki is also currently writing Trillion Game, a series that takes a similar bombastic approach to the world of start-up founding and venture capital.

The essence of this approach IMO is to represent not the actual thing, but a sort of abstracted and stylised spirit of the thing - all emotions dialled up to eleven and compressed. But at that, I think Dr. Stone really excels. It may be one of the most genuinely humanistic works of science fiction ever made. It is still far more willing to delve into the science and technology of the various devices that get MacGyver'd to fix the problem at hand than any other fictional story featuring some kind of genius protagonist I've ever seen, and at the same time it has deep love and respect for all aspects of human craftsmanship and ingenuity. It is very adamant that for all his genius, protagonist Senku couldn't do much without his cohort of friends, who put in a lot of hard work and combine their various skills, which include manual abilities, brute strength, and soft skills such as psychology and business sense. And it really drives at the emotional core of science, something that we don't see often. Most fiction and popular culture puts science and emotion at odds. Dr. Stone shouts in your face that science is beautiful, AND cool, AND one of the best human achievements ever, both in itself and because it makes people's lives better and happier. There's a scene where a short-sighted character gets her first pair of glasses, and if you can relate to that? It will make you cry.

A side note as a manga reader for this: the story is over, and I think the new season of the anime that's starting next year will wrap things up. As it often happens with long-running stories, the quality isn't constant all throughout - there are some arcs in the middle that are somewhat less compelling than the absolutely banger beginning. But for a manga it has what I would consider a very good conclusion, which still makes it one of the most satisfying reads around in a medium that is infamous for its tendency to fizzle out with poorly planned endings. Another thing of note is that Boichi, the artist, is famously a classic sci-fi aficionado, and I'm sure his influence is all over this work too. Without going into spoilers, the ending and explanation for the petrification necessarily veer into high-concept sci-fi, but I think it's quite an interesting take and it should probably also find a receptive audience among the readers of this site. So, you know. Go read/watch it.

Comment by dr_s on What are the good rationality films? · 2024-12-15T22:02:04.042Z · LW · GW

Doubt: late answer because I just watched this two days ago, but I found it a fantastic exploration of the problem of reasoning and making decisions with incomplete information and very high stakes. Part of the point is that the viewer gets no privileged outlook, only the same information as the PoV characters, which means the movie makes you experience the problem deeply.

Comment by dr_s on Biological risk from the mirror world · 2024-12-15T11:44:09.202Z · LW · GW

Antibodies not being chirality-dependent doesn't mean there aren't other fundamental links in the chain that leads to antibodies being deployed in the first place which are. Mostly I imagine the risk is that we have a lot of systems optimized for dealing with life of a certain chirality. They may be able to cope with the opposite chirality, but less so. COVID alone showed what happens when something far less alien, but just barely out of distribution for our current immune defenses, arrives: literally everyone in the world gets it in a matter of months, and a non-negligible percentage dies, even though the pathogen itself is no more complex or virulent than others we deal with on the daily. And COVID was easy mode. We have examples of far more apocalyptic outcomes from immune-naive populations coming into contact with new pathogens.

Here we're not even talking about otherwise innocuous entities. E. coli can and will kill you if it gets in the wrong place while your defenses are down, no mirroring necessary. Staph. aureus is everywhere already and will eat your flesh while you still live if given the chance. The only reason we coexist with these threats is that we are in an armed truce: they can stay within their turf, but as soon as they try to go where they don't belong, they get terminated with maximum prejudice. Immuno-compromised people have a lot more to fear from them. Imagining a version of them that is both antibiotic-resistant (because I bet that's also a consequence of chirality) and able to evade at least the first few layers of immune defenses, until somehow the system scrambles to compensate and manages to churn out a counter-measure, is terrifying enough. That the immune system may eventually cope with them doesn't mean it wouldn't be an apocalyptic pandemic (and worse, one that affects man and animal alike, all at once).

Comment by dr_s on Cost, Not Sacrifice · 2024-11-25T10:46:39.338Z · LW · GW

I think it's a very visible example that right now is particularly often brought up. I'm not saying it's all there is to it, but I think the fundamental visceral reaction to the very idea of self-mutilation is an important and often overlooked element of why some people would be put off by the concept. I actually think it makes the whole thing a lot more understandable, in terms of where it comes from, than the generic "well they're just bigoted and evil" stuff people come up with in extremely partisan arguments on the topic. These sorts of psychological processes - the fact that we may first have a gut-level reaction, and only later rationalize it by constructing an ideological framework to justify why the things that repulse us are evil - are very well documented, and happen all over the place. That does not mean everyone who disagrees with me does so because of it (nor that everyone who agrees doesn't!), but it would be foolish to just pretend this never happens because it sounds a bit offensive to bring up in a debate. The entire concept of rationality is based on the awareness that yeah, we're constantly affected by cognitive biases like these, and separating the wheat from the chaff is hard work.

And by the way, it's an excellent example of the reverse too. Just as people who are not dysphoric are put off by mutilation, people who are dysphoric are put off by the feeling of having something grafted onto their bodies that doesn't belong. Which is sort of the flip side of it. Essentially we tend to have a mental image of our bodies and a strong aversion to that shape being altered or disturbed in some way (which makes all kinds of sense evolutionarily, really). Ironically enough, it's probably via the mechanism of empathy that someone can see someone else do something to their body that feels "wrong" and cringe/be grossed out on their behalf (if you think trans issues are controversial, consider the reactions some people can have even to things like piercings in particularly sensitive places).

Comment by dr_s on Cost, Not Sacrifice · 2024-11-22T17:42:09.868Z · LW · GW

Well, yes, it's true, and obviously those things do not necessarily all have genuine infinite value. I think what this really means in practice is not that all non-fungible things have infinite value, but that because they are non-fungible, most judgements involving them are not as easy or straightforward as simple numerical comparisons. Preferences end up being expressed anyway, but just because practical needs force a square peg into a round hole doesn't make it fit any better. I think this manifests in practice as high rates of hesitation or regret for decisions involving such things, and the general difficulty of really squaring decisions like these. We can agree in one sense that several trillion dollars in charity are a much greater good than someone not having their fingers cut off, and yet we generally wouldn't call that person "evil" for picking the latter option, because we understand perfectly how one's own fingers might feel more valuable to them. If we were talking about fungible goods we'd feel very differently. Replace cutting off one's fingers with e.g. demolishing their house.

Comment by dr_s on What are the good rationality films? · 2024-11-22T17:36:42.495Z · LW · GW

Probabilities for physical processes are encoded in quantum wavefunctions one way or another, so I'd put that under the umbrella of "winning a staring contest with the laws of physics", which was basically what the average Spiral Energy user did.

And then again, while optimistic, the series still does show Simon using his power responsibly and essentially renouncing it to avoid causing the Spiral Nemesis. He doesn't just keep growing everything exponentially and decide nothing bad can ever possibly come out of it.

Comment by dr_s on What are the good rationality films? · 2024-11-21T18:13:56.970Z · LW · GW

I think the core message of optimism is a positive one, but of course IRL we have to deal with a world whose physical laws do not in fact seem to bend endlessly under sufficient application of MANLY WARRIOR SPIRIT, and thus that forces us to be occasionally Rossiu even when we'd want to be Simon. Memeing ourselves into believing otherwise doesn't really make it true.

Comment by dr_s on Making a conservative case for alignment · 2024-11-21T15:41:03.813Z · LW · GW

People often say that wars are foolish, and both sides would be better off if they didn't fight. And this is standardly called "naive" by those engaging in realpolitik. Sadly, for any particular war, there's a significant chance they're right. Even aside from human stupidity, game theory is not so kind as to allow for peace unending.

I'm not saying obviously that ALL conflict ever is avoidable or irrational, but there are a lot that are:

  1. caused by a miscommunication/misunderstanding/delusional understanding of reality;
  2. rooted in a genuine competition between conflicting interests, but those interests only pertain to a handful of leaders, and most of the people actually doing the fighting really have no genuine stake in it, just false information and/or a giant coordination problem that makes it hard to tell those leaders to fuck off;
  3. rooted in a genuine competition between conflicting interests between the actual people doing the fighting, but the gains are still not so large to justify the costs of the war, which have been wildly underestimated.

And I'd say that just about makes up a good 90% of all conflicts. There's a thing where people who are embedded in specialised domains start seeing the trees ("here is the complex clockwork of cause-and-effect that made this thing happen") and missing the forest ("if we weren't dumb and irrational as fuck, none of this would have happened in the first place"). The main point of studying past conflicts should be to distil, here and there, a bit of wisdom about how in fact a lot of that stuff is entirely avoidable if people can just stop being absolute idiots now and then.

Comment by dr_s on Cost, Not Sacrifice · 2024-11-21T10:39:53.808Z · LW · GW

I think there's one fundamental problem here IMO, which is that not everything is fungible, and thus not everything manages to actually comfortably exist on the same axis of values. Fingers are not fungible. At the current state of technology, once severed, they're gone. In some sense, you could say, that's a limited loss. But for you, as a human being, it may as well be infinite. You just lost something you'll never ever have back. All the trillions and quadrillions of dollars in the world wouldn't be enough to buy it back if you regretted your choice. And thus, while in some sense its value must be limited (it's just the fingers of one single human being after all, no? How many of those get lost every day simply because it would have been a bit more expensive to equip the workshop with a circular saw that has a proper safety stop?), in some other sense the value of your fingers to you is infinite, completely beyond money.

Bit of an aside - but I think this is part of what causes such a visceral reaction in some people to the idea of sex reassignment surgery, which then feeds into transphobic rationalizations and ideologies. The concept of genuinely wanting to get rid of a part of your body that you can't possibly get back feels so fundamentally wrong on some level to many people that for them it pretty much seals the deal, on its own, that you must either be insane or have been manipulated by some kind of evil outside force.