Posts

An argument that consequentialism is incomplete 2024-10-07T09:45:12.754Z
Population ethics and the value of variety 2024-06-23T10:42:21.402Z
Book review: The Quincunx 2024-06-05T21:13:55.055Z
A case for fairness-enforcing irrational behavior 2024-05-16T09:41:30.660Z
I'm open for projects (sort of) 2024-04-18T18:05:01.395Z
A short dialogue on comparability of values 2023-12-20T14:08:29.650Z
Bounded surprise exam paradox 2023-06-26T08:37:47.582Z
Stop pushing the bus 2023-03-31T13:03:45.543Z
Aligned AI as a wrapper around an LLM 2023-03-25T15:58:41.361Z
Are extrapolation-based AIs alignable? 2023-03-24T15:55:07.236Z
Nonspecific discomfort 2021-09-04T14:15:22.636Z
Fixing the arbitrariness of game depth 2021-07-17T12:37:11.669Z
Feedback calibration 2021-03-15T14:24:44.244Z
Three more stories about causation 2020-11-03T15:51:58.820Z
cousin_it's Shortform 2019-10-26T17:37:44.390Z
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z
How to formalize predictors 2018-06-28T13:08:11.549Z
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z
Understanding is translation 2018-05-28T13:56:11.903Z
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z
Beware arguments from possibility 2018-02-03T10:21:12.914Z
An experiment 2018-01-31T12:20:25.248Z
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z
What useless things did you understand recently? 2017-06-28T19:32:20.513Z
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z
Overpaying for happiness? 2015-01-01T12:22:31.833Z
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z
Hal Finney has just died. 2014-08-28T19:39:51.866Z
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z

Comments

Comment by cousin_it on Dentistry, Oral Surgeons, and the Inefficiency of Small Markets · 2024-11-02T09:44:40.712Z · LW · GW

Tragedy of capitalism in a nutshell. The best action is to dismantle the artificial scarcity of doctors. But the most profitable action is to build a company that will profit from that scarcity - and, when it gets big enough, lobby to perpetuate it.

Comment by cousin_it on The Alignment Trap: AI Safety as Path to Power · 2024-10-30T10:09:26.056Z · LW · GW

Yeah, this is my main risk scenario. But I think it makes more sense to talk about imbalance of power, not concentration of power. Maybe there will be one AI dictator, or one human+AI dictator, or many AIs, or many human+AI companies; but anyway most humans will end up at the bottom of a huge power differential. If history teaches us anything, it's that this is a very dangerous prospect.

It seems the only good path is aligning AI to the interests of most people, not just its creators. But there's no commercial or military incentive to do that, so it probably won't happen by default.

Comment by cousin_it on Electrostatic Airships? · 2024-10-27T11:46:42.168Z · LW · GW
Comment by cousin_it on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-10-27T09:42:26.743Z · LW · GW

The British weren't much more compassionate. North America and Australia were basically cleared of their native populations and repopulated with Europeans. Under British rule in India, tens of millions died in a series of famines, which stopped abruptly after independence.

Colonialism didn't end due to benevolence. Wars for colonial liberation continued well after WWII and were very brutal - the Algerian War, for example. I think the actual reason is that colonies stopped making economic sense.

So I guess the difference between your view and mine is that I think colonialism kept going basically as long as it benefited the dominant group. Benevolence or malevolence didn't come into it much. And if we get back to the AI conversation, my view is that when AIs become more powerful than people and can use resources more efficiently, the systemic gradient in favor of taking everything away from people will be just way too strong. It's a force acting above the level of individuals (hmm, individual AIs) - it will affect which AIs get created and which ones succeed.

Comment by cousin_it on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-10-25T21:07:38.746Z · LW · GW

I think a big part of the problem is that in a situation of power imbalance, there's a large reward lying around for someone to do bad things - plunder colonies for gold, slaves, and territory; raise and slaughter animals in factory farms - as long as the rest can enjoy the fruits of it without feeling personally responsible. There's no comparable gradient in favor of good things ("good" is often unselfish, uncompetitive, unprofitable).

Comment by cousin_it on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-10-24T14:28:58.622Z · LW · GW

I'm afraid in a situation of power imbalance these interpersonal differences won't matter much. I'm thinking of examples like enclosures in England, where basically the entire elite of the country decided to make poor people even poorer, in order to enrich themselves. Or colonialism, which lasted for centuries with lots of people participating, and the good people in the dominant group didn't stop it.

To be clear, I'm not saying there are no interpersonal differences. But if we find ourselves at the bottom of a power imbalance, I think those above us (even if they're very similar to humans) will just systemically treat us badly.

Comment by cousin_it on What is malevolence? On the nature, measurement, and distribution of dark traits · 2024-10-23T15:47:02.564Z · LW · GW

I'm not sure focusing on individual evil is the right approach. It seems to me that most people become much more evil when they aren't punished for it. A lot of evil is done by organizations, which are composed of normal people but can "normalize" the evil and protect the participants. (Insert usual examples such as factory farming, colonialism and so on.) So if we teach AIs to be as "aligned" as the average person, and then AIs increase in power beyond our ability to punish them, we can expect to be treated as a much-less-powerful group in history - which is to say, not very well.

Comment by cousin_it on OpenAI defected, but we can take honest actions · 2024-10-21T12:25:53.247Z · LW · GW

The situation where AI is a good tool for manipulating public opinion, and the leading AI company has a bad reputation, seems unstable. Maybe AI just needs to get a little better, and then AI-written arguments in favor of AI will win public opinion decisively? This could "lock in" our trajectory even worse than now, and could happen long before AGI.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-13T10:57:04.546Z · LW · GW

No problem about the long reply. I think your arguments are good and give me a lot to think about.

My attempt at interpreting what you mean is that you’re drawing a distinction between morality about world-states vs. morality about process, internal details, experiencing it, ‘yourself’.

I just thought of another possible classification: "zeroth-order consequentialist" (care about doing the action but not because of consequences), "first-order consequentialist" (care about consequences), "second-order consequentialist" (care about someone else being able to choose what to do). I guess you're right that all of these can be translated into first-order. But by the same token, everything can be translated to zeroth-order. And the translation from second to first feels about as iffy as the translation from first to zeroth. So this still feels fuzzy to me, I'm not sure what is right.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-13T09:39:24.315Z · LW · GW

Maybe HPMOR? A lot of people treated it like "our guru has written a fiction book that teaches you how to think more correctly, let's get more people to read it". And maybe it's possible to write such a book, but to me this one was charming in the moment and fell off hard when I reread it later.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-13T09:25:10.997Z · LW · GW
Comment by cousin_it on Prices are Bounties · 2024-10-13T08:52:00.205Z · LW · GW

I guess it depends on your stance on monopolies. If you think monopolies result only from government interference in the market, then you'll be more laissez-faire. But if you notice that firms often want to join together in cozy cartels and have subtle ways to do it (see RealPage), or that some markets lead to natural monopolies, then protecting people from monopoly prices and reducing the reward for monopolization becomes a real problem. And yeah, banning price gouging is a blunt instrument - but it has the benefit of being direct. Fighting the more indirect aspects of monopolization is harder. So in this branch of the argument, if you want to allow price gouging, that has to come in tandem with better antimonopoly measures.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-11T17:59:35.729Z · LW · GW

I mean, in science people will use Newton's results and add their own. But they won't quote Newton's books at you on every topic under the sun, or consider you a muggle if you haven't read these books. He doesn't get the guru treatment that EY does.

Comment by cousin_it on Martín Soto's Shortform · 2024-10-11T15:34:45.631Z · LW · GW

Here's another possible answer: maybe there are some aspects of happiness that we usually get as a side effect of doing other things, not obviously connected to happiness. So if you optimize to exclude things whose connection to happiness you don't see, you end up missing some "essential nutrients" so to speak.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-11T15:27:49.351Z · LW · GW

rationalist Gnostics are still basically able to deal with hylics

That sounds a bit like "dealing with muggles". I think this kind of thing is wrong in general, it's better to not have gurus and not have muggles.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-11T13:51:08.950Z · LW · GW

I think most farmers would agree that there are many other jobs as useful as farming. But to a gnostic, the pneumatic/psychic/hylic classification is the most important fact about a person.

Comment by cousin_it on Rationalist Gnosticism · 2024-10-11T09:50:01.602Z · LW · GW

One big turn-off of Gnosticism for me is dividing people into three sorts, of which the most numerous (hylics) is the lowest. So groups with gnosticism-type beliefs will often think of themselves as distinct from, and superior to, regular people.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-09T16:44:52.671Z · LW · GW

Yeah. I think consequentialism is a great framing that has done a lot of good in EA, where the desired state of the world is easy to describe (remove X amount of disease and such). And this created a bit of a blindspot, where people started thinking that goals not natively formulated in terms of end states ("play with this toy", "respect this person's wishes" and such) should be reformulated in terms of end states anyway, in more complex ways. To be honest I still go back and forth on whether that works - my post was a bit polemical. But it feels like there's something to the idea of keeping some goals in our "internal language", not rewriting them into the language of consequences.

Comment by cousin_it on Thinking About a Pedalboard · 2024-10-08T18:43:34.584Z · LW · GW

Nothing to say on pedalboards that you don't know already, but here's another question: have you tried optimizing your setup process? 35 minutes feels a bit long. At some point I was doing setup for a band, and I got it down to 15 minutes by writing a script, optimizing which steps go together, and practicing.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-08T18:26:35.971Z · LW · GW

Here's maybe another point of view on this: consequentialism fundamentally talks about receiving stuff from the universe. An hour climbing a mountain, an hour swimming in the sea, or hey, an hour in the joy experience machine. The endpoint of consequentialism is a sort of amoeba that doesn't really need a body to overcome obstacles, or a brain to solve problems, all it needs to do is receive and enjoy. To the extent I want life to be also about doing something or being someone, that might be a more natural fit for alternatives to consequentialism - deontology and virtue ethics.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-08T07:31:47.047Z · LW · GW

Some people (like my 12yo past self) actually do want to reach the top of the mountain. Other people, like my current self, want things like take a break from work, get light physical exercise, socialize, look at nature for a while because I think it’s psychologically healthy, or get a sense of accomplishment after having gotten up early and hiked all the way up.

There's plenty of consequentialism debate in other threads, but here I'd just like to say that this snippet is a kind of unintentionally sad commentary on growing up. It's probably not even sad to you, but to me it evokes a feeling of "how do we escape from this change, even temporarily".

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-08T07:21:57.560Z · LW · GW

This post strikes me as saying something extremely obvious and uncontroversial, like “I care about what happens in the future, but I also care about other things, e.g. not getting tortured right now”. OK, yeah duh, was anyone disputing that??

I'm thinking about cases where you want to do something, and it's a simple action, but the consequences are complex and you don't explicitly analyze them - you just want to do the thing. In such cases I argue that reducing the action to its (more complex) consequences feels like shoehorning.

For example: maybe you want to climb a mountain because that's the way your heuristics play out, which came from evolution. So we can "back-chain" the desire to genetic fitness; or we can back-chain to some worldly consequences, like having good stories to tell at parties as another commenter said; or we can back-chain those to fitness as well, and so on. It's arbitrary. The only "bedrock" is that when you want to climb the mountain, you're not analyzing those consequences. The mountain calls you, it doesn't need to be any more complex than that. So why should we say it's about consequences? We could just say it's about the action.

And once we allow ourselves to do actions that are just about the action, it seems calling ourselves "consequentialists" is somewhere between wrong and vacuous. Which is the point I was making in the post.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-07T22:30:42.870Z · LW · GW

This is tricky. In the post I mentioned "playing", where you do stuff without caring about any goal, and most play doesn't lead to anything interesting. But it's amazing how many of humanity's advances were made in this non-goal-directed, playing mode. This is mentioned for example in Feynman's book, the bit about the wobbling plate.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-07T21:26:41.996Z · LW · GW

Maybe. Or maybe the wish itself is about climbing the mountain, just like it says, and the other benefits (which you can unwind all the way back to evolutionary ones) are more like part of the history of the wish.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-07T16:26:40.295Z · LW · GW

Yeah, I've been thinking along similar lines. Consequentialism stumbles on the richness of other creatures, and ourselves. Stumbles in the sense that many of our wishes are natively expressed in our internal "creature language", not the language of consequences in the world.

Comment by cousin_it on An argument that consequentialism is incomplete · 2024-10-07T16:02:49.574Z · LW · GW

Yeah, you can say something like "I want the world to be such that I follow deontology" and then consequentialism includes deontology. Or you could say "it's right to follow consequentialism" and then deontology includes consequentialism. Understood this way, the systems become vacuous and don't mean anything at all. When people say "I'm a consequentialist", they usually mean something more: that their wishes are naturally expressed in terms of consequences. That's what my post is arguing against. I think some wishes are naturally consequentialist, but there are other equally valid wishes that aren't, and expressing all wishes in terms of consequences isn't especially useful.

Comment by cousin_it on My Objections to "We’re All Gonna Die with Eliezer Yudkowsky" · 2024-09-26T08:51:27.068Z · LW · GW

The relevant point is his latter claim, in particular with respect to “learn ‘don’t steal’ rather than ‘don’t get caught’.” I think this is a very strong conclusion, relative to available data.

I think humans don't steal mostly because society enforces that norm. Toward weaker "other" groups that aren't part of your society (farmed animals, weaker countries, etc.) there's no such norm, and humans often behave badly toward such groups. And to AIs, humans will be a weaker "other" group. So if aligning AIs to the human standard is a complete success - if AIs learn to behave toward weaker "other" groups exactly as humans behave toward such groups - the result will be bad for humans.

It gets even worse because AIs, unlike humans, aren't raised to be moral. They're raised by corporations whose goal is to make money, with a thin layer of "don't say naughty words" morality. We already know corporations will break rules, bend rules, and lobby to change rules to make more money, and they don't really mind if people get hurt in the process. We'll see more of that behavior when corporations can make AIs to further their goals.

Comment by cousin_it on Struggling like a Shadowmoth · 2024-09-24T11:00:29.137Z · LW · GW
Comment by cousin_it on What are the best arguments for/against AIs being "slightly 'nice'"? · 2024-09-24T10:44:45.532Z · LW · GW

I think there isn't much hope in this direction. Most AI resources will probably be spent on competition between AIs, and AIs will self-modify to remove wasteful spending. It's not enough to have a weak value that favors us if there's a stronger value that paves over us. We're teaching AI based on human behavior and with a goal of chasing money, but people chasing money often harm other people, so why would AI be nicer than that? It's all just wishful thinking.

Comment by cousin_it on Slave Morality: A place for every man and every man in his place · 2024-09-20T09:50:03.986Z · LW · GW

I think Nietzsche would agree that "slave morality" originated with Jesus. The main new idea that Jesus brought as a moral philosopher was compassion, feeling for the other person. It's pretty hard to find in earlier sources; for example, the heroes of the Iliad hurt weaker people without a second thought.

To me it feels obvious that the idea of compassion needs to exist, and needs to have force. Because otherwise we'd have a human society operating by the laws of the natural world, and if you look at what animals do to each other, there's no limit to how bad things can get.

Can compassion also become a tool of power and abuse? Sure. But let's not go back to a world without compassion, please.

Comment by cousin_it on Hyperpolation · 2024-09-16T14:06:12.757Z · LW · GW

I'm not sure the article fully justifies the thesis. It shows hyperpolation only for a handful of cases where the given function is a slice of a simpler higher-dimensional function. But interpolation isn't limited to such cases. Interpolation is general: given any set of (x,y) pairs, there are reasonable ways to interpolate between them. Is there a nontrivial way of doing hyperpolation in general?
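
As a minimal illustration of that generality (a sketch with made-up data, not from the article): even plain piecewise-linear interpolation works for an arbitrary finite set of (x, y) pairs, with no assumption that they come from a slice of some simpler higher-dimensional function.

```python
import numpy as np

# Made-up data points: nothing here assumes they are a slice of a
# simpler higher-dimensional function.
xs = np.array([0.0, 1.0, 2.5, 4.0])
ys = np.array([1.0, -0.5, 3.0, 2.0])

# Piecewise-linear interpolation between the given pairs.
query = np.linspace(0.0, 4.0, 9)
print(np.interp(query, xs, ys))
```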

Comment by cousin_it on Book Recommendations for social skill development? · 2024-08-21T23:27:40.497Z · LW · GW

Sure. But even then, trying looks very different from not trying. I got a saxophone, played it by myself for a few weeks, then booked a lesson with a teacher. At the start of the lesson I played a few notes and said: "Anything jump out at you? I think some notes come out flat, and the sound isn't bright enough, anyway you see what I'm doing and any advice would be good." Then he gave me a lot of good advice tailored to my level.

Here's a video from Ben Finegold making the same point about learning chess:

"Is it possible to be good at chess when you start playing at 18?" Anything's possible. But if you want to get good at chess - I'm not saying you do, but if you do - then people who say things like "what opening should I play?" and "my rating's 1200, how do I get to 1400?" and you know, "my coach says this but I don't want to do that", or you know, "I lost five games in a row on chess.com so I haven't played in a week", those kind of things have nothing to do with getting better at chess. It's inquisitive of you, and that's what most people do. People who get better at chess play chess, study chess, think about chess, love chess and do chess stuff. People who don't get better at chess spend 90 percent of their energy thinking about "how do I get better" and asking questions about it. That's not how you get better at something. You want to get better at something, you do it and you work hard at it, and it's not just chess, that's your whole life.

Comment by cousin_it on An anti-inductive sequence · 2024-08-14T14:23:41.712Z · LW · GW

I think the idea you're looking for is Martin-Löf random sequences.

Comment by cousin_it on Ten counter-arguments that AI is (not) an existential risk (for now) · 2024-08-14T11:02:08.725Z · LW · GW
Comment by cousin_it on Rabin's Paradox · 2024-08-14T10:41:39.639Z · LW · GW

Yeah. See also Stuart's post:

Expected utility maximization is an excellent prescriptive decision theory... However, it is completely wrong as a descriptive theory of how humans behave... Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small bets behavior forces their utility to become far too concave.
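
To see how strong that constraint is, here's a rough numerical sketch. It assumes a wealth-independent CARA utility u(x) = -exp(-a*x), which is a simplification (Rabin's actual theorem needs no such assumption): an agent just barely risk-averse enough to turn down a 50-50 "lose $100 / gain $105" bet will turn down a 50-50 loss of $5000 no matter how large the offered gain.

```python
import math

# Sketch under the CARA assumption u(x) = -exp(-a*x): find the smallest
# risk-aversion coefficient at which the agent just rejects a 50-50
# "lose $100 / gain $105" bet, then see what gain larger bets would need.

def rejects_small_bet(a, loss=100.0, gain=105.0):
    # True if the agent prefers its current wealth to the 50-50 bet.
    return 0.5 * math.exp(a * loss) + 0.5 * math.exp(-a * gain) > 1.0

lo, hi = 1e-9, 1e-2
for _ in range(100):  # bisection for the threshold coefficient
    mid = (lo + hi) / 2
    lo, hi = (lo, mid) if rejects_small_bet(mid) else (mid, hi)
a = hi

def required_gain(loss):
    # Gain G that makes a 50-50 lose-`loss` / gain-G bet exactly acceptable,
    # or None if no finite gain is enough.
    worst = math.exp(a * loss)
    return None if worst >= 2.0 else -math.log(2.0 - worst) / a

for loss in (100, 1000, 5000, 10000):
    g = required_gain(loss)
    print(loss, "no finite gain is enough" if g is None else round(g))
```

The printed numbers blow up quickly: roughly $105 to accept a possible $100 loss, about $2000 for a $1000 loss, and no finite gain at all for $5000 or more.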

Comment by cousin_it on In Defense of Open-Minded UDT · 2024-08-14T06:17:44.780Z · LW · GW

Going back to the envelopes example, a nosy neighbor hypothesis would be "the left envelope contains $100, even in the world where the right envelope contains $100". Or if we have an AI that's unsure whether it values paperclips or staples, a nosy neighbor hypothesis would be "I value paperclips, even in the world where I value staples". I'm not sure how that makes sense. Can you give some scenario where a nosy neighbor hypothesis makes sense?

Comment by cousin_it on In Defense of Open-Minded UDT · 2024-08-13T10:28:25.426Z · LW · GW

Imagine if we had narrowed down the human prior to two possibilities, P_1 and P_2. Humans can’t figure out which one represents our beliefs better, but the superintelligent AI will be able to figure it out. Moreover, suppose that P_2 is bad enough that it will lead to a catastrophe from the human perspective (that is, from the P_1 perspective), even if the AI were using UDT with 50-50 uncertainty between the two. Clearly, we want the AI to be updateful about which of the two hypotheses is correct.

This seems like the central argument in the post, but I don't understand how it works.

Here's a toy example. Two envelopes, one contains $100, the other leads to a loss of $10000. We don't know which envelope is which, but it's possible to figure it out by a long computation. So we make a money-maximizing UDT AI, whose prior is "the $100 is in whichever envelope {long_computation} says". Now if the AI has time to do the long computation, it'll do it and then open the right envelope. And if it doesn't have time to do the long computation, and is offered the choice to open a random envelope or abstain, it will abstain. So it seems like ordinary UDT solves this example just fine. Can you explain where "updatefulness" comes in?
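
For concreteness, here's a minimal sketch of the expected-value logic in that toy example (the names and the stand-in for the long computation are just illustrative):

```python
GOOD, BAD = 100, -10_000

def decide(has_time_for_long_computation, long_computation=None):
    if has_time_for_long_computation:
        # Enough time: run the long computation and open the envelope it names.
        return f"open the {long_computation()} envelope"
    # Not enough time: a random envelope has expected value
    # 0.5 * 100 + 0.5 * (-10000) = -4950, so abstaining (value 0) is better.
    return "abstain" if 0.5 * GOOD + 0.5 * BAD < 0 else "open a random envelope"

print(decide(False))                 # abstain
print(decide(True, lambda: "left"))  # open the left envelope
```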

Comment by cousin_it on You don't know how bad most things are nor precisely how they're bad. · 2024-08-04T21:35:18.867Z · LW · GW
Comment by cousin_it on Dragon Agnosticism · 2024-08-02T16:14:31.039Z · LW · GW

I think even one dragon would have a noticeable effect on the population of large animals in the area. The huge flying thing just has to eat so much every day that it's not even fun to imagine being one. If we invent shapeshifting, my preferred shape would be some medium-sized bird that can both fly and dive in the water, so that it can travel and live off the land with minimal impact. Though if we do get such technology, we'd probably have to invent territorial expansion as well, something like creating many alternate Earths where the new creatures could live.

Comment by cousin_it on Universal Basic Income and Poverty · 2024-07-28T12:37:28.842Z · LW · GW
Comment by cousin_it on Universal Basic Income and Poverty · 2024-07-26T10:58:44.713Z · LW · GW

I'm not sure the “poverty equilibrium” is real. Poverty varies a lot by country and time period, and various policies in various places have helped with it, so UBI might help as well. Though I think other policies (like free healthcare, or fixing housing laws) might help more per dollar.

Comment by cousin_it on Universal Basic Income and Poverty · 2024-07-26T10:57:07.260Z · LW · GW
Comment by cousin_it on Demography and Destiny · 2024-07-22T09:11:48.509Z · LW · GW

I think the main point of the essay might be wrong. It's not necessarily true that evolution will lead to a resurgence of high fertility. Yes, evolution is real, but it's also slow: it works on the scale of human lifetimes. Culture today evolves faster than that. It's possible that culture can keep adapting its fertility-lowering methods faster than humans can evolve defenses against them.

Comment by cousin_it on Why Georgism Lost Its Popularity · 2024-07-21T22:27:56.965Z · LW · GW

I think you're right: Georgism doesn't get passed because it goes against the interests of landowners, who have overwhelming political influence. But if the actual problem we're trying to solve is high rents, maybe that doesn't require full Georgism? Maybe we just need to make construction legally easier. There's strong opposition to that too, but not as strong as the opposition of literally all landowners.

Comment by cousin_it on Ice: The Penultimate Frontier · 2024-07-16T10:49:20.900Z · LW · GW
Comment by cousin_it on Ice: The Penultimate Frontier · 2024-07-14T09:42:10.614Z · LW · GW

It seems to me that land of the same quality as this can already be bought more cheaply in many places. The post says the new land could be more valuable because of better governance, but governance is an outcome of human politics, so it's orthogonal to old/new land. In Jules Verne's Propeller Island, a power conflict eventually leads to the physical destruction of the island.

Comment by cousin_it on Reliable Sources: The Story of David Gerard · 2024-07-11T11:25:16.534Z · LW · GW

My impression is that Wikipedia was founded on an ideal of neutrality, but Gerard doesn't really believe in that ideal - he considers it harmful, a kind of shield for the status quo. That's a possible position, but I'm not sure how one can hold it and at the same time edit Wikipedia in good faith. Does anyone know how that can be justified?

Comment by cousin_it on When is a mind me? · 2024-07-09T08:23:30.344Z · LW · GW

anything that acts like us has our qualia

Well, a thing that acts like us in one particular situation (say, a thing that types "I'm conscious" in chat) clearly doesn't always have our qualia. Maybe you could say that a thing that acts like us in all possible situations must have our qualia? This is philosophically interesting! It makes a factual question (does the thing have qualia right now?) logically depend on a huge bundle of counterfactuals, most of which might never be realized. What if, during uploading, we insert a bug that changes our behavior in one of these counterfactuals - but then the upload never actually runs into that situation in the course of its life - does the upload still have the same qualia as the original person, in situations that do get realized? What if we insert many such bugs?

Moreover, what if we change the situations themselves? We can put the upload in circumstances that lead to more generic and less informative behavior: for example, give the upload a life where they're never asked to remember a particular childhood experience. Or just a short life, where they're never asked about anything much. Let's say the machine doing the uploading is aware of that, and allowed to optimize out parts that the person won't get to use. If there's a thought that you sometimes think, but it doesn't influence your I/O behavior, it can get optimized away; or if it has only a small influence on your behavior, a few bits' worth let's say, then it can be replaced with another thought that would cause the same few-bits effect. There's a whole spectrum of questionable things that people tend to ignore when they say "copy the neurons", "copy the I/O behavior" and stuff like that.

Comment by cousin_it on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T16:06:26.390Z · LW · GW
Comment by cousin_it on Doomsday Argument and the False Dilemma of Anthropic Reasoning · 2024-07-06T11:16:44.892Z · LW · GW

Since you conclude that both SIA and SSA are flawed because we know a lot about our parents, let's see if that works. Imagine a world where people spend their first years not knowing much about their parents, or about human reproduction. Suppose they each live in a kind of egg, and receive newscasts from outside only carrying information about the particular anthropic problem we want them to solve (e.g. "the world currently contains N people", or "there are two theories about astronomy" and so on). How should people solve anthropic problems under such conditions? Should they use SIA or SSA? Or should they still reject both, but based on some other argument, and the "we know a lot about our parents" argument was a red herring?