illusionists actually do not experience qualia
I once had an epiphany that pushed me from being fully in Camp #2 intellectually rather strongly towards Camp #1. I hadn't heard about illusionism before, so it was quite a thing. Since then, I've devised probably dozens of inner thought experiments/arguments that imho +- prove Camp #1 to be onto something, and that support the hypothesis that qualia can be a bit less special than we make them out to be, however impossible that may seem. So I'm intellectually quite invested in the Camp #1 view.
Meanwhile, my experience has definitely not changed; my day-to-day me is exactly what it always was, so in that sense I definitely "experience" qualia just like anyone.
Moreover, it is just as hard as ever to take seriously, in day-to-day life, my intellectual belief that our 'qualia' might be a bit less absolutely special than we make them out to be. I.e. emotionally, I'm still +- 100% in Camp #2, and I guess I might be in a rather similar situation.
Just found proof! Look at the beautiful parallel, in Vipassana according to MCTB2 (or audio) by Daniel Ingram:
[..] dangerous term “mind”, [..] it cannot be located. I’m certainly not talking about the brain, which we have never experienced, since the standard for insight practices is what we can directly experience. As an old Zen monk once said to a group of us in his extremely thick Japanese accent, “Some people say there is mind. I say there is no mind, but never mind! Heh, heh, heh!” However, I will use this dangerous term “mind” often, or even worse “our mind”, but just remember when you read it that I have no choice but to use conventional language, and that in fact there are only utterly transient mental sensations. Truly, there is no stable, unitary, discrete entity called “mind” that can be located! By doing insight practices, we can fully understand and appreciate this. If you can do this, we’ll get along just fine. Each one of these sensations [..] arises and vanishes completely before another begins [..]. This means that the instant you have experienced something, you can know that it isn’t there anymore, and whatever is there is a new sensation that will be gone in an instant.
Ok, this may prove nothing at all, and I haven't even (yet) personally started trying to mentally observe what's described in that quote, but I must say, on a purely intellectual level, this makes absolutely perfect sense to me, precisely because of the thoughts I hoped to convey in the post.
(Not the first time I have the impression that some particular elements of the deep observations meditators, e.g. Sam Harris, explain can actually be grasped intellectually - but maybe only intellectually, maybe precisely not intuitively - by rather pure reasoning about the brain and some of its workings, or with some thought experiments. But in the above, I find the fit between my 'theoretical' post and the seeming practice insights particularly good.)
Does anyone know whether Jack Voraces will ever narrate more Significant Digits chapters, in addition to the 4 episodes found in the usual HPMOR JV narration podcasts? If not, does anyone have info on why the first 4 SD chapters are there in his voice, but the remaining ones are not?
If resources and opportunities are not perfectly distributed, the best advancements may remain limited to the wealthiest, making capital the key determinant of access.
Largely agree. Nuance: Instead, natural resources may quickly become the key bottleneck, even more so than what we usually denote 'capital' (i.e. built environment). So it's specifically natural resources you want to hold, even more than capital; the latter may become easier and cheaper to reproduce with the ASI, so yield less scarcity rent.
An exception is of course if you hold 'capital' that in itself consists of particularly many embodied resources instead of embodied labor (by 'embodied' I mean: inputs that were used in its creation): its value will reflect the scarce natural resources it 'contains', and may thus also be high.
If you ever have to go to the hospital for any reason, suit up, or at least look good.
[Rant alert; personal anecdotes aiming to emphasize the underlying issue:] Feeling less crazy reading that I'm not an outlier in wearing a suit when going to a doc. What has brought me there: Got pain in the throat but nothing can be seen (maybe as the red throat skin kind of by definition doesn't reveal red itchy skin or so?) = you're psychosomatic. Got weird twitches after eating sugar that no one can explain = they kick you out yelling 'go eat ice cream, you're not diabetic or anything' (literally!) - until next time you bring a video of the twitches, and until you eat chocolate before an appointment to be sure you can show them the weird twitches live. Try to understand at least a tiny bit about the hernia OP they're about to do on you (incl. something about probabilities)? Get treated with utter disdain.
In my country, medicine students were admitted based on how many Latin words they memorize or something, instead of on things correlated with IQ. Idk whether things are similar in other countries and may help explain the state of affairs.
I presume you wrote this with not least a phenomenally unconscious AGI in mind. This brings me to the following two separate but somewhat related thoughts:
A. I wonder about you [or any reader of this comment]: What would you conclude or do if you (i) yourself did not have any feeling of consciousness[1], (ii) then stumbled upon a robot/computer writing the above, while (iii) you also knew - or strongly assumed - that whatever the computer writes can be perfectly explained (also) based merely on the logically connected electron flows in its processor/'brain'?
B. I could imagine - a bit speculative:
- A today-style LLM reading more such texts might exactly be nudged towards caring about conscious beings in a general sense
- An independent, phenomenally unconscious alien intelligence, say stumbling upon us from the outside, might be rather quick to dismiss it
- ^
I'm aware of the weirdness of that statement; 'feeling not conscious' as a feeling itself implies feeling - or so. I reckon you still understand what I mean: Imagine yourself as a bot with no feelings etc.
Upvoted for bringing to attention useful terminology for that case which I wasn't aware of.
That said, too much "true/false", too much "should" in what is suggested, imho.
In reality, if I, say, choose not to drink the potion, I might still be quite utilitarian in usual decisions, it's just that I don't have the guts or so, or at this very moment I simply have a bit too little empathy with the trillion years of happiness for my future self, so it doesn't match up with my dreading the almost sure death. All this without implying that I really think we ought to discount these trillion years. I just am an imperfect altruist with my future self; I have fear of dying even if it's an imminent death, etc. So it's just a basic preference to reject it, not a grand non-utilitarian theory implied by it. I might in fact even prescribe that potion to others in some situations, but still not like to drink it myself.
So, I think it does NOT follow that I'd have to believe "what happens on faraway exoplanets or what happened thousands of years ago in history could influence what we ought to do here and now", at least not just from rejecting this particular potion.
Agree. I find it powerful especially about popular memes/news/research results. With only a bit of oversimplification: Give me anything that sounds like it is a sexy story to tell independently of underlying details, and I sadly have to downrate the information value of my ears' hearing it, to nearly 0: I know in our large world, it'd be told likely enough independently of whether it has any reliable origin or not.
Maybe no "should", but maybe an option to provide either (i) personal quick messages to OP, linked to the post, or (ii) anonymous public comments, could help. I guess (ii) would be silly all in all though. Leaves (i) as an option, anonymous or not anonymous. Not anonymous would make it close to existing PM; anonymous might indeed encourage low-effort rough explanations for downvoting.
It's crucial that some people get discouraged and leave for illegible reasons
Interesting. Can you elaborate why? I find it natural one should have the option to downvote anonymously & with no further explanation, but the statement still doesn't seem obvious to me.
I think you're on to something!
To my taste, what you propose is slightly more specific than required. What I mean: at least for me, the essential takeaway from your post is a bit broader than what you explicitly write*: a bit of paternalism by the 'state', incentivizing our short-term self to do stuff that is good for our long-term self. Which might become more important once abundance means the biggest enemies to our self-fulfillment are internal. So healthy internal psychology can become more crucial. And we're not used to taking this seriously, or at least not to actively tackling that internal challenge by seeking outside support.
So, the paternalistic incentives you mention could be cool.
Centering our school system, i.e. the compulsory education system, more around this type of somewhat more mindful-ish things could be another part.
Framing: I'd personally not so much frame it as 'supplemental income', even if it also acts as that: Income, redistribution, making sure humans are well fed even once unemployed, really shall come from UBI (plus, if some humans in the loop remain a real bottleneck, all scarcity value for their deeds to go to them, no hesitation), full stop. But that's really just about framing. Overall I agree, yes, some extra incentive payments would seem all in order. To the degree that the material wealth they provide still matters in light of the abundance. Or, even, indeed, in a world where bad psychology does become a major threat to the otherwise affluent society, it could even be an idea to withhold a major part of the spoils from useful AI, just to be able to incentivize us to also do our job of remaining/becoming sane.
*That is, at least I'm not spontaneously convinced exactly those specific aspects you mention are and will remain the most important ones, but overall such types of aspects of sound inner organization within our brain might be and remain crucial in a general sense.
an AI system passing the ACT - demonstrating sophisticated reasoning about consciousness and qualia - should be considered conscious. [...] if a system can reason about consciousness in a sophisticated way, it must be implementing the functional architecture that gives rise to consciousness.
This is provably wrong. This route will never offer any test of consciousness:
Suppose for a second that xAI in 2027, a very large LLM, stuns you by uttering C, where C = more profound musings about your and her own consciousness than you've ever even imagined!
For a given set of random variable draws R used in the randomized output generation of xAI's uttering, S the xAI structure you've designed (transformers neuron arrangements or so), T the training you had given it:
What is P(C | {xAI conscious, R, S, T})? It's 100%.
What is P(C | {xAI not conscious, R, S, T})? It's of course also 100%. Schneider's claims you refer to don't change that. You know you can readily track what each element within xAI is mathematically doing, how the bits propagate, and, if examining it in enough detail, you'd find exactly the output you observe, without resorting to any concept of consciousness or whatever.
As the probability of what you observe is exactly the same with or without consciousness in the machine, there's no way to infer from xAI's uttering whether it's conscious or not.
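In Bayesian terms, the same point (just a restatement of the above, writing 'conscious' for 'xAI conscious'):

$$\frac{P(\text{conscious} \mid C, R, S, T)}{P(\text{not conscious} \mid C, R, S, T)} \;=\; \underbrace{\frac{P(C \mid \text{conscious}, R, S, T)}{P(C \mid \text{not conscious}, R, S, T)}}_{=\,1} \;\cdot\; \frac{P(\text{conscious} \mid R, S, T)}{P(\text{not conscious} \mid R, S, T)}$$

With both likelihoods at 100%, the likelihood ratio is 1, so observing C leaves the odds exactly where your prior put them - which is the precise sense in which no such utterance can serve as a test.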
Combining this with the fact that, as you write, biological essentialism seems odd too, does of course create a rather unbearable tension that many may still be ignoring. When we embrace this tension, some of us start to raise illusionism-type questions, however strange those may feel (and if I dare guess, illusionist-type thinking may already be, or may grow to be, more popular than the biological essentialism you point out, although on that point I'm merely speculating).
Assumption 1: Most of us are not saints.
Assumption 2: AI safety is a public good.[1]
[..simple standard incentives..]
Implication: The AI safety researcher, eventually finding himself rather too unlikely to individually be pivotal on either side, may rather 'rationally'[2] switch to ‘standard’ AI work.[3]
So: A rather simple explanation seems to suffice to make sense of the big picture basic pattern you describe.
That doesn't mean the inner tension you point out isn't interesting. But I don't think very deep psychological factors are needed to explain the general 'AI safety becomes AI instead' tendency, which I had the impression the post was meant to suggest.
- ^
Or, unaligned/unloving/whatever AGI a public bad.
- ^
I mean: individually ‘rational’ once we factor in another trait - Assumption 1b: The unfathomable scale of potential aggregate disutility from AI gone wrong bottoms out into a constrained ‘negative’ individual utility in terms of the emotional value non-saint Joe places on it. So a 0.1 permille probability of saving the universe may individually rationally be dominated by mundane stuff like having a still somewhat cool and well-paying job or something.
- ^
The switch may psychologically be even easier if the employer had started out as actually well-intentioned and may now still have a bit of an ambiguous flair.
This is called a Windfall Tax.
Random examples:
VOXEU/CEPR Energy costs: Views of leading economists on windfall taxes and consumer price caps
Reuters Windfall tax mechanisms on energy companies across Europe
Especially with the 2022 Ukraine energy price spike, the notion's popularity surged.
Seems to me also a very neat way to deal with supernormal short-term profits due to market price spikes, in cases where supply is extremely inelastic.
I guess, and some commentaries suggest, that in actual implementation - with complex firm/financial structures etc., and with actual clumsy politics - it's not always as trivial as it might look at first sight, but it is feasible, and some countries managed to implement such taxes in the energy crisis.
[..] requires eating the Sun, and will be feasible at some technology level [..]
Do we have some basic physical-feasibility insights on this, or is this just speculation?
Indeed the topic I've dedicated the 2nd part of the comment to, as the "potential truth", as I framed it (and I have no particular objection to you making it slightly more absolutist).
This is interesting! And given you generously leave it rather open as to how to interpret it, I propose we should think the other way round than people usually might tend to, when seeing such results:
I think there's not even the slightest hint at any beyond-pure-base-physics stuff going on in LLMs revealing even any type of
phenomenon that resists [conventional] explanation
Instead, this merely reveals our limitations in tracking (or 'empathizing with') well enough the statistics within the machine. We know we have just programmed and bit-by-bit-trained into it exactly every syllable the LLM utters. Augment your brain with a few extra neurons or transistors or what have you, and that smart-enough version of you would be capable of perfectly understanding why, in response to the training you gave it, it spits out exactly the words it does.[1]
So, instead, it's interesting the other way round:
Realizations you describe could be a step closer to showing how a simple pure basic machine can start to be 'convinced' it has intrinsic value and so on - just the way we all are convinced of having that.
So AI might eventually bring illusionism nearer to us, even if I'm not 100% sure getting closer to that potential truth ends well for us. Or that, anyway, we'd really be able to fully buy into it even if it were to become glaringly obvious to any outsider observing us.
- ^
Don't misread that as me saying it's anyhow easy... just that, in the limit, basic (even if insanely large-scale and convoluted, maybe) tracking of the mathematics we put in would really bring us there. So, admittedly, don't take 'a few' more neurons literally; you'd need a huge ton instead..
Indeed. I thought it was relatively clear that with "buy" I meant to mostly focus on things we typically explicitly buy with money (for brevity, even for these I simplified a lot, omitting that shops are often not allowed to open 24/7, that some things like alcohol aren't sold to people of all ages, and in some countries not sold in every type of shop and/or not at all times).
Although I don't want to say that exploring how to port the core thought to broader categories of exchanges/relationships couldn't bring interesting extra insights.
I cannot say I've thought about it deep enough, but I've thought and written a bit about UBI, taxation/tax competition and so on. My imagination so far is:
A. Taxation & UBI would really be natural and workable, if we were choosing the right policies (though I have limited hope our policy making and modern democracy are up to the task, especially with the international coordination required). A few subtleties that come to mind:
- Simply tax high revenues or profits.
- No need to tax "AI (developers?)"/"bots" specifically.
- In fact, if AIs remain rather replicable/if we have many competing instances: Scarcity rents will be in raw factors (e.g. ores and/or land) rather than in the algorithms used to process them
- UBI to the people.
- International tax (and migration) coordination as essential.
- Else, especially if it's perfectly mobile AIs that earn the scarcity rents, we end up with one or a few tax havens that amass & keep the wealth to themselves
- If you have good international coordination, and can track revenues well, you may use very high tax rates, and correspondingly spread a very high share of global value added among the population.
- If, specifically, the world economy will be dominated by platform economies, make sure we deal with that properly, ensuring there's competition instead of lock-in monopoly
- I.e. if, say, we'd all want to live in metaverses, avoid everyone being forced to live in Meta's instead of choosing freely among competing metaverses.
Risks include:
- Expect the geographic revenue distribution to look foreign compared to today's, and potentially more unequal, with entire lands making zero net contribution in terms of revenue-earning value added
- Maybe ores (and/or some types of land) will capture the dominant share of value added, no longer the educated populations
- Maybe instead it's a monopoly or oligopoly, say with huge shares in Silicon Valley and/or its Chinese counterpart or what have you
- Inequality might exceed today's: Today poor people can become more attractive by offering cheap labor. Tomorrow, people deprived of valuable (i) ores or so, or (ii) specific, scarcity-rent earning AI capabilities, may be able to contribute zero, so have zero raw earnings
- Our rent-seeking economic lobbies, who successfully install their agents as top policy-makers and who lead us to vote for antisocial things, will have an ever stronger incentive to keep rents for themselves. Stylized example: We'll elect the supposedly-anti-immigration populist whose main deed is to make sure firms don't pay high enough taxes
- You can more easily land-grab than people-grab by force, so we may expect military land conquest to become more of a thing than in the post-war decades, when minds seemed the most valuable thing
- Human psychology. Dunno what happens with societies with no work (though I guess we're more malleable, able to evolve into a society that can cope with it, than some people think, tbc)
- Trade unions and the like, trying to keep their jobs somehow and finding pseudo-justifications for it, so the rest of society lets them do that.
B. Specifically to your following point:
I don't think the math works out if / when AI companies dominate the economy, since they'll capture more and more of the economy unless tax rates are high enough that everyone else receives more through UBI than they're paying the AI companies.
Imagine it's really at AI companies where the scarcity rents, i.e. profits, occur (as mentioned, that's not at all clear): Imagine for simplicity all humans still want TVs and cars, maybe plus metaverses, and AI requires Nvidia cards. By scenario definition, AI produces everything, and as in this example we assume it's not the ores that earn the scarcity rents, and the AIs are powerful at producing stuff from raw earth, we don't explicitly track intermediate goods other than the Nvidia cards the AIs also produce. Output is thus:
- AI output = 100 TVs, 100 cars, 100 Nvidia cards, 100 digital metaverses, say in $bn.
- Taxes = Profit tax = 50% (could instead call it income tax for AI owners; in reality it would all be a bit more complex, but overall it doesn't matter much).
- AI profit = 300 (= all output minus the Nvidia cards)
- People thus get $150bn; AI owners get $150bn as distributed AI profit after taxes
- People consume 50 TVs, 50 cars, 50 digital metaverses
- AI owners also consume 50 TVs, 50 cars, 50 digital metaverses
So you have a 'normal' circular economy that works. Not quite normal, e.g. we have simplified so that AI requires not only no labor but also no raw resources (or none with scarcity rent captured by somebody else). You can easily extend it to more complex cases (a small bookkeeping sketch of the numbers follows below).
In reality, of course, output will be adjusted, e.g. with different goods the rich like to consume instead of thousands of TVs per rich person, as happens already today in many forms; what the rich like to do with the wealth remains to be seen. Maybe fly around (real) space. Maybe get better metaverses. Or employ lots of machines to improve their body cells.
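A tiny sketch of the bookkeeping above, just to make the arithmetic explicit (variable names are mine; in this toy example profit equals final output, since the Nvidia cards are the only cost):

```python
# Toy circular-flow check for the example above (all figures in $bn).
output = {"TVs": 100, "cars": 100, "nvidia_cards": 100, "metaverses": 100}
intermediate = {"nvidia_cards": 100}      # used up by the AIs themselves

final_output = sum(output.values()) - sum(intermediate.values())  # 300 = AI profit here
profit_tax_rate = 0.5
ubi_to_people = profit_tax_rate * final_output                    # 150, paid out as UBI
profit_to_owners = final_output - ubi_to_people                   # 150 after tax

# Both groups spend their full income on the final goods (TVs, cars, metaverses),
# so demand equals supply and the circular flow closes.
assert ubi_to_people + profit_to_owners == final_output
print(ubi_to_people, profit_to_owners)    # 150.0 150.0
```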
C. Btw, the "we'll just find other jobs" argument is imho indeed overrated, and I think the bias, esp. among economists, can be very easily explained by looking at history (where these economists had been spot on) while realizing that, in the future, machines will no longer augment brains but replace them instead.
I find things like the "Gambling Self-Exclusion Schemes" of multiple countries - thanks for the hint - indeed a good example, corroborating that at least for some of the most egregious examples of addictive goods unleashed on the population, some action in the suggested direction is technically & politically feasible - how successful, tbc; looking fwd to looking into it in more detail!
Depends on what we call super-dumb - or where we draw the system borders of "society". I include the special interest groups as part of our society; they are the small wheels in it gearing us towards the 'dumb' outcome in the aggregate. But yes, the problem is simply not trivial, smart/dumb is too relative, so my term was not useful (just expressing my frustration with our policies & thinking, which your nice post reminded me of).
This is a good topic for exploration, though I don't have much belief that there's any feasible implementation "at a societal level".
Fair. I have instead the impression I see plenty of avenues. A bit embarrassingly: they are so far indeed not sufficiently structured in my head, require more detailed tinkering, exploring failure modes and avenues for addressing them in detail, plus they might require significant restructuring of the relevant markets, and, worst, I have insufficient time to explore them in much detail right now. But yes, it would remain to be shown & tested out as mentioned in the post, and I hope I can once explore/write about it a bit more. For now my ambition is: Look, that is indeed a serious topic to explore; we should at least be aware of the possibility to upgrade people's liberty by providing them 'only' alienable instead of inalienable rights to buy or consume. And start looking around as to what we might be able to do..
There are plenty of options at individual levels, mostly informal - commitments to friends and family, writing down plans and reviewing them later, etc.
An indicator of how good we are at using the "options at individual level" is how society looks; as explained, it doesn't look good (though, as caveat-ed in the post, admittedly I cannot say how much of what seems to be addressable by commitment is indeed solely commitment failure; though there is imho plenty of indirect and anecdotal evidence suggesting it contributes a sizeable chunk of it).
It's not clear at all why we would, in principle, enforce the wishes of one part of someone onto another part.
"in principle" right, in practice I think it's relatively simple. Nothing is really simple, but:
- I think we could quite easily pull out of our hands (not meant as derogatory as it sounds) a bit of analytical theoreming to show, under reasonably general conditions fitting some of the salient facts about our 'short-term misbehavior', the great benefits, say in medium- & long-term aggregate utility or something, even strongly discounted if you wish, of reining in our short-term self. Discuss under what conditions the conclusion holds, then take away: without crazy assumptions, we often see benefits from supporting the not-short-termie. I actually think we might even find plenty of existing theory of commitment, discounting etc., doing just that or things close to doing just that.
- I can personally not work on that in detail atm, though, and I think in practice the case appears so blatantly obvious when looking at a ton of stylized facts in that domain (see post, where I mention only a few of the most obvious ones) that it's worthwhile to start thinking about markets differently now, to start searching for solutions, while some shiny theoretical underpinning remains still pending.
- Moreover, I think we societally already accept the case for that thing.
- For example, I think paternalistic policies might have a much harder time forcing or pressing us into, say, saving for later (or maybe also into not smoking, via prohibitions or taxes etc.) if it wasn't for many of us silently agreeing that actually we're happy for the state to force us (or even the dear ones around us) to do something that part of us internally (that is of course long-termie) actually prefers, while short-termie might just blow it instead.
- In that paternalistic domain we currently indeed mainly rely on (i) external coercion and officially explain it as (ii) "the state has to clean up if I get old, that's why it forces me to save for a pension", but note how we already have some policies that may even more specifically best be explained by implicitly acknowledging the superiority of the long-term self: While multiple compulsory pension schemes keep me safe by covering my basic old-age expenses, the state strongly incentivizes me to make voluntary pension contributions beyond what's necessary to cover my basic living costs. If we didn't, in practice, as a society, somehow agree with the idea that long-termie should have more of a say than he naturally has, I think it could be particularly difficult to get society to just stand by while I 'evade' taxes by using that scheme.[1]
- ^
That voluntary savings scheme incentivizes saving-until-retirement by removing earnings & wealth taxes. It is on top of the compulsory schemes that are meant to cover basic living costs in old age (this has become a bit harder today but used to be, I think, simpler in the past, when the voluntary policy also already existed).
Spot on! Let's zoom out and see that we have (i) created a never-before-seen food industry that could feed us healthily at unprecedentedly low cost, yet (ii) we end up systematically killing ourselves with all that. We're super dumb as a society to continue acting as if nothing, nothing on a societal level, had to be done.
Btw, imho a more interesting, but not really much more challenging, extension of your case is if overall what the orphans produce is actually very valuable, say creating utility of 500 $/day for ultimate consumers, but mere market forces, competition between the firms or businessmen, mean market prices for the goods produced still end up at only 50.01c/day, while the labor-market-clearing wage for the destitute orphans is 50c/day.
Even in this situation, commonsense 'exploitation' is a straightforwardly applicable and +- intelligible concept:
- To a degree, the firms or businessmen become somewhat irrelevant intermediaries. One refuses to do the trade? Another one will jump in anyway... Are they exploitative or not? Depends a bit on subtle details, but individually they have little leeway to change anything in the system.
- The rich society as an aggregate who enjoys the 500 $/day worth items as consumers, while having, via their firms, had them produced for 50.01c/day by the poor orphans with no outside options, is of course an exploitative society in common usage of the term. Yes, the orphans may be better off than without it, but commoners do have an uneasy feeling if they see our society doing that, and I don't see any surprise in it; indeed, we're a 'bad' society if we just leave it like that and don't think about doing something more to improve the situation.
- The fact that some in society take the wrong conclusion from the feeling of unease about exploitation, and think we ought to stop buying the stuff from the orphans, is really not the 'fault' of the exploitation concept; it is our failure to imagine (or be willing to bite the bullet of) a beyond-the-market solution, namely the bulk sharing of riches with those destitute orphan workers or what have you. (I actually now wonder whether that may be where the confusion that imho underlies the OP's article is coming from: Yes, people do draw weird econ-101-ignoring conclusions when they detect exploitation, but this doesn't mean they interpret the wrong things as exploitation. It means their feel-good 'solution' might backfire; instead they should track the consequences of alternatives and see that the real solution to the indeed existing exploitation problem isn't as simple as going to the next overpriced pseudo-local pseudo-sustainable hipster shop, but is to start doing something more directly about the sheer poverty of their fellow beings far or near.)
If there's a situation where a bunch of poor orphans are employed for 50c per grueling 16 hour work day plus room and board, then the fact that it might be better than starving to death on the street doesn't mean it's as great as we might wish for them. We might be sad about that, and wish they weren't forced to take such a deal. Does that make it "exploitation?" in the mind of a lot of people, yeah. Because a lot of people never make it further than "I want them to have a better deal, so you have to give it to them" -- even if it turns out they're only creating 50.01c/day worth of value, the employer got into the business out of the goodness of his heart, and not one of the people crying "exploitation!" cares enough about the orphans to give them a better deal or even make they're not voting them out of a living. I'd argue that this just isn't exploitation, and anyone thinking it is just hasn't thought things through.
Notice how you had to create a strawman of what people commonsensically call exploitation. The person you describe does exactly NOT seem to be employing the workers merely for "gaining disproportionate benefit from someone’s work because their alternatives are poor". In your example, informed about the situation, with about 0 sec of reflection, people would understand him to NOT be exploitative. Of course, people usually would NOT blame Mother Theresa for having poor people work in her facilities and earning little, IF Mother Theresa did so just out of a good heart, without ulterior motives, without deriving disproportionate benefit, and while paying 99.98% of receipts to staff, even if that was little.
Note, me saying exploitation is 'simple' and is just what it is even if there is a sort of tension with econ 101 doesn't mean every report about supposed exploitation would be correct, and I never maintained it wouldn't be easy - with the usual one-paragraph newspaper reports - to mislead the superficial mob into seeing something as exploitation even when it isn't.
It remains really easy to make sense of usual usage of 'exploitation' vis a vis econ 101 also in your example:
- The guy is how you describe? No hint of exploitation, and indeed a good deal for the poor.
- The situation is slightly different, the guy would earn more and does it merely to get as rich as possible? He's an exploitative businessman. Yes, the world is better off with him doing his thing, but of course he's not a good* man. He'd have to e.g. share his wealth one way or another in a useful way if he really wanted to be. Basta. (*usual disclaimer about the term..)
If a rich person wants to help the poor, it will be more effective so simply help the poor -- i.e. with some of their own resources. Trying to distort the market leads to smaller gains from trade which could be used to help the poor. So far so good.
I think we agree on at least one of the main points thus.
Regarding
"Should" is a red flag word
I did not mean to invoke a particularly heavy philosophical absolutist 'ought' or anything like that with my "should". It was instead simply a sloppy shortcut - and you're right to call that out - to say the banal: the rich person considering whether she's exploiting the poor and/or whether it's a win-win might want to consider - what tends to be surprisingly often overlooked - that the exploitation-vs.-beneficial-trade question may have no easily satisfying solution as long as she keeps the bulk of her riches to herself vis-a-vis the sheer poverty of her potential poor interlocutor.
But with regards to having to (I add the emphasis):
That's not to say we have to give up on caring about all exploitation and just do free trade, but it does mean that if we want to have both we have to figure out how to update our understanding of exploitation/economics until the two fit.
I think there's not much to update. "Exploitation" is a shortcut for a particular, negative feeling we humans tend to naturally get from certain type of situation, and as I tried to explain, it is a rather simple thing. We cannot just define that general aversion away just to square everything we like in a simple way. 'Exploitation' simply is exploitation even if it is (e.g. slightly) better for the poor than one other unfair counterfactual (non-exploitation without sharing the unfairly* distributed riches), nothing can change that. Only bulk sharing of our resources may lead to a situation we may wholeheartedly embrace with regards to (i) exploitation and (ii) economics. So if we're not willing to bite the bullet of bulk-sharing of resources, we're stuck with either being unhappy about exploitation or about foregoing gains of trade (unless we've imbibed econ 101 so strongly that we've grown insensitive to 'exploitation' at least as long as we don't use simple thought experiments to remind ourselves how exploitative even some win-win trades can be).
*Before you red-flag 'unfair' as well: Again, I'm simply referring to the way people tend to perceive things, on average or so.
Your post introduces a thoughtful definition of exploitation, but I don’t think narrowing the definition is necessary. The common understanding — say "gaining disproportionate benefit from someone’s work because their alternatives are poor" or so — is already clear and widely accepted. The real confusion lies in how exploitation can coexist with voluntary, mutually beneficial trade. This coexistence is entirely natural and doesn’t require resolution — they are simply two different questions. Yet neither Econ 101 nor its critics seem to recognize this.
Econ 101 focuses entirely on the mutual benefit of trade, treating it as a clear win-win, and dismisses concerns about exploitation as irrelevant. Critics, by contrast, are so appalled by the exploitative aspect of such relationships that they often deny the mutual benefit altogether. Both sides fail to see that trade can improve lives while still being exploitative. These are not contradictions; they are two truths operating simultaneously.
For (stylized) example, when rich countries (or thus their companies) offshore to places like Bangladesh or earlier South Korea, they often offer wages that are slightly better than local alternatives — a clear improvement for workers. However, those same companies leverage their stronger bargaining position to offer the bare minimum necessary to secure labor, stopping far short of providing what might be considered fair compensation. This is both a win-win in economic terms and exploitative in a moral sense. Recognizing this duality doesn’t require redefining exploitation — it simply requires acknowledging it.
This misunderstanding leads to counterproductive responses. Economists too quickly dismiss concerns about exploitation, while critics focus on measures like boycotts or buying expensive domestic products, which may (net) harm poor offshore workers. I think Will MacAskill also noted this issue in Doing Good Better: the elephant in the room is that the rich should help the poor independently of the question of the labor exchange itself, i.e. the overwhelming moral point is that, if we care, we should simply donate some of our resources.
Exploitation isn’t about minor adjustments to working conditions or wages. It’s about recognizing how voluntary trade, while beneficial, can still be exploitative if the party with the excessively limited outside options has to put in unjustifiably much while gaining unjustifiably little. This applies to sweatshop factories just as much as to surrogate motherhood or mineral resource mining - and maybe to Bob in your example, independently of the phone call details.
Would you personally answer "Should we be concerned about eating too much soy?" with "Nope, definitely not", or do you just find it a reasonable gamble to eat the very large qty of soy you describe?
Btw, thanks a lot for the post; MANY parallels with my past as a more-serious-but-careless vegan, until my body showed clear signs of issues that I recognized only late, as I'd never have believed anyone that a healthy vegan diet is that tricky.
Not all forms of mirror biology would even need to be restricted. For instance, there are potential uses for mirror proteins, and those can be safely engineered in the lab. The only dangerous technologies are the creation of full mirror cells, and certain enabling technologies which could easily lead to that (such as the creation of a full mirror genome or key components of a proteome).
Once we get used to creating and dealing with mirror proteins, and once we get used to designing & building cells - I don't know when that happens - maybe adding 1+1 together will also become easy. This suggests that, assuming upsides are limited enough (?), it may be better to already try to halt even any form of mirror biology research.
Taking what you write as an excuse to nerd out a bit about Hyperbolic Discounting
One way to paraphrase esp. some of your ice cream example:
Hyperbolic discounting - the habit of valuing this moment a lot while abruptly (not smoothly exponentially) discounting everything coming even just a short while after - may in a technical sense be 'time inconsistent', but it's misguided to call it 'irrational' in the common usage of the term: My current self may simply care about itself distinctly more than about the future selves, even if some of these future selves are forthcoming relatively soon. It's my current self's preference structure, and preferences are not rational or irrational, basta.
I agree and had been thinking this, and I find it an interesting counterpoint to the usual description of hyperbolic discounting as 'irrational'.
It is a bit funny also as we have plenty of discussions trying to explain when/why some hyperbolic discounting may actually be "rational" (ex. here, here, here), but I've not yet seen any so fundamental (and simple) rejection of the notion of irrationality (though maybe I've just missed it so far).
(Then, with their dubious habit of using common terms in subtly misleading ways, fellow economists may rebut that we have simply defined irrationality in this debate as meaning having non-exponential, i.e. time-inconsistent, preferences, justifying the term 'irrationality' here quasi by definition)
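For reference, the usual formalization behind 'time-inconsistent' (a minimal sketch; the specific numbers below are just ones I picked for illustration):

$$D_{\text{exp}}(t) = \delta^{t}, \qquad D_{\text{hyp}}(t) = \frac{1}{1 + kt}$$

Under exponential discounting, the relative weight of any two future dates is the same no matter when you evaluate them, so plans never reverse. Under hyperbolic discounting with, say, $k = 1$: a reward of 1.2 at $t = 11$ beats a reward of 1.0 at $t = 10$ when viewed from $t = 0$ (since $1.2/12 > 1.0/11$), but the preference flips when viewed from $t = 9$ (since $1.2/3 < 1.0/2$). It is only this flip - not the strong caring-about-now itself - that the technical label 'time-inconsistent' refers to, which is exactly why calling the preference structure itself 'irrational' is a stretch.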
Spurious correlation here, big time, imho.
Give me the natural content of the field and I bet I can easily predict whether it may or may not have a replication crisis, w/o knowing the exact type of students it attracts.
I think it's mostly that the fields where bad science may be sexy and less-trivial/unambiguous to check, or, those where you can make up/sell sexy results independently of their grounding, may, for whichever reason, also be those that attract the non-logical students.
I agree though about the mob overwhelming the smart outliers, but I just think how much that mob creates a replication crisis is at least in large part dependent on the intrinsic nature of the field rather than on the exact IQs.
I wouldn't automatically abolish all requirements; maybe I'm not good enough at searching, but to the degree I'm not an outlier:
- With internet we have reviews, but they're not always trustworthy, and even if they are, understanding/checking/searching reviews is costly, sometimes very costly.
- There is value in being able to walk up to the next-best random store for a random thing and being served by a person with a minimum standard of education in the trade. Even for rather trivial things.
This seems underappreciated here.
Flower safety isn't a thing. But having the next-best florist for sure be a serious florist person to talk to has serious value. So, I'm not even sure that for something like flowers I'm entirely against any sort of requirements.
So it seems to me more a question of balance, of what exactly to require in which trade, and that's a tricky one, but in some places I've lived it seems to have been handled mostly +- okay. Admittedly simply according to my shallow glance at things.
I've also lived in countries that seem more permissive, requiring less job training, but I clearly prefer the customer experience in those that regulate, despite higher prices.
Then, I wouldn't want the untrained/unexamined florist to starve or even simply become impoverished. But at least in some countries, social safety net mostly prevents that.
Great you bring up Hoffman; I think he deserves serious pushback.
He proves exactly two things:
- Reality often is indeed not how it seems to us - as shown by how many take his nonsense at face value. I would normally not use such words, but there are reasons in his case.
- In as far as he has come to truly believe all he claims (not convinced!), he'd be a perfect example of self-serving beliefs: how his overblown claims manage to take over his brain, just as he has realized he can sell them with total success to the world, despite their absurdity.
Before I explain this harsh judgement, a caveat: I don't mean to defend what we perceive. Let's be open to a world very different from how it seems. Also, maybe Hoffman has many interesting points. But this doesn't mean his claims are not completely overblown - which I'm convinced they are, after having listened to a range of interviews with him and having gone to the length of reading his original papers.
Here three simple points I find compelling to put his stuff into perspective:
- You find a yawning gap between his claims and what he has really 'proven' in his papers. Speech: "We have mathematically proven there's absolutely zero chance blabla". Reality: He used a trivial evolutionary toy model and found that a reduced-form representation of a very specific process may be more economical/probable than a more complex representation of the 'real' process. It nicely underlines that evolution may take shortcuts. Yes, we're crazy about sex instead of about "creating children", or we want to eat sugary stuff as an ancient proxy for an actually healthy diet, which in today's world doesn't function anymore, and many more things where we've not evolved to perceive/account for all the complexity. Problem? This is of course nothing new, and, more importantly, it doesn't prove anything more than that.
- I like the following analogies:
- Room-Mapping Robots vs. Non-Mapping Robot cleaners (Roomba stuff). A not too far-fetched interpretation of Hoffman would be: An (efficient) vacuum robot cannot map the room; it's always more efficient to simply have reduced-form rules/heuristics for where to move next. Well, it's nice to see how the market has evolved: Semi-randomly moving robots made the start, but it turns out that if you want efficient robots, you make them actually map the territory; hence today LiDAR/SLAM are becoming dominant.
- Being exposed to a cat, I realize she seems much more Hoffmanesque than us. When she pees on the ground, or smells another weird thing, she does her 'heap earth/sand over it' leg moves, not realizing there's just parquet, so her move doesn't actually cover the stink. It's a bit funny, then, that with Hoffman, the species that has overcome reliance on un-understanding instincts in so many (not all) domains is the one that ends up claiming it could never be possible to overcome mere reduced-form instincts in any domain whatsoever.
- Trivial disproof by contradiction of Hoffman's claim of having absolutely proven the world could not be the way we think: Assume the world WAS just how it looks to us. Imagine there WAS then the billion-year evolutionary process that we THINK has happened. Is there anything in Hoffman's proofs showing that then there could only be 'dumans', like humans but perceiving in 2d instead of 3d, or in some other wrong-way-with-no-resemblance-to-reality? Nope, absolutely not. His claims just obviously don't hold up.
Broader discussions highlighting, I think, in part some fraudulent aspect of Hoffman: The Case For Reality, or also the Quora thread Is Donald Hoffman’s interface theory of perception true?
In sum: His popularity proves an evolutionary theory of information in which what floats around is not what is shown to be correct but what is appealing, with the distracting voices debunking it being entirely ignored. I imagine him laughing about this fact when thinking about his own success: "After all, my claim seems to not be that wrong, they do not perceive reality, mahahaaa". According to Google, there are not merely a million people reading him, but literally millions of webpages featuring him.
Happy to be debunked in my negative reading of him :)
Musings about whether we should have a bit more sympathy for skepticism re price gouging, despite all. Admittedly with no particular evidence to point to; keen to see whether my basic skepticism can easily be dismissed.
Scott Sumner points out that customers very much prefer ridesharing services that price gouge and have flexible pricing to taxis that have fixed prices, and very much appreciate being able to get a car on demand at all times. He makes the case that liking price gouging and liking the availability of rides during high demand are two sides of the same coin. The problem is (in addition to ‘there are lots of other differences so we have only weak evidence this is the preference’), people reliably treat those two sides very differently, and this is a common pattern – they’ll love the results, but not the method that gets those results, and pointing out the contradiction often won’t help you.
I think as economists we can be too confident about how obvious it'd be that allowing 'price gouging' should be the standard in all domains. Yes, price controls are often hugely problematic. But could full liberty here not also be disastrous for the standard consumer? It depends on a lot of factors; maybe in many domains full liberty works just fine. Maybe not everywhere at all.
Yes, "Prices aren’t just a transfer between buyer and seller." - but they're also that. And in some areas, it is easily imaginable how an oligopoly or a cartel, or simply dominant local supplier(s) benefit from the possibility to supply at any price without alleviating scarcity - really instead by creating scarcity.
The sort of cynical behavior of Enron comes to mind; can such firms not more easily create havoc on markets if they have full freedom to set prices at arbitrary levels? I'd not be surprised if we have to be rather happy about power sellers [in many locations] not being allowed to arbitrarily increase prices (withhold capacity) the way they'd like. Yes, in the long term we could theoretically see entry of new capacity (or storage) into the market if prices were often too high, and that could prevent capacity issues, but the world is too heterogeneous to expect smoothly functioning markets in such a scenario; maybe it's easier to organize backup capacities in different ways. Similar for gasoline reserves; that's a simple thing to organize. Yes, politicians will make it expensive, inefficient, wrongly sized; but in many locations in the world maybe still better than having no checks and balances at all in the market, just for the hope that the private market might create more reserves.
And, do we really need the toilet paper sellers[1] plausibly stirring up toilet paper supply fears in the slightest crisis of anything, if they know they can full-on exploit the ensuing self-fulfilling prophecy of the toilet-paper-run, while instead everything might have played out nicely in the absence of any scarcity-propaganda?
Or put differently, with a slightly far-fetched but maybe still intuition-pumping example: We hear Putin makes/made so much money from high gas prices that theoretically it could be an entire rationale for the war in the first place. Now this will not have been quite the case, but still: we do not know how many individual micro-Putin events - where an exploitative someone would have had their incentive to create havoc in their individual market to benefit from the troubles they stirred - the anti-gouging laws may have prevented. Maybe few, maybe many?
These points make me wonder whether the population, with their intuitions, is once again not as stupid as we think, and our theory a bit too simple. Yes, we all like the always-available taxis, but I'm not sure it practically works out just as smoothly with all other goods/market structures. But maybe I'm wrong, and in the end it's obvious that price controls themselves have such bad repercussions anyway.
- ^
Placeholder. May replace with other goods that fit the story.
I actually appreciate the overall take (although I'm not sure how many would not have found most of it simply common sense anyway), but: a bit more caution with the stats would have been great.
- Just-about-significant ≠ 'insignificant and basta'. While you say the paper shows that up to and incl. BMI 27 there's no 'effect' (and concluding on causality is anyway problematic here, see below), all the data provided in the graph you show and in the table of the paper suggest BMI 27 has a significant or nearly significant (at 95%..) association with death even in this study. You may instead want to say the factor is not huge (or small compared to much larger BMI variations), although the all-cause point-estimate mortality factor of roughly 1.06 for already that BMI is arguably not trivial at all: give me something that, as central-albeit-imprecise estimate, increases my all-cause mortality by 6%, and I hope you'd accept if I politely refused, explaining you propose something that seems quite harmful, maybe even in those outcomes where I don't exactly die from it.
- Non-significance ≠ No-Effect. Even abstracting from the fact that the BMI 27 data is actually significant or just about so: a "not significant" reduction in deaths for BMI 18-27 in the study wouldn't mean, as you claim, that it "will not extend your life". It means the study was too imprecise to be 95% or more sure that there's a relationship. Without a strong prior to the contrary, the point estimate, or even any value up to the upper CI bound, cannot be excluded at all as describing the 'real' relationship (a small numeric sketch follows after this list).
- Stats lesson 0: Association ≠ Causality. The paper seems to purposely talk about association, mentioning some major potential issues with interfering unobserved factors already in the Abstract, and there are certainly a ton of confounding factors that may well bias the results (it would seem rather unnatural to expect people who work towards having a supposedly-healthy BMI not to behave differently on average in any other health-relevant way than people who may be working less towards such a BMI).
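To make the 'non-significance ≠ no effect' point concrete, a tiny sketch; only the ~1.06 point estimate is taken from the discussion above, while the CI numbers are invented purely for illustration:

```python
# Hypothetical illustration of "non-significance != no effect".
import math

hr_point = 1.06                  # point-estimate hazard ratio (roughly, per the discussion above)
ci_low, ci_high = 0.99, 1.13     # invented 95% CI that just crosses 1.0

# Normal approximation on the log scale: recover standard error and two-sided p-value.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
z = math.log(hr_point) / se
p_two_sided = math.erfc(abs(z) / math.sqrt(2))

print(f"p = {p_two_sided:.2f}")  # ~0.08: "not significant" at the 5% level...
# ...yet the central estimate is still a 6% higher mortality, and nothing up to a
# 13% increase is excluded by such a CI; "no effect" (HR = 1.0) is just one value in it.
```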
Agree that cued FNs would often be a useful innovation, one I've not yet seen. Nevertheless, this statement
So, if you wonder whether you'd care for the content of a note, you have to look at the note, switching to the bottom of the page and breaking your focus. Thus the notion that footnotes are optional is an illusion.
ends with a false conclusion; most footnotes in texts I have read were optional, and I'm convinced I'm happy not to have read most of them. FNs, already as they are, are thus indeed highly "optional" and potentially very helpful - in many, maybe most, cases, for many, maybe most, readers.
That could help explain the wording. Though the way the tax topic is addressed here I have the impression - or maybe hope - the discussion is intended to be more practical in the end.
A detail: I find the "much harder" in the following unnecessarily strong, or maybe also simply the 'moral claim' yes/no too binary (all emphases added):
If the rich generally do not have a moral claim to their riches, then the only justification needed to redistribute is a good affirmative reason to do so: perhaps that the total welfare of society would improve [..]
If one believes that they generally do have moral claim, then redistributive taxation becomes much harder to justify: we need to argue either that there is a sufficiently strong affirmative reason to redistribute that what amounts to theft is nevertheless acceptable, or that taxation is not in fact theft under certain circumstances.
What we want to call 'harder' or 'much harder' is of course a matter of taste, but to the degree that it reads like meaning 'it becomes (very) hard', I'd say instead:
It appears to be rather intuitive to agree to some degree of redistributive taxation even if one assumed the rich had mostly worked hard for their wealth and therefore supposedly had some 'moral claim' to it.
For example, looking at classical public finance 101, I see textbooks & teachers (some definitely not so much on the 'left') readily explaining to their students (definitely not systematically full utilitarians) why concave utility means we'd want to tax the rich, without even hinting at the rich not 'deserving' their incomes, and the overwhelming majority of students rather intuitively agreeing with the mechanism, as it seems to me from observation.
Core claim in my post is that the 'instantaneous' mind (with its preferences etc., see post) is - if we look closely and don't forget to keep a healthy dose of skepticism about our intuitions about our own mind/self - sufficient to make sense of what we actually observe. And given this instantaneous mind with its memories and preferences is stuff we can most directly observe without much surprise in it, I struggle to find any competing theories as simple or 'simpler' and therefore more compelling (Occam's razor), as I meant to explain in the post.
As I make very clear in the post, nothing in this suggests other theories are impossible. For everything there can of course be (infinitely) many alternative theories available to explain it. I maintain the one I propose has a particular virtue of simplicity.
Regarding computationalism: I'm not sure whether you meant a very specific 'flavor' of computationalism in your comment; but for sure I did not mean to exclude computationalist explanations in general; in fact I've defended some strong computationalist position in the past and see what I propose here to be readily applicable to it.
I'm sorry, but I find you're nitpicking on words out of context rather than engaging with what I mean. Maybe my EN is imperfect, but I think not that unreadable:
A)
The word "just" in the sense used here is always a danger sign. "X is just Y" means "X is Y and is not a certain other thing Z", but without stating the Z.
... 'just' might sometimes be used in such an abbreviated way, but here, the second part of my very sentence itself readily says what I mean with the 'just' (see "w/o meaning you're ...").
B)
You quoting me: "It is equally all too natural for me to still keep my specific (and excessive) focus & care on the well-being of my 'natural' successors, i.e. on what we traditionally call"
You: Too natural? Excessive focus and care? What we traditionally call? This all sounds to me like you are trying not to know something.
Recall, as I wrote in my comment, that I'm trying to address "why care [under my stated views], even about 'my' own future". Let me rephrase the sentence you quote in a paragraph that avoids the three elements you criticize; I hope the meaning becomes clearer then:
Evolution has ingrained into my mind a very strong preference to care for the next-period inhabitant(s) X of my body. This deeply ingrained preference to preserve the well-being of X tends to override everything else. So, however much my reflections suggest to me that X is not as unquestionably related to me as I would instinctively have thought before closer examination, I will not be able to give up my commonly observed preference for doing (mostly) what's best for X, in situations where there is no cloning or anything of the like going on.
(You can safely ignore "(and excessive)". With it, I just meant to mention in passing that we also tend to be too egoistic; our strong specific focus on (or care for) our own body's future is not good for the world overall. But this is quite a separate thing.)
Thanks! In particular also for your more-kind-than-warranted hint at your original comment without accusing me of theft!! Especially as I now realize (or maybe realize again) that your sleep-clone-swap example, which I love as a perfectly concise illustration, had also come with at least an "I guess"-caveated "it is subjective", which in some sense already contains a core part of the conclusion/claim here.
I should also have picked up your 'stream-of-consciousness continuity' vs. 'substrate/matter continuity' terminology. Finally, the Ship of Theseus question, with "Temporal parts" vs. "Continued identity", would also be good links, although I'd spontaneously be inclined to dismiss part of the discussion of these as questions 'merely of definition' - right up until we get to the question of the mind/consciousness, where it seems to me to become more fundamentally relevant (although, maybe, and ironically, after relegating the idea of a magical persistent self, one could say that even here it ends up slightly closer to that 'merely a question of definition/preference' domain).
Btw, I'll now take your own link-to-comment and add it to my post - thanks if you can let me know where I can create such links; I remember looking and not finding the option ANYWHERE, even on your own LW profile page.
Btw, regarding:
it would not seem to have made any difference and was just a philosophical recreation
Mind you, in this discussion about cloning thought experiments I'd find it natural that there are not many tangible consequences right now, even if we did find a satisfying answer to some of the puzzling questions around the topic.
That said, I guess I'm not the only one here with a keen intrinsic interest in understanding the nature of the self even absent tangible & direct implications, or even if those implications remain rather subtle at this very moment.
I obviously still care for tomorrow, as is perfectly in line with the theory.
I take you to imply that, under the hypothesis emphasized here - that the self is not a unified long-term self the way we tend to imagine - one would have to logically conclude something like: "why care then, even about 'my' own future?!". This is absolutely not implied:
The questions around which we can get "resolving peace" (see context above!) refer to things like: if someone came along proposing to clone/transmit/... you, what to do? We may of course find peace about that question (which I'd say I have for now) without giving up caring about our 'natural' successors in ordinary life.
Note how you can still have particular care for your close kin or so after realizing your preferential care about these is just your personal (or our general cultural) preference w/o meaning you're "unified" with your close kin in any magical way. It is equally all too natural for me to still keep my specific (and excessive) focus & care on the well-being of my 'natural' successors, i.e. on what we traditionally call "my tomorrow's self", even if I realize that we have no hint at anything magical (no persistent super-natural self) linking me to it; it's just my ingrained preference.
The original mistake is that feeling of a "carrier for identity across time" - for which, upon closer inspection, we find no evidence, and which we thus have to let go of. Once you realize that you can explain all you observe and all you feel with merely, at any given time, your current mind, including its memories and aspirations for the future, but without any further "carrier for identity", i.e. without any super-material valuable extra soul, there is resolving peace about this question.
The upload +- by definition inherits your secret plan and will thus do your jumps.
Good decisions need to be based on correct beliefs as well as values.
Yes, but here the right belief is the realization that what connects you to what we traditionally called your future "self" is nothing supernatural, i.e. no super-material unified continuous self of extra value: we don't have any hint of such stuff; we can explain your feeling about such things all too well as fancy brain instincts, akin to seeing the objects in a 24 FPS movie as 'moving' (not to say 'alive'); and we know all too well that we could theoretically make you feel you've experienced your past as a continuous self while you were in fact nano-assembled a micro-second ago with exactly the right memories inducing this belief/'feeling'. So, due to the absence of this extra "self": "you" are simply this instant's mind we currently observe from you. Now, crucially, this mind obviously has a certain regard, hopes, and plans for, in essence, what happens with your natural successor. In the natural world, it turns out to be perfectly predictable from the outside who this natural successor is: your own body.
In situations like those imagined in cloning thought experiments, by contrast, it suddenly is less obvious from the outside whom you'll consider your most dearly cared-for 'natural' (or now less obviously 'natural') successor. But as the only thing that in reality connects you with what we traditionally would have called "your future self" is your own particular preferences/hopes/cares toward that elected future mind, there is no objective rule to tell you from the outside which one you have to consider the relevant future mind. The relevant one is the one you find relevant. This is very analogous to, say, being in love: the one 'relevant' person in a room for you to save first in a fire (if you're egoistic about you and your loved one) is the one you (your brain instinct, your hormones, or whatever) picked; you don't have to ask anyone outside whom that should be.
so if there is some fact of the matter that you don't survive destructive teleportation, you shouldn't go for it, irrespective of your values
The traditional notion of "survival" - insofar as it invokes a continuous, integrated "self" over and above the succession of individual ephemeral minds with forward-looking preferences - must indeed be put into perspective, just like that long-term "self" itself.
There's a theory that personal identity is only ever instantaneous...an "observer-moment"... such that as an objective fact, you have no successors. I don't know whether you believe it. If it's true, you epistemically-should believe it, but you don't seem to believe in epistemic norms.
There's another, locally popular theory that the continuity of personal identity is only about what you care about. (It either just is that, or it needs to be simplified to that...it's not clear which). But it's still irrational to care about things that aren't real...you shouldn't care about collecting unicorns...so if there is some fact of the matter that you don't survive destructive teleportation, you shouldn't go for it, irrespective of your values.
Thanks. I'd be keen to read more on this if you have links. I've wondered to what degree the relegation of the "self" I'm proposing (or that may have been proposed in a similar way in Rob Bensinger's post, and maybe before) is related to what we always hear about 'no self' from the more meditative crowd, though I'm not sure there's a link there at all. Either way, I'd be keen to read people who have proposed theoretical things in a similar direction.
There's a theory that personal identity is only ever instantaneous...an "observer-moment"... such that as an objective fact, you have no successors. I don't know whether you believe it.
On the one hand, 'no [third-party] objective successor' makes sense. On the other hand: I'm still so strongly programmed to absolutely want to preserve my 'natural' [unobjective but ingrained in my brain..] successors that the lack of an 'outside-objective' successor doesn't impact me much.[1]
[1] I think a simple analogy here, for which we can stay with the traditional view of the self, is: objectively, there's no reason I should care about myself so much, or about my close ones; my basic moral theory would ask me to be a bit less kind to myself and kinder to others, but given my wiring I just don't manage to behave so perfectly.
Oh, it's much worse. It is epistemic relativism. You are saying that there is no one true answer to the question and we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.
Nice challenge! There's no "epistemic relativism" here, even if I see where you're coming from.
First recall the broader altruism analogy: would you say it's epistemic relativism if I tell you that you can simply look inside yourself and see freely how much you care about, and how closely connected you feel to, people in a faraway country? You surely wouldn't reproach me for that; you surely agree it's your own 'decision' (or intrinsic inclination or so) that determines how much weight or care you personally put on those persons.
Now, remember the core elements I posit. "You" are (i) your mind of right here and now, including (ii) its tendency for deeply felt care & connection to your 'natural' successors, and that's about all there is to be said about you (plus there's memory). From this everything follows. It is evolution that has shaped us to shortcut the standard physical 'continuation' of you in coming periods as a 'unique entity' in our mind, and has made you typically care sort of '100%' about your first few seconds' worth of forthcoming successors [in analogy: just as nature has shaped you to (usually) care tremendously for your direct children or siblings]. Now there are (hypothetically) cases where things are so warped and so evolutionarily unusual that you have no clear tastes: that clone or this clone, whether you are or are not destroyed in the process, while asleep or not, and so on - all the puzzles we can come up with. For all these cases, you have no clear taste as to which of your 'successors' you care about much and which you don't. In our inner mind's sloppy speak: we don't know "who we'll be". Equally importantly, you may see it one way, and your best friends may see it very differently. And what I'm explaining is that, given the axiom of "you" being you only right here and now, there simply IS no objective truth to be found about who is you later or not, and so there is no objective answer as to whom of those many clones in all the different situations you ought to care how much about: it really does only boil down to how much you care about them. As, on the most fundamental level, "you" are only your mind right now.
And if you find you're still wondering how much to care about which potential clone in which circumstances, it's not the fault of the theory that it doesn't answer that for you. You're asking the outside a question that can only be answered inside you. In the same way, again, I cannot tell you how much you feel (or should feel) for third person X.
I can for sure tell you that from a moral perspective you ought to behaviorally care more, and there I might use a specific rule that attributes to each conscious clone an equal weight or so, and in that domain you could complain if I don't give you a clear answer. But that's exactly not what the discussion here is about.
I can imagine a universe with such rules that teleportation kills a person and a universe in which it doesn't. I'd like to know how does our universe work.
I propose a specific "self" is a specific mind at a given moment. The usual-speak "killing" of X, and the relevant harm associated with it, means preventing X's natural successors, about whom X cares so deeply, from coming into existence. If X cares about his physical-direct-body successors only, then disintegrating and teleporting him means we destroy all he cared for; we prevented all he wanted to happen from happening; we have, so to say, killed him, as we prevented his successors from coming to life. If instead he looked forward to a nice trip to Mars, where he is to be teleported to, there's no reason to think we 'killed' anyone in any meaningful sense, as "he" is a happy space traveller finding 'himself' (well, his successors..) doing just the stuff he anticipated for them to be doing. There's nothing more objective to be said about our universe 'functioning' this way or that. As any self is only ephemeral, and a person is a succession of instantaneous selves linked to one another by memory and forward-looking preferences, it really is those own preferences that matter for the decision, not any outside 'fact' about the universe.
As I write, call it a play on words - a question of naming terms - if you will. But then - and this is just a proposition plus a hypothesis - try to provide a reasonable way to objectively define what one 'ought' to care about in cloning scenarios; and contemplate all sorts of traditionally puzzling thought experiments about neuron replacements and what have you, and you'll inevitably end up hand-waving, stating arbitrary rules that may seem to work (for many, anyhow) in one thought experiment, just to be blatantly broken by the next... Do that enough, get bored, and give up - or, 'realize', eventually, maybe: there is simply not much left of the idea of a unified and continuous, 'objectively' traceable self. There's a mind here and now and, yes of course, it absolutely tends to care about what it deems to be its 'natural' successors in any given scenario. And this care is so strong that it feels as if these successors were one entire, inseparable thing, so it's no surprise we cannot fathom that there are divisions.
A very interesting question to me, coming from the perspective I outline in the post - sorry for the somewhat lengthy answer again:
According to the basic take from the post, we're actually +- in your universe, except that the self is even more ephemeral than you posit. And as I argue, it's relative, i.e. up to you, which future self you end up caring about in any nontrivial experiment.
Trying to re-frame your experiment from that background as best I can, I imagine a person with an inclination to think of 'herself' (in sloppy speak; more precisely: she cares about..) as (i) her now, plus (ii) her natural successors, among which she, however, counts only those that carry the immediate succession of her currently active thoughts before she falls asleep. Maybe some weird genetic or cultural tweak, or a drug in her brain, has made her - or maybe all of us in that universe - like that. So:
Is expecting to die as soon as you sleep a rational belief in such a universe?
I'd not call it a 'belief' but simply a preference, and a basic preference is not rational or irrational. She may simply not care about the future succession of selves coming out at the other end of her sleep, and that 'not caring' is not objectively faulty. It's a matter of taste, of her own preferences. Of course, we may have good reasons to speculate that it's evolutionarily more adaptive to have different preferences - and that's why we do usually have them indeed - but we're wrong to call her misguided; evolution is no authority. From a utilitarian perspective we might even try to tweak her behavior, in order for her to become a convenient caretaker for her natural next-day successors, as from our perspective they're simply ordinary, valuable beings. But it's still not that we'd be more objectively right than her when she says she has no particular attachment to the future beings inhabiting what momentarily is 'her' body.
Yep.
And the crux is: the exceptional one who refuses, saying "this won't be me; I dread the future me* being killed and replaced by that one", is not objectively wrong. It might quickly become highly impractical for 'him'** not to follow the trend, but if his 'self'-empathy is focused only on his own direct physical successors, putting him in the machine is in some sense actually killing him. We kill him, and we create a person that's not him in the relevant sense, as he currently does not accept the successor; if his empathic weight is 100% on his own direct physical successor and not the clone, we roughly 100% kill him in the relevant sense of taking away the one future life he cares about.
*'me being destroyed' here in sloppy speak; it's the successor he considers his natural one that he cares about.
**'him' and his natural successors as he sees it.