Comments
Spurious correlation here, big time, imho.
Give me the natural content of the field and I bet I can easily predict whether it does or doesn't have a replication crisis, without knowing the exact type of students it attracts.
I think it's mostly that the fields where bad science can be sexy and is less trivial/unambiguous to check, or those where you can make up and sell sexy results independently of their grounding, may, for whatever reason, also be the ones that attract the less logical students.
I agree, though, about the mob overwhelming the smart outliers; I just think how much that mob creates a replication crisis depends at least in large part on the intrinsic nature of the field rather than on the exact IQs.
I wouldn't automatically abolish all requirements; maybe I'm just not good enough at searching, but to the degree I'm not an outlier:
- With the internet we have reviews, but they're not always trustworthy, and even when they are, understanding/checking/searching reviews is costly, sometimes very costly.
- There is value in being able to walk up to the next-best random store for a random thing and being served by a person with a minimum standard of education in the trade. Even for rather trivial things.
This seems underappreciated here.
Flower safety isn't a thing. But having the nearest random florist for sure be a serious florist to talk to has real value. So I'm not even sure that for something like flowers I'm entirely against any sort of requirements.
So it seems to me more a question of balance, of what exactly to require in which trade, and that's a tricky one, but in some places I've lived it seems to have been handled mostly more or less okay. Admittedly that's simply from my shallow glance at things.
I've also lived in countries that seem more permissive, requiring less job training, but I clearly prefer the customer experience in those that regulate, despite the higher prices.
Then again, I wouldn't want the untrained/unexamined florist to starve or even simply become impoverished. But at least in some countries, the social safety net mostly prevents that.
Great that you bring up Hoffman; I think he deserves serious pushback.
He proves exactly two things:
- Reality often is indeed not how it seems to us - as shown by how many people take his nonsense at face value. I would normally not use such words, but there are reasons in his case.
- Insofar as he has come to truly believe all he claims (I'm not convinced!), he'd be a perfect example of self-serving beliefs: his overblown claims managed to take over his brain just as it realized he could sell them with total success to the world, despite their absurdity.
Before I explain this harsh judgement, a caveat: I don't mean to defend what we perceive. Let's be open to a world very different from how it seems. Also, maybe Hoffman has many interesting points. But none of this means his claims are not completely overblown - which I'm convinced they are, after having listened to a range of interviews with him and having gone to some lengths reading his original papers.
Here are three simple points I find compelling for putting his stuff into perspective:
- There is a yawning gap between his claims and what he has really 'proven' in his papers. Speech: "We have mathematically proven there's absolutely zero chance blabla". Reality: he used a trivial evolutionary toy model and found that a reduced-form representation of a very specific process may be more economical/probable than a more complex representation of the 'real' process. It nicely underlines that evolution may take shortcuts. Yes, we're crazy about sex instead of about "creating children", or we want to eat sugary stuff as an ancient proxy for an actually healthy diet, which in today's world doesn't function anymore, and many more things where we've not evolved to perceive/account for all the complexity. Problem? This is of course nothing new, and, more importantly, it doesn't prove anything more than that.
- I like the following analogies:
- Room-mapping robots vs. non-mapping robot cleaners (Roomba stuff). A not too far-fetched interpretation of Hoffman would be: an (efficient) vacuum robot cannot map the room; it's always more efficient to simply have reduced-form rules/heuristics for where to move next. Well, it's nice to see how the market has evolved: semi-randomly moving robots made the start, but it turns out that if you want robots to be efficient, you make them actually map the territory, hence today LiDAR/SLAM models increasingly dominate.
- Being exposed to a cat, I realize she seems much more Hoffmanesque than us. When she pees on the ground, or smells another weird thing, she does her 'heap earth/sand over it' leg moves, not realizing there's just parquet, so her move doesn't actually cover the stink. It's a bit funny, then, that with Hoffman the species that has overcome reliance on uncomprehending instincts in so many (not all) domains is the one that ends up claiming it could never be possible to overcome mere reduced-form instincts in any domain whatsoever.
- Trivial disproof by contradiction of Hoffman's claim of having absolutely proven the world could not be the way we think: assume the world WAS just how it looks to us. Imagine there WAS then the billion-year evolutionary process that we THINK has happened. Is there anything in Hoffman's proofs showing that there could then be only 'dumans' - like humans but perceiving in 2D instead of 3D, or in some other wrong way with no resemblance to reality? Nope, absolutely not. His claims just obviously don't hold up.
Broader discussions highlighting, I think, a partly fraudulent aspect of Hoffman: The Case For Reality, or also the Quora question "Is Donald Hoffman's interface theory of perception true?"
In sum: his popularity proves an evolutionary theory of information in which what floats around is not what is shown to be correct, but what is appealing; the distracting voices debunking it are entirely ignored. I imagine him laughing about this fact when thinking about his own success: "After all, my claim seems to not be that wrong, they do not perceive reality, mahahaaa". According to Google there are not merely a million people reading him, but literally millions of webpages featuring him.
Happy to be debunked in my negative reading of him :)
Musings about whether we should have a bit more sympathy for skepticism re price gouging, despite everything. Admittedly with no particular evidence to point to; keen to see whether my basic skepticism can easily be dismissed.
Scott Sumner points out that customers very much prefer ridesharing services that price gouge and have flexible pricing to taxis that have fixed prices, and very much appreciate being able to get a car on demand at all times. He makes the case that liking price gouging and liking the availability of rides during high demand are two sides of the same coin. The problem is (in addition to ‘there are lots of other differences so we have only weak evidence this is the preference’), people reliably treat those two sides very differently, and this is a common pattern – they’ll love the results, but not the method that gets those results, and pointing out the contradiction often won’t help you.
I think as economists we can be too confident about how obvious it'd be that allowing 'price gouging' should be the standard in all domains. Yes, price controls are often hugely problematic. But could full liberty here not also be disastrous for the standard consumer? It depends on a lot of factors; maybe in many domains full liberty works just fine. Maybe not everywhere.
Yes, "Prices aren’t just a transfer between buyer and seller." - but they're also that. And in some areas, it is easily imaginable how an oligopoly or a cartel, or simply dominant local supplier(s) benefit from the possibility to supply at any price without alleviating scarcity - really instead by creating scarcity.
The sort of cynical behavior of Enron comes to mind; can such firms not more easily wreak havoc on markets if they have full freedom to set prices at arbitrary levels? I'd not be surprised if we have to be rather happy about power sellers [in many locations] not being allowed to arbitrarily increase prices (withhold capacity) the way they'd like. Yes, in the long term we could theoretically see entry of new capacity (or storage) into the market if prices were often too high, and that could prevent capacity issues, but the world is too heterogeneous to expect smoothly functioning markets in such a scenario; maybe it's easier to organize backup capacities in different ways. Similarly for gasoline reserves; that's a simple thing to organize. Yes, politicians will make it expensive, inefficient, wrongly sized; but in many locations in the world maybe still better than having no checks and balances at all in the market, just for the hope that the private market might create more reserves.
And, do we really need the toilet paper sellers[1] plausibly stirring up toilet paper supply fears in the slightest crisis of anything, if they know they can full-on exploit the ensuing self-fulfilling prophecy of the toilet-paper-run, while instead everything might have played out nicely in the absence of any scarcity-propaganda?
Or put differently, with a slightly far-fetched but maybe still intuition-pumping example: we hear Putin makes/made so much money from high gas prices that, theoretically, it could be an entire rationale for the war in the first place. Now this will not have been quite the case, but still: we do not know how many individual micro-Putin events - where some exploitative actor would have had an incentive to create havoc in their individual market to benefit from the trouble they stirred up - anti-gouging laws may have prevented. Maybe few, maybe many?
These points make me wonder whether the population is once again not as stupid as we think with their intuitions, and our theory a bit too simple. Yes, we all like the always-available taxis, but I'm not sure it practically works out just as smoothly with all other goods/market structures. But maybe I'm wrong, and in the end it really is so obvious that price controls themselves have worse repercussions anyway.
- ^
Placeholder. May replace with other goods that fit the story.
I actually appreciate the overall take (although I'm not sure how many would not have found most of it simply common sense anyway), but: a bit more caution with the stats would have been great.
- Just-about-significant ≠ 'insignificant and basta'. While you say the paper shows that up to and including BMI 27 there's no 'effect' (and concluding on causality is anyway problematic here, see below), all data provided in the graph you show and in the table of the paper suggest BMI 27 has a significant or nearly significant (at 95%..) association with death even in this study. You may instead want to say the factor is not huge (or small compared to much larger BMI variations), although the all-cause point-estimate mortality factor of roughly 1.06 already at that BMI is arguably not trivial at all: give me something that, as a central albeit imprecise estimate, increases my all-cause mortality by 6%, and I hope you'd accept if I politely refused, explaining that you propose something that seems quite harmful, maybe even in those outcomes where I don't exactly die from it.
- Non-significance ≠ no effect. Even abstracting from the fact that the BMI 27 data is actually significant or just about so: a 'not significant' reduction in deaths over BMI 18-27 in the study wouldn't mean, as you claim, it "will not extend your life". It means the study was too imprecise to be 95% or more sure that there's a relationship. Without a strong prior to the contrary, the point estimate, or even any value up to the upper CI bound, cannot be excluded at all as describing the 'real' relationship (see the small numerical sketch after this list).
- Stats lesson 0: Association ≠ Causality. The paper seems to purposely talk about association, mentioning some major potential issues with interfering unobserved factors already in the abstract, and there are certainly a ton of confounding factors that may well bias the results (it would seem rather unnatural to expect people who work towards a supposedly healthy BMI not to behave differently, on average, in any other health-relevant way than people who work less towards such a BMI).
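To make the 'non-significance ≠ no effect' point concrete, here is a minimal sketch with made-up numbers (not the paper's actual figures): a hazard ratio whose 95% CI just reaches 1.00 is 'not significant', yet the data still put almost all of their weight on a harmful effect.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical numbers for illustration only - not the paper's actual estimates.
hr_point = 1.06                 # point estimate of the hazard ratio (e.g. BMI 27 vs. reference)
ci_low, ci_high = 1.00, 1.12    # a 95% CI that just touches 1.00, i.e. 'not significant'

# Work on the log scale, where the sampling distribution is roughly normal.
log_hr = np.log(hr_point)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # back out the standard error from the CI

# With a flat prior, the share of plausibility the data give to a harmful effect (HR > 1):
p_harmful = 1 - norm.cdf(0, loc=log_hr, scale=se)
print(f"P(HR > 1) ≈ {p_harmful:.2f}")  # ≈ 0.98 - 'not significant' is far from 'no effect'
```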
Agree that cued FNs would often be a useful innovation I've not yet seen. Nevertheless, this statement
So, if you wonder whether you'd care for the content of a note, you have to look at the note, switching to the bottom of the page and breaking your focus. Thus the notion that footnotes are optional is an illusion.
ends with a false conclusion; most footnotes in texts I have read were optional, and I'm convinced I'm happy to indeed not have read most of them. FNs, already as they are, are thus indeed highly "optional" and potentially very helpful - in many, maybe most, cases, for many, maybe most, readers.
That could help explain the wording. Though the way the tax topic is addressed here I have the impression - or maybe hope - the discussion is intended to be more practical in the end.
A detail: I find the "much harder" in the following unnecessarily strong, or maybe also simply the 'moral claim' yes/no too binary (all emphases added):
If the rich generally do not have a moral claim to their riches, then the only justification needed to redistribute is a good affirmative reason to do so: perhaps that the total welfare of society would improve [..]
If one believes that they generally do have moral claim, then redistributive taxation becomes much harder to justify: we need to argue either that there is a sufficiently strong affirmative reason to redistribute that what amounts to theft is nevertheless acceptable, or that taxation is not in fact theft under certain circumstances.
What we want to call 'harder' or 'much harder' is of course a matter of taste, but to the degree that it reads like meaning 'it becomes (very) hard', I'd say instead:
It appears to be rather intuitive to agree to some degree of redistributive taxation even if one assumed the rich had mostly worked hard for their wealth and therefore supposedly had some 'moral claim' to it.
For example, looking at classical public finance 101, I see textbooks & teachers (some definitely not so much on the 'left') readily explaining to their students (definitely not systematically full utilitarians) why concave utility means we'd want to tax the rich, without even hinting at the rich not 'deserving' their incomes, and the overwhelming majority of students rather intuitively agreeing with the mechanism, as it seems to me from observation.
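For concreteness, a minimal sketch of that textbook mechanism (my own illustrative numbers, standard public-finance-101 logic rather than anything from the post): with concave utility, a dollar is worth more to the poor than to the rich, so some redistribution raises total utility even if everyone fully 'deserves' their income.

```python
import math

# Log utility as a simple concave example; amounts are hypothetical.
u = math.log
rich, poor, transfer = 200_000, 20_000, 1_000

before = u(rich) + u(poor)
after = u(rich - transfer) + u(poor + transfer)

print(after > before)    # True: total utility rises
print(after - before)    # ≈ +0.044: the poor's gain outweighs the rich person's loss
```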
The core claim in my post is that the 'instantaneous' mind (with its preferences etc., see post) is - if we look closely and don't forget to keep a healthy dose of skepticism about our intuitions about our own mind/self - sufficient to make sense of what we actually observe. And given this instantaneous mind with its memories and preferences is stuff we can most directly observe without much surprise in it, I struggle to find any competing theories as simple or 'simpler' and therefore more compelling (Occam's razor), as I meant to explain in the post.
As I make very clear in the post, nothing in this suggests other theories are impossible. For everything there can of course be (infinitely) many alternative theories available to explain it. I maintain the one I propose has a particular virtue of simplicity.
Regarding computationalism: I'm not sure whether you meant a very specific 'flavor' of computationalism in your comment; but for sure I did not mean to exclude computationalist explanations in general; in fact I've defended some strong computationalist position in the past and see what I propose here to be readily applicable to it.
I'm sorry, but I find you're nitpicking on words out of context rather than engaging with what I mean. Maybe my English is imperfect, but I think it's not that unreadable:
A)
The word "just" in the sense used here is always a danger sign. "X is just Y" means "X is Y and is not a certain other thing Z", but without stating the Z.
... 'just' might sometimes be used in such an abbreviated way, but here, the second part of my very sentence itself readily says what I mean by the 'just' (see "w/o meaning you're ...").
B)
You quoting me: "It is equally all too natural for me to still keep my specific (and excessive) focus & care on the well-being of my 'natural' successors, i.e. on what we traditionally call"
You: Too natural? Excessive focus and care? What we traditionally call? This all sounds to me like you are trying not to know something.
Recall, as I wrote in my comment, I try to support "why care [under my stated views], even about 'my' own future". Let me rephrase the sentence you quote in a paragraph that avoids the three elements you criticize; I hope the meaning becomes clear then:
Evolution has ingrained into my mind a very strong preference to care for the next-period inhabitant(s) X of my body. This deeply ingrained preference to preserve the well-being of X tends to override everything else. So, however much my reflections suggest to me that X is not as unquestionably related to me as I instinctively would have thought before closer examination, I will not be able to give up my commonly observed preference for doing (mostly) the best for X, in situations where there is no cloning or anything of the like going on.
(You can safely ignore "(and excessive)". With it, I just meant to casually mention that we also tend to be too egoistic; our strong specific focus on (and care for) our own body's future is not good for the world overall. But this is quite a separate thing.)
Thanks! In particular also for your more-kind-than-warranted hint at your original w/o accusing me of theft!! Especially as I now realize (or maybe realize again) that your sleep-clone-swap example, which indeed I love as a perfectly concise illustration, had also come along with at least an "I guess"-caveated "it is subjective", i.e. something that in some sense already includes a core part of the conclusion/claim here.
I should have also picked up your 'stream-of-consciousness continuity' vs. 'substrate/matter continuity' terminology. Finally, the Ship of Theseus question, with "Temporal parts" vs. "Continued identity", would thus also be good links, although I guess I'd spontaneously be inclined to dismiss part of the discussion of these as questions 'merely of definition' - right up until we get to the question of the mind/consciousness, where it seems to me it indeed becomes more fundamentally relevant (although, maybe, and ironically, after relegating the idea of a magical persistent self, one could say that here too it ends up slightly closer to that 'merely a question of definition/preference' domain).
Btw, I'll now take your own link-to-comment and add it to my post - thanks if you can let me know where I can create such links; I remember looking and not finding it anywhere, even on your own LW profile page.
Btw, regarding:
it would not seem to have made any difference and was just a philosophical recreation
Mind, in this discussion about cloning thought experiments I'd find it natural that there are not many currently tangible consequences, even if we did find a satisfying answer to some of the puzzling questions around that topic.
That said, I guess I'm not the only one here with a keen intrinsic interest in understanding the nature of self even absent tangible & direct implications, or even if these implications remain rather subtle at this very moment.
I obviously still care for tomorrow, as is perfectly in line with the theory.
I take you to imply that, under the hypothesis emphasized here - that the self is not a unified long-term self the way we tend to imagine - one would have to logically conclude something like: "why care, then, even about 'my' own future?!". This is absolutely not implied:
The questions around which we can get "resolving peace" (see context above!) refer to things like: if someone came along proposing to clone/transmit/... you, what to do? We may of course find peace about that question (which I'd say I have for now) without giving up caring about our 'natural' successors in standard life.
Note how you can still have particular care for your close kin or so after realizing your preferential care about these is just your personal (or our general cultural) preference w/o meaning you're "unified" with your close kin in any magical way. It is equally all too natural for me to still keep my specific (and excessive) focus & care on the well-being of my 'natural' successors, i.e. on what we traditionally call "my tomorrow's self", even if I realize that we have no hint at anything magical (no persistent super-natural self) linking me to it; it's just my ingrained preference.
The original mistake is that feeling of a "carrier for identity across time" - for which upon closer inspection we find no evidence, and which we thus have to let go of. Once you realize that you can explain all we observe and all you feel with merely, at any given time, your current mind, including its memories, and aspirations for the future, but without any further "carrier for identity", i.e. without any super-material valuable extra soul, there is resolving peace about this question.
The upload +- by definition inherits your secret plan and will thus do your jumps.
Good decisions need to be based on correct beliefs as well as values.
Yes, but here the right belief is the realization that what connects you to what we traditionally called your future "self" is nothing supernatural, i.e. no super-material unified continuous self of extra value: we don't have any hint of such stuff; too well can we explain your feeling about such things as fancy brain instincts akin to seeing the objects in a 24FPS movie as 'moving' (not to say 'alive'); and too well do we know we could theoretically make you feel you've experienced your past as a continuous self while you were just nano-assembled a micro-second ago with exactly the right memory inducing this belief/'feeling'. So, due to the absence of this extra "self": "you" are simply this instant's mind we currently observe in you. Now, crucially, this mind has, obviously, a certain regard, hopes, plans for, in essence, what happens with your natural successor. In the natural world, it turns out to be perfectly predictable from the outside who this natural successor is: your own body.
In situations like those imagined in cloning thought experiments, instead, it suddenly is less obvious from the outside whom you'll consider your most dearly cared-for 'natural' (or now less obviously 'natural') successor. But as the only thing that in reality connects you with what we traditionally would have called "your future self" is your own particular preferences/hopes/cares towards that elected future mind, there is no objective rule to tell you from outside which one you have to consider the relevant future mind. The relevant one is the one you find relevant. This is very analogous to, say, being in love: the one 'relevant' person in a room for you to save first in a fire (if you're egoistic about you and your loved one) is the one you (your brain instinct, your hormones, or whatever) picked; you don't have to ask anyone outside about whom that should be.
so if there is some fact of the matter that you don't survive destructive teleportation, you shouldn't go for it, irrespective of your values
The traditional notion of "survival", insofar as it invokes a continuous integrated "self" over and above the succession of individual ephemeral minds with forward-looking preferences, must indeed be put into perspective just as much as that long-term "self" itself.
There's a theory that personal identity is only ever instantaneous...an "observer-moment"... such that as an objective fact, you have no successors. I don't know whether you believe it. If it's true, you epistemically-should believe it, but you don't seem to believe in epistemic norms.
There's another, locally popular, theory that the continuity of personal identity is only about what you care about. (It either just is that, or it needs to be simplified to that...it's not clear which). But it's still irrational to care about things that aren't real...you shouldn't care about collecting unicorns...so if there is some fact of the matter that you don't survive destructive teleportation, you shouldn't go for it, irrespective of your values.
Thanks. I'd be keen to read more on this if you have links. I've wondered to what degree the relegation of the "self" I'm proposing (or that may have been proposed in a similar way in Rob Bensinger's post and maybe before) is related to what we always hear about 'no self' from the more meditative crowd, though I'm not sure there's a link there at all. But I'd be keen to read of people who have proposed theoretical things in a similar direction.
There's a theory that personal identity is only ever instantaneous...an "observer-moment"... such that as an objective fact, you have no successors. I don't know whether you believe it.
On the one hand, 'no [third-party] objective successor' makes sense. On the other hand: I'm still so strongly programmed to absolutely want to preserve my 'natural' [unobjective but ingrained in my brain..] successors that the lack of an 'outside-objective' successor doesn't impact me much.[1]
- ^
I think a simple analogy here, for which we can remain with the traditional view of self, is: objectively, there's no reason I should care about myself so much, or about my close ones; my basic moral theory would ask me to be a bit less kind to myself and kinder to others, but given my wiring I just don't manage to behave so perfectly.
Oh, it's much worse. It is epistemic relativism. You are saying that there is no one true answer to the question and we are free to trust whatever intuitions we have. And you do not provide any particular reason for this state of affairs.
Nice challenge! There's no "epistemic relativism" here, even if I see where you're coming from.
First recall the broader altruism analogy: would you say it's epistemic relativism if I tell you you can simply look inside yourself and freely see how much you care about, and how closely connected you feel to, people in a faraway country? You surely wouldn't reproach me for that; you surely agree it's your own 'decision' (or intrinsic inclination or so) that decides how much weight or care you personally put on these persons.
Now, remember the core elements I posit. "You" are (i) your mind of right here and now, including (ii) its tendency for deeply felt care & connection to your 'natural' successors, and that's about all there is to be said about you (+ there's memory). From this everything follows. It is evolution that has shaped us to shortcut the standard physical 'continuation' of you in coming periods as a 'unique entity' in our mind, and has made you typically care sort of '100%' about your first few seconds' worth of forthcoming successors [in analogy: just as nature has shaped you to (usually) care tremendously also for your direct children or siblings]. Now there are (hypothetically) cases where things are so warped and so evolutionarily unusual that you have no clear tastes: that clone or this clone, whether you are or are not destroyed in the process/while asleep or not/blabla - all the puzzles we can come up with. For all these cases, you have no clear taste as to which of your 'successors' you care much about and which you don't. In our inner mind's sloppy speak: we don't know "who we'll be". Equally importantly, you may see it one way, and your best friends may see it very differently. And what I'm explaining is that, given the axiom of "you" being you only right here and now, there simply IS no objective truth to be found about who is you later or not, and so there is no objective answer as to whom of those many clones in all different situations you ought to care how much about: it really does only boil down to how much you care about them. As, on the most fundamental level, "you" are only your mind right now.
And if you find you're still wondering about how much to care about which potential clone in which circumstances, it's not the fault of the theory that it does not answer that for you. You're asking the outside a question that can only be answered inside you. The same way that, again, I cannot tell you how much you feel (or should feel) for third person x.
I for sure can tell you you ought to behaviorally care more from a moral perspective, and there I might use a specific rule that attributes each conscious clone an equal weight or so, and in that domain you could complain if I don't give you a clear answer. But that's exactly not what the discussion here is about.
I can imagine a universe with such rules that teleportation kills a person and a universe in which it doesn't. I'd like to know how does our universe work.
I propose a specific "self" is a specific mind at a given moment. The usual-speak "killing" of X, and the relevant harm associated with it, means preventing X's natural successors, about whom X cares so deeply, from coming into existence. If X cares about his direct physical-body successors only, disintegrating and teleporting him means we destroy all he cared for, we prevent all he wanted to happen from happening, we have so to say killed him, as we prevented his successors from coming to life. If he looked forward to a nice trip to Mars, where he is to be teleported to, there's no reason to think we 'killed' anyone in any meaningful sense, as "he" is a happy space traveller finding 'himself' (well, his successors..) doing just the stuff he anticipated for them to be doing. There's nothing more objective to be said about our universe 'functioning' this way or that. As any self is only ephemeral, and a person is a succession of instantaneous selves linked to one another by memory and forward-looking preferences, it really is these own preferences that matter for the decision, not an outside 'fact' about the universe.
As I write, call it a play on words, a question of naming terms - if you will. But then - and this is just a proposition plus a hypothesis - try to provide a reasonable way to objectively define what one 'ought' to care about in cloning scenarios, and contemplate all sorts of traditionally puzzling thought experiments about neuron replacements and what have you, and you'll inevitably end up hand-waving, stating arbitrary rules that may seem to work (for many, anyhow) in one thought experiment, just to be blatantly broken by the next experiment... Do that enough and get bored and give up - or, 'realize', eventually, maybe: there is simply not much left of the idea of a unified and continuous, 'objectively' traceable self. There's a mind here and now and, yes of course, it absolutely tends to care about what it deems to be its 'natural' successors in any given scenario. And this care is so strong that it feels as if these successors were one entire, inseparable thing, and so it's no surprise we cannot fathom there being divisions.
Very interesting question to me coming from the perspective I outline in the post - sorry for a somewhat lengthy answer again:
According to the basic take from the post, we're actually +- in your universe, except that the self is even more ephemeral than you posit. And as I argue, it's relative, i.e. up to you, which future self you end up caring about in any nontrivial experiment.
Trying to re-frame your experiment from that background as best as I can, I imagine a person inclined to think of 'herself' (in sloppy speak; more precisely: she cares about..) as (i) herself now, plus (ii) her natural successors, as which she, however, counts only those that carry the immediate succession of her currently active thoughts before she falls asleep. Maybe some weird genetic or cultural tweak or a drug in her brain has made her - or maybe all of us in that universe - like that. So:
Is expecting to die as soon as you sleep a rational belief in such a universe?
I'd not call it 'belief' but simply a preference, and a basic preference is not rational or irrational. She may simply not care about the future succession of selves coming out at the other end of her sleep, and that 'not caring' is not objectively faulty. It's a matter of taste, of her own preferences. Of course, we may have good reasons to speculate that it's evolutionarily more adaptive to have different preferences - and that's why we do usually have them indeed - but we're wrong to call her misguided; evolution is no authority. From a utilitarian perspective we might even try to tweak her behavior, in order for her to become a convenient caretaker for her natural next-day successors, as from our perspective they're simply usual, valuable beings. But it's still not that we'd be more objectively right than her when she says she has no particular attachment for the future beings inhabiting what momentarily is 'her' body.
Yep.
And the crux is, the exceptional one refusing, saying "this won't be me, I dread the future me* being killed and replaced by that one", is not objectively wrong. It might quickly become highly impractical for 'him'** not to follow the trend, but if his 'self'-empathy is focused only on his own direct physical successors, it is in some sense actually killing him if we put him in the machine. We kill him, and we create a person that's not him in the relevant sense, as he currently does not accept the successor; if his empathic weight is 100% on his own direct physical successor and not the clone, we roughly 100% kill him in the relevant sense of taking away the one future life he cares about.
*'me being destroyed' here in sloppy speak; it's the successor he considers his natural successor which he cares about.
**'him' and his natural successors as he sees it.
All agreeable. Note, this is perfectly compatible with the relativity theory I propose, i.e. with the 'should' being entirely up to your intuition only. And, actually, the relativity theory, I'd argue, is the only way to settle debates you invoke, or, say, to give you peace of mind when facing these risky uploading situations.
Say you can overnight destructively upload, with 100% reliability your digital clone will be in a nicely replicated digital world for 80 years (let's for simplicity assume for now the uploadee can be expected to be a consciousness comparable to us at all), while 'you' might otherwise overnight be killed with x% probability. Think of x as a concrete number. Say we have a 50% chance. For that x, would you want to upload?
I'm pretty certain (i) you have no clear-cut answer as to the threshold x% from which on you'd prefer the upload (although some might have a value of roughly 100%), and, clearly, (ii) that threshold x% would vary a lot across persons.
Who can say? Only my relativity theory: There is no objective answer, from your self-regarding perspective.
Just like it's your intrinsic taste that determines how much or whether at all you care about the faraway poor, or the not so faraway not so poor, or anyone really: it's a matter of your taste and nothing else. If you're right now imagining going from you to inside the machine, and it feels like that's simply you being you there, w/o much dread and no worries, looking forward to that being - or sequence of beings - 'living' with near certainty another 80 years, then yes, you're right, go for it if the physical killing probability x% is more than a few %. After all, there will be that being in the machine, and for all intents and purposes you might call it 'you' in sloppy speak. If instead you dread the future sequence of your physical you being destroyed and 'only' being replaced by what feels like 'obviously a non-equivalent future entity that merely has copied traits, even if it behaves just as if it were the future you', then you're right to refuse the upload for any x% not close enough to 0%. It really is relative. You only are the current you, including weights-of-care for different potential future successors, on which there's no outside authority to tell you which ones are right or wrong.
The point is, "you" are exactly the following and nothing else: You're (i) your mind right now, (ii) including its memory, and (iii) its forward-looking care, hopes, dreams for, in particular, its 'natural' successor. Now, in usual situations, the 'natural successor' is obvious, and you cannot even think of anything else: it's the future minds that inhabit your body, your brain, that's why you tend to call the whole series a unified 'you' in common speak.
Now, with cloning, if you absolutely care for a particular clone, then, for every purpose, you can extend that common speak to the cloning situation, if you want, and say my 'anticipation will be borne out'/'I'll experience...'. But, crucially, note, that you do NOT NEED to; in fact, it's sloppy speak. As in fact, these are separate future units, just tied in various (more or less 'natural') ways to you, which offers vagueness, and choice. Evolution leaves you dumbfounded about it, as there is no strictly speaking 'natural' successor anymore. Natural has become vague. It'll depend on how you see it.
Crucially, there will be two persons, say, after a 'usual' cloning, and you may 'see yourself' - in sloppy speak - in either of these two. But it's just a matter of perspective. Strictly speaking, again, you're you right now, plus your anticipation of one or the other future person.
It's a bit like evolution makes you confused about how much you care for strangers. Do you go to a philosopher to ask how much you want to give to the faraway poor? No! You have your inner degree of compassion for them, it may change at any time, and it's not wrong or right.*[1]
- ^
Of course, on another level, from a utilitarian perspective, I'd love you to love these faraway beings more, and it's not okay that we screw up the world because of not caring, but that's a separate point.
I wonder whether, if sheer land mass really were the single dominant bottleneck for whatever your aims, you could potentially find a particular gov't or population from whom you'd buy the km2 you desire - say, for a few $bn - as new sovereign land for you, as a source of potentially (i) even cheaper and (ii) more robust land to reign over?
Difficult to overstate the role of signaling as a force in human thinking, indeed; a few random examples:
- Expensive clothes, rings, cars, houses: Signalling 'I've got a lot of spare resources, it's great to know me/don't mess with me/I won't rob you/I'm interesting/...'
- Clothes of a particular type -> signal your political/religious/... views/lifestyle
- Talking about interesting news/persons -> signals you can be a valid connection to have as you have links
- In basic material economics/markets: All sorts of ways to signal your product is good (often economists refer to e.g.: insurance, public reviewing mechanism, publicity)
- LW-er liking to get lots of upvotes to signal his intellect or simply for his post to be a priori not entirely unfounded
- Us dumbly washing or ironing clothes or buying new clothes while stained-but-non-smelly or unironed or worn clothes would be functionally just as valuable - well, if a major functionality weren't exactly to signal wealth, care, status..
- Me teaching & consulting in a suit because the university uses an age old signalling tool to show: we care about our clients
- A doc having his white coat to spread an air of professional doctorhood to the patient he tricks into not questioning his suggestions and actions
- Genetically: many sexually attractive traits have some origin in signaling good quality genes: directly functional body (say strong muscles) and/or 'proving spare resources to waste on useless signals' such as most egregiously for the Peacocks/Birds of paradise <- I think humans have the latter capacity too, though I might be wrong/no version comes to mind right now
- Intellect etc.! There's lots of theory that much of our deeper thinking abilities were much less required for basic material survival (hunting etc.), than for social purposes: impress with our stories etc.; signal that what we want is good and not only self-serving. (ok, the latter maybe that is partly not pure 'signaling' but seems at least related).
- Putting up solar panels, driving a Tesla, going vegetarian ... -> we're clean and modern and care about the climate
- I see this sort of signaling more and more by individuals and commercial entities, esp. in places where there is low-cost for it. The café that sells "Organic coffee" uses a few cents to buy organic coffee powder to pseudo-signal care and sustainability while it sells you the dirtiest produced chicken sandwich, saving many dollars compared to organic/honestly produced produce.
- Of course, shops do this sort of stuff commercially all the time -> all sorts of PR is signaling
- Companies also do all sorts of psychological tricks to signal they're this or that in order to motivate their employees
- Politics. For a stylized example consider: Trump or so with his wall promises, signalling to those receptive to it that he'd care about reducing illegal immigration (while knowing he won't/cannot change the situation that much that easily)
- Biden or so with his stopping-the-wall promises, signalling a more lenient treatment of illegal immigrants (while equally knowing he won't/cannot change the situation that much that easily)
- ... list doesn't stop... but I guess I better stop here :)
I guess the vastness of signaling importantly depends on how narrowly or broadly we define it, in terms of whether we consciously have in mind to signal something vs. whether we instinctively do/like things that serve to signal our quality/importance... But both signalling domains seem absolutely vast - sometimes with actual value for society, but often with zero-sum effects, i.e. a waste of resources.
I read this as saying we’re somehow not ‘true’ to ourselves as we’re doing stuff nature didn’t mean us to do when it originally implanted our emotions.
Indeed, we might look ridiculous from the outside, but who’s there to judge - imho, nature is no authority.
- Increasing the odometer may be wrong from the owner’s perspective – but why should the car care about the owner? Assume the car, or the odometer itself, really desires to show a high mile count, just for the sake of it. Isn’t the car making progress if it could magically put itself on a block?
- In the human case: Ought we to respect any ‘owner’ of us? A God? Nature who built us? Maybe not! Whatever creates happiness – I reckon it’s one of the emotions you mean – is good, however ridiculous the means to get to that. Of course, better if it creates widespread & long-term happiness, but that’s another question.
- Not gaming nature’s system – what would that mean? Could it be to try to have as many children as possible, or something like that? After all, this is what nature wanted to ensure when it endowed us with our proxy emotions. I’m not sure that's better.
- Think exactly of the last point. Imagine we were NOT gaming nature’s original intents. Just as much as we desire sex, we’d desire to have actually the maximal number of children instead! The world would probably be much more nightmarish than it is!
Now, if you’re saying, we’re in a stupid treadmill, trying to increase our emotion of (long-term) happiness by following the most ridiculous proxy (short-term) emotions for that, and creating a lot of externalized suffering at the same time, and that we forget that besides our individual shallow ‘happiness’ there are some deeper emotional aims, like general human progress etc., I couldn’t agree more!
Or if you're saying the evolutionary market system creates many temptations that exploit small imperfections in our emotional setup to trick us into behaving ridiculously and strongly against our long-term emotional success, again, I'm all with you, and we ought to rein in markets more to limit such things.
One consequence that seems to flow from this, and which I personally find morally counter-intuitive, and don't actually believe, but cannot logically dismiss, is that if you're going to lie you have a moral obligation to not get found out. This way, the damage of your lie is at least limited to its direct effects.
With widespread information sharing, the 'can't fool all the people all the time' logic extends to this attempt to lie without consequences: we'll learn that people 'hide well but still lie so much', so we'll be even more suspicious in any situation, undoing the alleged externality-reducing effect of the 'not get found out' idea (in any realistic world with imperfect hiding, anyway).
Thanks for the useful overview! Tiny point:
It is also true that Israel has often been more aggressive and warmongering than it needs to be, but alas the same could be said for most countries. Let’s take Israel’s most pointless and least justified war, the Lebanon war. Has the USA ever invaded a foreign country because it provided a safe haven for terrorist attacks against them? [...] Yes - Afghanistan. Has it ever invaded a country for what turns out to be spurious reasons while lying to its populace about the necessity? Yes [... and so on]
Comparing Israel to the US might not be effective since critics often already view the US (or its foreign policy) just as negatively as Israel anyway (or even view the US as the evil driver behind Israel!). Perhaps different examples could strengthen the argument.
Might be worth adding your blog post's subtitle or so, to hint at what Georgism is about (assuming I'm not an exception in not having known "Georgism" is the name for the idea of shifting taxation from labor etc. to natural resources).
Worth adding imho: Feels like a most natural way to do taxation in a world with jobs automated away.
Three related effects/terms:
1. Malthusian Trap as the maybe most famous example.
2. In energy/environment we tend to refer to such effects as
- "rebound" when behavioral adjustment compensates part of the originally enable saving (energy consumption doesn't go down so much as better window insulation means people afford to keep the house warmer) and
- "backfiring" when behavioral adjustment means we overcompensate (let's assume flights become very efficient, and everyone who today wouldn't have been flying because of cost or environmental conscience, starts to fly all the time, so even more energy is consumed in the end)
3. In economics (though more generally than only the compensation effects you mention): "equilibrium" effects; indeed famously often offsetting effects in the place where the original perturbation occurred, although, as mentioned by Gunnar_Zarncke, maybe overall there is often simply a diffusion of the benefits to society as a whole. Say, with competitive markets in labor & goods, making one product becomes more efficient: yes, you as a worker in that sector won't benefit specifically from the improvement in the long run, but as society overall we slightly expand our Pareto frontier of how much stuff we like we can produce.
No reason to believe safety benefits are typically offset 1:1. Standard preference structures would suggest the original effect may often be only partly offset, or in other cases even backfire by being more than offset (toy numbers below). And net utility for the users of a safety-improved tool might increase in the end in either case.
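A minimal sketch of the rebound arithmetic, with made-up numbers (mine, not from the comment above): an efficiency gain, a behavioral response, and the share of the 'engineering' saving that gets taken back.

```python
# Hypothetical: insulation cuts the energy needed per unit of indoor warmth by 30%,
# but cheaper warmth makes people consume 15% more warmth than before.
baseline_energy = 100.0
efficiency_gain = 0.30
behavioral_increase = 0.15

engineering_saving = baseline_energy * efficiency_gain                                # 30.0
actual_energy = baseline_energy * (1 - efficiency_gain) * (1 + behavioral_increase)   # 80.5
actual_saving = baseline_energy - actual_energy                                       # 19.5

rebound = 1 - actual_saving / engineering_saving
print(f"Rebound: {rebound:.0%}")  # 35% of the engineering saving is 'taken back'

# 'Backfiring' would require the behavioral increase to exceed
# efficiency_gain / (1 - efficiency_gain) ≈ 43% in this toy setup.
```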
Started trying it now; seems great so far. Update after 3 days: Super fast & easy. Recommend!
Dear Yann LeCun, dear all,
Time to reveal myself: I'm actually just a machine designed to minimize cost. It's a sort of weighted cost of deviation from a few competing aims I harbor.
And, dear Yann LeCun, while I wish it were true, it's absolutely laughable to claim I'd be unable to implement things none of you like, if you gave me enough power (i.e. intelligence).
∎.
I mean to propose this as a trivial proof by contradiction against his proposition. Or am I overlooking sth?? I guess 1. I can definitely be implemented by what we might call cost minimization[1], and, sadly, however benign my today's aims in theory, 2. I really don't think anyone can fully trust me or the average human if any of us got infinitely powerful.[2] So, it suffices to think about us humans to see the supposed "Engineers"' (euhh) logic falter, no?
- ^
Whether with or without a strange loop making me (or if you want making it appear to myself that I would be) sentient doesn't even matter for the question.
- ^
Say, I'd hope I'd do great stuff, be a huge savior, but who really knows, and, either way, still rather plausible that I'd do things a large share of people might find rather dystopian.
Neither entirely convinced nor entirely against the idea of defining 'root cause' essentially with respect to 'where is intervention plausible'. Either way, to me that way of defining it would not have to exclude "altruism" as a candidate: (i) there could be scope to re-engineer ourselves to become more altruistic, and (ii) without doing that, gosh, how infinitely difficult it feels to improve the world truly systematically (as you rightly point out).
That is strongly related to Unfit for the Future - The Need for Moral Enhancement (whose core story is spot on imho, even though I find quite a few of the details in the book substandard).
Interesting read, though I find it not easy to see exactly what your main message is. Two points strike me as potentially relevant regarding
what do I think the real root of all evil is? As you might have guessed from the above, I believe it’s our inability to understand and cooperate with each other at scale. There are different words for the thing we need more of: trust. social fabric. asabiyyah. attunement. love.
The first is the more relevant; the second is a term I'd simply consider naturally core to a discussion of the topic:
- Even increased "trust, social fabric" is not so clearly bringing us forward. Let's assume people remain similarly self-interested, similarly egoistic, but they are able to cooperate better in limited groups: it's easy to imagine circumstances in which dominant effects could include (i) it becoming easier for hierarchies in tyrannical dictatorships to cooperate to oppress their population and/or (ii) it becoming easier for firms to cooperate to create & exploit market power, replacing some reasonably well-working markets with, say, crazily exploitative oligopolies and oligarchies.
- Altruism: the sheer limitation of our degree of altruism*[1] towards the wider population - might one not call that out as the single most dominant root of the tree of evil? Or, say, lack of altruism, combined with the necessary imperfection in self-interested positive collaboration given that our world features (i) our limited rationality and (ii) a hugely complex natural and economic environment? Increase our altruism, and most of today's billions of bad incentives we're exposed to become a bit less disastrous...
- ^
Along with self-serving bias, i.e. our brain's sneaky way to reduce our actual behavioral/exhibited altruism to levels even below our (already limited) 'heartfelt' degree of altruistic interest, where we often think we try to act in other people's interests while in reality pursuing our own interests.
Don't fully disagree, but still inclined to not view non-upvoted-but-neither-downvoted things too harshly:
If I'm no exception, not upvoting may often mean: 'Still enjoyed the quick thought-stimulation even if it seems ultimately not a particularly pertinent message'. One can always downvote if one really feels it's warranted.
Also: If one erratically reads LW, and hence comments on old posts: recipe for fewer upvotes afaik. So one'd have to adjust for this quite strongly.
Fair! Yes. I guess I mainly have issues with the tone of the article, which in turn makes me fear there's little empathy the other way round: i.e. it goes too strongly in the direction of dismissing all superficial care as greedy self-serving display or something, while I find the underlying motivation - however imperfect - is often kind of a nice trait, coming out of genuine care, and it's mainly a lack of understanding (and yes, admittedly some superficiality) about the situation that creates the issue.
It seems to me that many even not so close acquaintances may - simply out of genuine concern for a fellow human being that (in their conviction*) seems to be suffering - want to offer support, even if they may be clumsy in it as they're not used to the situation. I find that rather adorable; for once the humans show a bit of humaneness, even if I'd not be surprised if you're right that often it does not bring much (and even if I'd grant that they might do it mostly as long as it doesn't cost them much).
*I guess I'm not in a minority in not having known how extremely curable balls cancer apparently is.
I think the post nicely points out how some stoicism can be a sort of superpower in exactly such situations, but I think we should appreciate how the situation looks from the outside for normal humans who don't expect the victim to be as stoically protected as you were.
From what you write, Acemoglu's suggestions seem unlikely to be very successful in particular given international competition. I paint a bit b/w, but I think the following logic remains salient also in the messy real world:
- If your country unilaterally tries to halt development of the infinitely lucrative AI inventions that could automate jobs, other regions will be more than eager to accommodate the inventing companies. So, from the country's egoistic perspective, might as well develop the inventions domestically and at least benefit from being the inventor rather than the adopter
- If your country unilaterally tries to halt adoption of the technology, there are numerous capable countries keen to adopt and to swamp you with their sales
- If you'd really be able to coordinate globally to enable 1. or 2. globally - extremely unlikely in the current environment and given the huge incentives for individual countries to remain weak in enforcement - then it seems you might as well try to impose directly the economic first best solution w.r.t. robots vs. labor: high global tax rates and redistribution.
Separately, I at least spontaneously wonder: how would one even go about differentiating the 'bad automation' to be discouraged from the legit automation without which no modern economy could competitively run anyway? For a random example, say Excel didn't yet exist (or, for its next update..): we'd have to say, sorry, cannot do such software, as any given spreadsheet risks removing thousands of hours of work...?! Or at least: please, Excel, ask the human to manually confirm each cell's calculation...?? So I don't know how we'd enforce non-automation in practice. Just 'it uses a large LLM' feels like a weirdly arbitrary condition - though, ok, I could see how, for lack of alternatives, one might use something like that as an ad-hoc criterion, with all the problems it brings. But again, I think points 1. & 2. mean this is unrealistic or unsuccessful anyway.
Commenting on the basis of lessons from some experience doing UBI analysis for Switzerland/Europe:
The current systems has various costs (time and money, but maybe more importantly, opportunities wasted by perverse incentives) associated with proving that you are eligible for some benefit.
On the one hand, yes, and it's a key reason why NIT/UBI systems are often popular on the right; even Milton Friedman already advocated for an NIT. That said, there are also discussions suggesting the poverty trap - i.e. overwhelmingly strong labor disincentives for the poor, from outrageously high effective marginal tax rates due to benefits fading out while taxes kick in - may be partly overrated, so smoothing the earned-to-net income function may not help as much as some may hope. And, what tends to be forgotten, is that people with special needs may not be able to live purely from a UBI, so not all current social security benefit mechanisms can usually be replaced by a standard UBI.
On the other hand, once you have a conditional welfare system that does not have crazily strong/large poverty traps, labor incentives might overall still be mostly stronger than under a UBI (assumed sufficiently generous to allow a reasonable life from it), once you also take into account the high marginal tax rates required to finance that UBI (a stylized comparison below). This seems to hold even in relatively rich countries (we used to calculate it for Switzerland).
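A stylized sketch of that effective-marginal-tax-rate (EMTR) comparison, with purely hypothetical parameters of my own (not the Swiss numbers referred to above):

```python
# EMTR = share of one extra earned dollar lost to benefit withdrawal plus taxes.

def emtr_means_tested(benefit_withdrawal_rate: float, income_tax_rate: float) -> float:
    # Traditional system: benefits fade out as earnings rise, on top of income tax.
    return benefit_withdrawal_rate + income_tax_rate

def emtr_ubi(ubi_financing_tax_rate: float) -> float:
    # UBI: no withdrawal (the transfer is unconditional), but a high flat tax to finance it.
    return ubi_financing_tax_rate

print(emtr_means_tested(0.60, 0.20))  # 0.8 -> classic 'poverty trap' territory for low earners
print(emtr_ubi(0.45))                 # 0.45 for everyone, low and high earners alike
# The trap is smoothed for the poor, but most other earners now face a higher EMTR,
# which is why overall labor incentives can end up weaker under a generous UBI.
```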
Of course, with AI Joblessness all this might change anyway, in line with the underlying topic of the post here.
Plus you need to pay the people who verify all this evidence.
This tends to be overrated; when you look at the stats, this staff cost is really small compared to total traditional social security or UBI costs (we looked at the numbers in Switzerland, but I can only imagine it's a similar order of magnitude in other developed countries).
I see there might be limits to what is possible.
On the other hand, I have the impression the limits to what students can learn (in economics) often come more from us teaching absurdly simplified cases, too remote from reality and from what's plausible, so that the entire thing we teach remains a purely abstract, empty analytical beast. I can only guess, but even young students seem capable of understanding the more subtle mechanisms - each individual step often isn't really complicated! - if only we taught them with enough empathy for the students and for the reality we're trying to model.
As you write, with as little math as absolutely necessary.
Would really, really love to replace curricula with what you describe - kudos for proposing a reasonably simple yet consistent high-level plan that, at least to my mostly uneducated eyes, seems rather ideal!
Maybe an unnecessary detail here, but fwiw, on the economics part of the Core Civilizational Requirements,
an understanding of supply and demand, specialization and trade, and how capitalism works
I'd try to make sure to provoke them with enough not-so-standard market cases to let them develop intuitions about where which intervention might be required/justified, for which reasons (or from which points of view), and where not. I teach that subject, and deplore how our teaching tends to remain on the surface of things, without the opportunity to really sharpen students' minds on the slightly more intricate econ policy questions, where too shallow a demand-supply thinking just isn't much better than no econ at all.
Assuming you're the first to explicitly point out that lemon-market type of feature of 'random social interaction': kudos, I think it's a great way to express certain extremely common dynamics.
Anecdote from my country, where people ride trains all the time, fitting your description, although it usually takes a weird kind of extra 'excuse': it would often feel odd to randomly talk to your seat neighbor, but the slightest excuse (a sudden bump in the ride, a malfunctioning announcement speaker, a grumpy ticket collector, one weird word from a random person in the wagon... any smallest thing) will extremely frequently get the silent ones talking, and then easily for hours if the ride lasts that long. And I think some sort of social lemon-market dynamics may indeed help explain it.
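Purely for illustration, here's a tiny toy simulation of the adverse-selection mechanism I have in mind - entirely my own model and made-up numbers, not anything from the original post. The assumption: tiresome passengers are far more willing to start talking out of the blue, while a shared excuse makes pleasant and tiresome passengers equally likely to speak up.

```python
import random

random.seed(0)
P_PLEASANT = 0.8                                          # most fellow passengers are fine to talk to
INIT_UNPROMPTED   = {"pleasant": 0.02, "tiresome": 0.30}  # who initiates with no excuse
INIT_AFTER_EXCUSE = {"pleasant": 0.40, "tiresome": 0.40}  # who initiates once there's an excuse

def share_pleasant_among_initiators(init_prob, n=100_000):
    """Simulate n passengers and return the share of conversation starters who are pleasant."""
    counts = {"pleasant": 0, "tiresome": 0}
    for _ in range(n):
        passenger = "pleasant" if random.random() < P_PLEASANT else "tiresome"
        if random.random() < init_prob[passenger]:
            counts[passenger] += 1
    total = sum(counts.values())
    return counts["pleasant"] / total if total else float("nan")

print("pleasant share among unprompted starters :",
      round(share_pleasant_among_initiators(INIT_UNPROMPTED), 2))
print("pleasant share among post-excuse starters:",
      round(share_pleasant_among_initiators(INIT_AFTER_EXCUSE), 2))
```

With these invented probabilities, only about a fifth of unprompted conversation starters are pleasant, versus roughly 80% once an excuse has leveled the field - so quietly waiting for an excuse is exactly what you'd expect.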
Funny is not the only adjective this anecdote deserves. Thanks for sharing this great wisdom/reminder!
I would not search for smart ways to detect it. Instead, look at it from the outside - and from there I don't see why we should hold much hope for it to be detectable:
Imagine you create your own simulation. Imagine you are much more powerful than you actually are, so you can make the simulation as complex as you want. Imagine that in your coolest run, your little simulatees start wondering: how could we trick Suzie into having her simulation reveal the reset?!
I think you'll agree their question will be futile; once you reset your simulation, surely they won't be able to detect it: while setting up the simulation might be complex, reinitializing it at a given state, leaving no traces within the simulated system, seems like the simplest task of all.
And so, I'd argue, we might well expect the same to hold for our (potential) simulation, however smart your reset-detection design might be.
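A minimal sketch of what I mean, assuming a toy deterministic 'world' of my own invention: because the reset rewrites every internal variable - including whatever plays the role of the inhabitants' memories - the post-reset trajectory is bit-for-bit identical to one in which no reset ever happened.

```python
import copy

def step(state):
    # arbitrary deterministic dynamics
    state["x"] = (state["x"] * 1103515245 + 12345) % (2**31)
    state["memories"].append(state["x"] % 100)  # inhabitants' records are part of the state too

def run(state, n):
    for _ in range(n):
        step(state)
    return state

world = {"x": 42, "memories": []}
run(world, 10)
snapshot = copy.deepcopy(world)       # host-side backup, invisible from inside the world

run(world, 5)                         # the stretch the host later decides to undo
world = copy.deepcopy(snapshot)       # the reset: every internal variable is restored
run(world, 5)                         # history after the reset

never_reset = run(copy.deepcopy(snapshot), 5)   # what "no reset ever happened" looks like
assert world == never_reset                     # indistinguishable from the inside
print("Reset left no internally accessible trace.")
```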
My impression is that what you propose to supersede utilitarianism with is rather naturally already encompassed by utilitarianism. For example, when you write
If someone gains utility from eating a candy bar, but also gains utility from not being fat, raw utilitarianism is stuck. From a desire standpoint, we can see that the optimal outcome is to fulfill both desires simultaneously, which opens up a large frontier of possible solutions.
I disagree that typical conceptions of utilitarianism - not strawmen thereof - are in any way "stuck" here at all: "Of course," a classical utilitarian might well tell you, "we'll have to trade off between the candy bar and the fatness it brings; that is exactly what utilitarianism is about." And you can extend that to the other nuances you bring up: whatever, ultimately, we desire or prefer or what-have-you most - as classical utilitarians we'd aim at exactly that, quasi by definition.
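For concreteness, a trivial sketch (my own, with entirely made-up numbers) of how a classical utilitarian would just fold both desires into one utility function and trade them off:

```python
def utility(candy_bars_per_week):
    pleasure = 3 * candy_bars_per_week ** 0.5   # enjoyment of candy, diminishing returns
    fatness_cost = 0.5 * candy_bars_per_week    # disutility of getting fat, roughly linear
    return pleasure - fatness_cost

best = max(range(0, 21), key=utility)           # crude search over 0..20 bars per week
print(best, round(utility(best), 2))            # -> 9 bars, utility 4.5 with these toy numbers
```

Nothing is 'stuck' here: the whole candy-vs-fatness frontier is implicitly searched, and the trade-off simply falls out of the maximization.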
Thanks for the link to the interesting article!
If I understand you correctly, what you describe does indeed seem a bit atypical, or at least not shared by everyone.
Fwiw, pure speculation: maybe you learned a great deal from working on/examining advanced, already existing code. So you learned to understand advanced concepts etc. But you mostly learned to code on the basis of already existing code/solutions.
Often, instead, when we systematically learn to code, we learn bit by bit from the simplest examples, and we don't just learn to understand them; rather - a bit like when starting to learn basic math - we are constantly challenged to put the next element we've learned directly into practice, on our own. This ensures we master all that knowledge in a highly active way, rather than only passively.
This seems to suggest there's a mechanistically simple yet potentially tedious path for you to learn to create solutions from scratch more actively: force yourself to start with the simplest things to code actively, from scratch, without looking at a solution first. Just take a simple problem that 'needs a solution' and implement it. Gradually increase the complexity. I guess it might require a lot of such training. No clue whether there's anything better.
The irony in Wenar's piece: in all he does, he just outs himself as... an EA himself :-). He clearly thinks it's important to think through net impact and to do the things that have great overall impact. Sad that he caricatures the existing EA ecosystem in such an uncompelling and disrespectful way.
Fully agree with your take of him being "absurdly" unconvincing here. I guess nothing is too blatant to be printed in this world, as long as the writer makes bold & enraging enough claims on a popular scapegoat and has a Prof title from a famous uni.
I can only imagine (or hope) that the traction the article got - which you mention, though I haven't seen it myself - is mainly limited to the usual suspects for whom EA, quasi by definition, is simply all stupid anyway, if not outright evil.
Unconvinced. Bottom line seems to be an equation of Personal Care with Moral Worth.
But I don't see how the text really supports that: just because we feel more attached to entities we interact with, that doesn't inherently elevate their sentience, i.e. their objective moral worth.
Example: Our lesser emotional attachment or physical distance to chickens in factory farms does not diminish their sentience or moral worth, I'd think. Same for (future) AIs too.
At best I could see this equation more or less working out in a perfectly illusionist reality, where there is no objective moral relevance. But then I'd rather not invoke the concept of moral relevance at all - instead we'd have to stick with mere subjective care as the only thing there might be.