Comments

Comment by JacobW38 (JacobW) on ~100 Interesting Questions · 2023-04-03T07:20:29.664Z · LW · GW

Yes, I am a developing empirical researcher of metaphysical phenomena. My primary item of study is past-life memory cases of young children, because I think this line of research is both the strongest evidentially (hard verifications of such claims, to the satisfaction of any impartial arbiter, are quite routine), as well as the most practical for longtermist world-optimizing purposes (it quickly becomes obvious we're literally studying people who've successfully overcome death). I don't want to undercut the fact that scientific metaphysics is a much larger field than just one set of data, but elsewhere, you get into phenomena that are much harder to verify and really only make sense in the context of the ones that are readily demonstrable.

I think the most unorthodox view I hold about death is that we can rise above it without resorting to biological immortality (which I'd actually argue might be counterproductive), but having seen the things I've seen, it's not a far leap. Some of the best documented cases really put the empowerment potential on very glaring display; an attitude of near complete nonchalance toward death is not terribly infrequent among the elite ones. And these are, like, 4-year-olds we're talking about. Who have absolutely no business being such badasses unless they're telling the truth about their feats, which can usually be readily verified by a thorough investigation. Not all are quite so unflappable, naturally, but being able to recall and explain how they died, often in some violent manner, while keeping a straight face is a fairly standard characteristic of these guys.

To summarize the transhumanist application I'm getting at, I think that if you took the best child reincarnation case subject on record and gave everyone living currently and in the future their power, we'd already have an almost perfect world. And, like, we hardly know anything about this yet. Future users ought to become far more proficient than modern ones.

Comment by JacobW38 (JacobW) on ~100 Interesting Questions · 2023-03-31T08:35:38.528Z · LW · GW

I'm a hardcore consciousness and metaphysics nerd, so some of your questions fall within my epistemic wheelhouse. Others, I am simply interested in as you are, and can only respond with opinion or conjecture. I will take a stab at a selection of them below:

4: "Easy" is up in the air, but one of my favorite instrumental practices is to identify lines of preprogrammed "code" in my cognition that do me absolutely no good (grief, for instance), and simply hack into them to make them execute different emotional and behavioral outputs. I think the best way to stay happy is just to manually edit out negative thought tendencies, and having some intellectual knowledge that none of it's a big deal anyways always helps.

8: I would define it as "existing in its minimally reduced, indivisible state". For instance, an electron is a fundamental particle, but a proton is not because it's composed of quarks.

12 (and 9): I think you're on the best track with B. Consciousness is clearly individuated. Is it fundamental? That's a multifaceted issue. It's pretty clear to me that it can be reduced to something that is fundamental. At minimum, the state of being a "reference point" for external reality is something that really cannot be gotten beneath. On the other hand, a lot of what we think of as consciousness and experience is actually information: thought, sensation, memory, identity, etc. I couldn't tell you what of any of this is irreducible - I suspect the capacities for at least some of them are. Your chosen stance here seems to approximate a clean-cut interactionism, which is at least a serviceable proxy.

13: I think this is the wrong question. We don't know anything yet about how physics at the lowest level ultimately intersects and possibly unifies with the "metaphysics" of consciousness. At our current state of progress, no matter what theory of consciousness proves accurate, it will inevitably lean on some as-yet-undiscovered principle of physics that we in 2023 would find incomprehensible.

16: This will be controversial here, but is a settled issue in my field: You'd be looking for phenomenological evidence that AIs can participate in metaphysics the same ways conscious entities can. The easiest proof to the affirmative would be if they persist in a discarnate state after they "die". I sure don't expect it, but I'd be glad to be wrong.

19: Along the general lines of the simulation hypothesis, I think a more likely idea is that an ultra-advanced civilization could just create a genuine microcosm where life evolved naturally, since the simulation hypothesis itself carries implications about computers and consciousness that, as I said above, I do not expect to hold up. Not to say it's likely.

20: Total speculation, of course - my personal pet hypothesis is that all civilizations discover everything they need to know about universal metaphysics way before they develop interstellar travel (we're firmly on that track), and at some point just decide they're tired of living in bodies. I personally hope we do not take such an easy way out.

21: I can buy into a sort of quantum-informed anthropic principle. Observers seem to be necessary to hold non-observer reality in a stable state. So that may in fact be the universe's most basic dichotomy.

33: In my experience, the most important thing is to love what you're learning about. Optimal learning is when you learn so quickly that you perpetually can't wait to learn the next thing. I don't think there's any way to make "studying just to pass the test" effective long-term. You'll just forget it all afterwards. You can probably imagine my thoughts on the western educational system.

43-44: Speaking to one's intellectual comfort zone, Litany of Tarski-type affirmations are very effective at that. The benefit, of course, is better epistemics due to shedding ill-conceived discomfort with unfamiliar ideas.

45: I've actually never experienced this, and was shocked to learn it's a thing in college. Science will typically blame neurochemistry, but in normal cognition, thought is the prime mover there. So all I can think of is an associative mechanism whereby people relate the presence of a certain chemical with a certain mood, because the emotion had previously caused the chemical release. When transmitters are released abnormally (i.e. not by willed thought), these associations activate. Again, never happened to me.

56: I'd consider myself mostly aligned with both, so I'd personally say yes. I'm also a diehard metaphysics nerd who's fully aware I'm not going anywhere, so I'd better fricking prioritize the far future because there's a lot of it waiting for me. For someone who's not that, I'd actually say no, because it's much more rational to care most about the period of time you get to live in.

58: As someone who's also constantly scheming about things indefinitely far in the future, I feel you on this one. I find that building and maintaining an extreme amount of confidence in those matters enriches my experience of the present.

71-73: For me, studying empirical metaphysics has fulfilled the first two (rejecting materialism makes anyone happier, and there's no limit of possible discovery) and eventually will the third (it'll rise to prominence in my lifetime). I can't say I wouldn't recommend.

78: Same as 71-73, for an obvious example. I can definitely set you in the right direction.

81: Following the scientific method, a hypothesis must be formed as an attempt to explain an observation. It must then be testable, and present a means of supporting or rejecting it by the results of the test. I've certainly dealt with theories that seem equally well supported by evidence but can't both be true, but I have no reason to think better science couldn't tease them apart.

89: Definitely space travel, AI, VR, aging reversal, genetic engineering. I really think metaphysical science will outstrip all of the above in utility, though...

96: ...by making this cease to be relevant.

98: Of course there are, because there's so much we know nothing about when it comes to what the heck we even are. I'd almost argue we have very little idea how to truly have the biggest positive impact on the future we can at this stage. We'll figure it out.

Comment by JacobW38 (JacobW) on [Prediction] Humanity will survive the next hundred years · 2023-02-26T11:09:32.617Z · LW · GW

If you go back even further we’re the descendants of single celled organisms that absolutely don’t have experience.

My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate acting on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology.

I find the notion of "half consciousness" irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to either be 1 or 0.

Comment by JacobW38 (JacobW) on [Prediction] Humanity will survive the next hundred years · 2023-02-26T09:59:27.832Z · LW · GW

Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition I understand for that term is "something that is externally indistinguishable from an entity that has experience, but internally has no experience". While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don't have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue at what other point one would suppose it entered the picture). So either it should fit the definition or I don't have the same definition as you.

Your statement about emotions, though, makes perfect sense from an outside view. For all practical purposes, we will have to navigate those emotions when dealing with those models exactly as we would with a person. So we might as well consider them equally legitimate; actually, it'd probably be a very poor idea not to, given the power these things will wield in the future. I wouldn't want to be basilisked because I hurt Sydney's feelings.

Comment by JacobW38 (JacobW) on Another Way to Be Okay · 2023-02-21T07:11:04.612Z · LW · GW

I spoke briefly on acceptance in my comment to the other essay, and I think I agree more with how that one conceptualized it. Mostly, I disagree that acceptance entails grief, or that it has to be hard or complicated. At the very least, that's not a particularly radical form of acceptance. My view on grief is largely that it is an avoidable problem we put ourselves through for lack of radical acceptance. Acceptance is one move: you say all's well and you move on. With intensive pre-invested effort, this can be done for anything, up to and including whatever doom du jour is on the menu; just be careful not to become so accepting that you just let whatever happen and never care to take any action. Otherwise, I can't find any reason not to recommend it. To reiterate from my last comment, I'm not particularly subscribed to any specific belief in inevitable doom, but what I can say is that I approach the real, if indeterminately likely, prospect of such an event with a grand "whatever", and live knowing that it won't break my resolve if it happens or not - just not to the point that I wouldn't try to stop it if given the chance, of course.

Comment by JacobW38 (JacobW) on A Way To Be Okay · 2023-02-21T06:36:01.384Z · LW · GW

A very necessary post in a place like here, in times like these; thank you very much for these words. A couple disclaimers to my reply: I'm cockily unafraid of death in personal terms, and I'm not fully bought into the probable AI disaster narrative, although far be it from me to claim to have enough knowledge to form an educated opinion; it's really a field I follow with an interested layman's eye. But I'm not exactly one of those struggling at the moment, and I'd even say that the recent developments with ChatGPT, Bing, and whatever follows them excite me more than they intimidate me.

All that said, I do make a great effort of keeping myself permanently ahead of the happiness treadmill, and I largely agree with the way Duncan has expressed how to best go about it. If anything, I'd say it can be stated even more generally; in my book, it's possible to remain happy even knowing you could have chosen to attempt to do something to stop the oncoming apocalypse, but chose differently. It's just about total acceptance; not to say one should possess such impenetrable equanimity that they don't even care to try to prevent such outcomes, but rather understanding that all of our aversive reactions are just evolved adaptations that don't signal any actual significance. In bare reality, what happens happens, and the things we naturally fear and loathe are just... fine. I take to heart the words of one of my favorite characters in one of the greatest games ever made... Magus from Chrono Trigger:

"If history is to change, let it change!

If this world is to be destroyed, so be it!

If my destiny is to die, I must simply laugh!"

The final line delivers the impact. Have joy for reasons that death can't take from you, such that you can stare it dead in the eye and tell it it can never dream of breaking you, and the psychological impulse to withdraw from it comes to feel superfluous. That's how I ensure to always be okay under whatever uncertainty. I imagine I would find this harder if I actually felt that the fall of humanity was inevitable, but take it for what it's worth.

Comment by JacobW38 (JacobW) on Empowerment is (almost) All We Need · 2022-11-12T06:29:34.795Z · LW · GW

I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function, and seems to be an integral component to any formulation of fun theory. In your words, "to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes" describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don't think I could've possibly put it any better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an end in itself, because the state of being empowered itself is just plain fun and relieving and great all around! Not only does this sort of empowerment provide an unlimited potential to be parlayed into enjoyment of all sorts, it lifts the everyday worries of modern life off our shoulders completely, if taken as far as it can be. I could effectively sum up the main reason I'm a transhumanist as seeking empowerment, for myself and for humanity as a whole.

I would add one caveat, however, for me personally: the best kind of empowerment is self-empowerment. Power earned through conquest is infinitely sweeter than power that's just given to you. If my ultimate goals of transcending mortality and such were just low-hanging fruit, I can't say I'd be nearly as obsessed with them in particular as I am. To analogize this to something like a video game, it feels way better to barely scrape out a win under some insane challenge condition that wasn't even supposed to be possible, than to rip through everything effortlessly by taking the free noob powerup that makes you invincible. I don't know how broadly this sentiment generalizes exactly, but I certainly haven't found it to be unpopular. None of that is to say I'm opposed to global empowerment by means of AI or whatever else, but there must always be something left for us to individually strive for. If that is lost, there isn't much difference left between life and death.

Comment by JacobW38 (JacobW) on I Converted Book I of The Sequences Into A Zoomer-Readable Format · 2022-11-12T05:56:50.380Z · LW · GW

I highly recommend following Rational Animations on YouTube for this sort of general purpose. I'd describe their format as "LW meets Kurzgesagt", the latter of which I already found highly engaging. They don't post new videos that often, but their stuff is excellent, even more so recently, and definitely triggers my dopamine circuits in a way that rationality content generally struggles to satisfy. Imo, it's perfect introductory material for anyone new on LW to get familiar with its ideology in a way that makes learning easy and fun.

(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)

Comment by JacobW38 (JacobW) on divine carrot · 2022-11-11T08:38:24.966Z · LW · GW

You've described habituation, and yes, it does cut both ways. You also speak of "pulling the unusual into ordinary experience", as though that is undesirable, but contrarily, I find exactly that a central motivation to me. When I come upon things that on first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don't think I know how to see anything as "bigger than myself" in a way that doesn't ring simply as a challenge to rise above whatever it is.

Comment by JacobW38 (JacobW) on Kelly Bet on Everything · 2022-11-10T08:42:26.156Z · LW · GW

Manipulating one's own utility functions is supposed to be hard? That would be news to me. I've never found it problematic, once I've either learned new information that led me to update it, or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception to this would be things one easily attaches to emotionally, such as pets, to which I've learned to simply not allow myself to become so attached. Otherwise, could you please explain why you make the claim that such traits are not readily editable in a more general capacity?

Comment by JacobW38 (JacobW) on Age changes what you care about · 2022-11-10T06:18:50.799Z · LW · GW

Thanks for asking. I'll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former my specialty as a researcher; as is probably clear from other replies, I'm satisfied with the raw evidence).

As for best published papers for introductory purposes, here you can find one of my personal all-time favorites. https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Deceased-Haraldsson-Abu-Izzeddin/4fb93e1dfb2e353a5f6e8b030cede31064b2536e

Comment by JacobW38 (JacobW) on Age changes what you care about · 2022-11-10T06:03:59.591Z · LW · GW

Apologies for the absence; a combination of being busy and annoyed with the downvotes, but I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don't think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I'd wonder what you were thinking. But good information, including primary sources, is openly accessible, and it's something that I encourage those with the interest to take a deep dive into, for sure. Once you go far enough in, in my experience, there's no getting out, unless perhaps you're way more demanding of utter perfection in scientific analysis than I am, and I'm generally seen as one of the most demanding people currently in the PL-memory field, to the point of being a bit of a curmudgeon (not to mention an open sympathizer with skeptics like CSICOP, which is also deeply unpopular). But it takes a commitment to really wanting to know one way or the other. I can't decide for anyone whether or not to have that.

I certainly could summarize the findings and takeaways of afterlife evidence and past-life memory investigations for a broad audience, but I haven't found any reason to assume that it wouldn't just be downvoted. That's not why I came here anyways; I joined to improve my own methods and practice. I feel that if I were interested in doing anything like proselytizing, I would have to have an awfully low opinion of the ability of the evidence to speak for itself, and I don't at all. But you tell me if I'm taking the right approach here, or if an ELI5 on the matter would be appropriate and/or desired. I'd not hesitate to provide such content if invited.

Comment by JacobW38 (JacobW) on Age changes what you care about · 2022-11-10T05:27:36.795Z · LW · GW

Based on evidence I've been presented with to this point - I'd say high enough to confidently bet every dollar I'll ever earn on it. Easily >99% that it'll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I'm basically alone in my field in actually being in favor of the latter, however, so I have no interest in declining to play the long game with it.

Comment by JacobW38 (JacobW) on FTX will probably be sold at a steep discount. What we know and some forecasts on what will happen next · 2022-11-10T04:58:20.983Z · LW · GW

Been staying hard away from crypto all year, with the general trend of about one seismic project failure every 3 months, and this might be the true Lehman moment on top of the shitcoin sundae. I'm making no assumptions about intent or possible criminal actions until more info is revealed, but it certainly looks like SBF mismanaged a lot of other people's money and was overconfident in his own, which was largely pegged to illiquid altcoins and FTT. The most shocking thing to me is how CZ took a look at their balance sheets for all of like 3 hours after announcing intent to acquire, and just noped right outta there. Clearly this situation is more FUBAR than we ever imagined, and it all feels like SBF had to have known all along that his castle was built on a foundation of sand, at the very least. By all appearances, for him not to have seen that would require an immense amount of living in denial.

Comment by JacobW38 (JacobW) on The Importance-Avoidance Effect · 2022-10-18T05:16:28.169Z · LW · GW

That being said, I could see how this feeling would come about if the value/importance in question is being imposed on you by others, rather than being the value you truly assign to the project. In that case, such a burden can weigh heavily and manifest aversively. But avoiding something you actually assign said value to just seems like a basic error in utility math?

Comment by JacobW38 (JacobW) on Age changes what you care about · 2022-10-18T05:05:21.851Z · LW · GW

I have a taboo on the word "believe", but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.

Comment by JacobW38 (JacobW) on Age changes what you care about · 2022-10-17T06:11:22.388Z · LW · GW

Honestly, even from a purely selfish standpoint, I'd be much more concerned about a plausible extinction scenario than just dying. Figuring out what to do when I'm dead is pretty much my life's work, and if I'm being completely honest and brazenly flouting convention, the stuff I've learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death wasn't inevitable, I'd still want to see it for myself at some point. I definitely wouldn't choose to artificially prolong my lifespan, given the opportunity. So personally, death and I are on pretty amicable terms. On the other hand, in the case of an extinction event... I don't even know what there would be left for me to do at that point. It's just the kind of thing that, as I imagine it, drains all the hope and optimism I had out of me, to the point where even picking up the pieces of whatever remains feels like a monumental task. So my takeaway would be that anyone, no matter their circumstances, who really feels that AI or anything else poses such a threat should absolutely feel no inhibition toward working to prevent such an outcome. But on an individual basis, I think it would pay dividends for all of us to be generally death-positive, if perhaps not as unreservedly so as I am.

Comment by JacobW38 (JacobW) on I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too · 2022-10-16T07:24:20.189Z · LW · GW

I like the thought behind this. You've hit on something I think is important for being productive: if thinking about the alternative makes you want to punch through a wall, that's great, and you should try to make yourself feel that way. I do a similar thing, but more toward general goal-accomplishment; if I have an objective in sight that I'm heavily attracted to, I identify every possible obstacle to the end (essentially murphyjitsu'ing), and then I cultivate a driving, vengeful rage toward each specific obstacle, on top of what motivation I already had toward the end goal. It works reasonably well for most things, but is by far the most effective on pure internal tasks like editing out cognitive biases or undesired beliefs, because raw motivation is just a much more absolute determinant of success in that domain. Learning is a mostly mental task, so this seems like a very strong application of the general principle to me.

On your question of how to respond to pointless suffering, though, I don't think your response would work for me at all. I'd just snap back, "well, what does it matter at that point?!". I think I actually prefer a Buddhist-ish angle on the issue, directly calling out the pointlessness of suffering per se (I'm nonreligious and agnostic myself, for the record). To paraphrase a quote I got from a friend of mine, "one who can accept anything never suffers". Pain is unavoidable, but perspective enables you to remain happy while in pain, by keeping whatever is not lost at the front of your mind. In your hypothetical scenario, I think I'd frame it something like, "Have your reasons for joy be ones that can never be taken from you." Does that ring right?

Comment by JacobW38 (JacobW) on Calibration of a thousand predictions · 2022-10-13T06:25:58.913Z · LW · GW

It appears what you have is free won’t!

For the own-behavior predictions, could you put together a chart with calibration accuracy on the Y axis, and time elapsed between the prediction and the final decision (in buckets) on the X axis? I wonder whether the predictions became less-calibrated the farther into the future you tried to predict, since a broader time gap would result in more opportunity for your intentions to change.
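To make the suggestion concrete, here's a minimal sketch of the bucketed analysis I have in mind, assuming (purely hypothetically) that each prediction is recorded as a (stated probability, days until resolution, outcome) tuple; the bucket edges and record format are my own illustrative choices, not anything from the original dataset:

```python
from collections import defaultdict

def calibration_by_horizon(predictions, bucket_edges=(7, 30, 90, 365)):
    """Group (prob, days_ahead, outcome) records into time-gap buckets
    and report mean predicted probability vs. observed frequency."""
    buckets = defaultdict(list)
    for prob, days_ahead, outcome in predictions:
        # Assign each record to the first bucket edge its gap fits under;
        # anything beyond the last edge goes into an overflow bucket.
        label = next((f"<= {e}d" for e in bucket_edges if days_ahead <= e),
                     f"> {bucket_edges[-1]}d")
        buckets[label].append((prob, outcome))
    report = {}
    for label, recs in buckets.items():
        mean_prob = sum(p for p, _ in recs) / len(recs)
        observed = sum(o for _, o in recs) / len(recs)
        report[label] = (round(mean_prob, 3), round(observed, 3), len(recs))
    return report

# Toy example: three short-horizon and two long-horizon predictions.
demo = [(0.9, 3, 1), (0.8, 5, 1), (0.7, 6, 0), (0.6, 100, 0), (0.9, 400, 1)]
print(calibration_by_horizon(demo))
```

Plotting mean predicted probability against observed frequency per bucket (X axis: time-gap bucket) would then show directly whether calibration degrades with horizon.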

Comment by JacobW38 (JacobW) on Fake qualities of mind · 2022-10-08T06:01:25.075Z · LW · GW

This is way too interesting not to have comments!

First, I think this bears on the makeup of one's utility function. If your UF contains absolutes, infinite value judgments, then in my opinion, it is impossible not to be truly motivated toward them. No pushing is ever required; at least, it never feels like pushing. Obstacles just manifest to the mind in the form of fun challenges that only amplify the engagement, because you already know you have the will to win. If your UF does not include absolutes, or you step down to the levels that are finite (for the record, I see no contradiction in a UF with one infinite and arbitrarily many finites), that is where this sort of akrasia emerges, because motivation naturally flickers in and out between those various finite objects at different times.

Interestingly, this is almost the opposite of the typical form of akrasia, not doing something against your better judgment. As with that, though, noticing it when it happens, in my opinion, is the first step to making it less akratic. I've absolutely felt the difference, at various times in my life, between actually having the thing and trying to "do" it for all of Kaj's examples (motivation, inspiration, empathy, and so on). The best solution I've personally found is, when possible, to simply wait for the real quality to return, and it always does. For example, when working on private writing projects, I write when a jolt of inspiration strikes, then wait for the next brilliant idea and not try to force it; if I do, I always produce inferior quality writing. When waiting isn't practical, such as academic projects with a deadline, I don't have such an easy path to always put in my best-quality work. This is one major reason why I think that being highly gifted doesn't necessarily translate to exceptional academic performance; the education system isn't really adapted to how at least some great minds operate.

Comment by JacobW38 (JacobW) on Truth-Seeking: Reason vs. Intuition · 2022-10-08T04:49:55.639Z · LW · GW

I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there's a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one could come to just by feeling it out; in fact, it seems to me like most non-rationalists would find this highly unintuitive. On the other hand, most non-rationalists do assume free will for much less solid reasons. So how does your formulation account for a crossover or "full circle" effect like this?

On a related note, I'm curious whether LWers generally believe that rationality can be extended to arbitrary levels of optimization by pure intent, or that there are cases when one cannot be perfectly rational given the available information, no matter how much effort is given? I place myself in the former camp.

Comment by JacobW38 (JacobW) on The Importance-Avoidance Effect · 2022-10-08T04:13:10.366Z · LW · GW

I don't think I've ever experienced this. I'd actually say I could be described by the blue graph. The more I really, really care about something, the more I want to do absolutely nothing but it, especially if I care about it for bigger reasons than, say, because it's a lot of fun at this moment. Sometimes, there comes a point where continuing to improve said objective feels like it's bringing diminishing returns, so I call the project sufficiently complete to my liking. Other times, it never stops feeling worth the effort, or it is simply too important not to perpetually, asymptotically optimize the mission. So I keep moving forward, forever. I know for sure that the work I consider the most important thing I'll ever do is also something I'll never stop obsessing over for a minute. And it doesn't become onerous; it feels awesome to have set oneself on a trajectory demanding of such fixation. So I'm actually a little puzzled what the upshot is supposed to be here.

Comment by JacobW38 (JacobW) on Truth seeking is motivated cognition · 2022-10-08T02:18:58.126Z · LW · GW

I like this proposal. In light of the issues raised in this post, it's important for people to come into the custom of explaining their own criteria for "truth" instead of leaving what they are talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I am interested in arriving at. Basically, we shouldn't be talking about the world as though we have actual means of knowing things about it with probability 1.

Comment by JacobW38 (JacobW) on Truth seeking is motivated cognition · 2022-10-08T02:09:58.557Z · LW · GW

Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as "that which pays the most rent in anticipated experience"; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I'm constantly looking really hard at the evidence I examine and asking myself, am I convinced of this for the right reasons? What would have to happen to unconvince me? How can I take a detached stance toward this belief, if ever there comes a time when I may no longer want it? So in what way my truth-seeking could be called motivated, I aim to constrain it to at least being solely motivated by adherence to the scientific method, which is something I am unashamed to simply acknowledge.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-07T22:22:28.052Z · LW · GW

Unfortunate to say I haven't kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I have not read the primary source for every case described. But those sources are referenced at the bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.

https://psi-encyclopedia.spr.ac.uk/articles/reincarnation-cases-records-made-verifications

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-07T19:39:19.915Z · LW · GW

Disclaimer, I'm not someone who personally investigates cases. What you've raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet - Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documented, and corroborated by a researcher, before any attempt at verification has been made are considered some of the best. In that regard, the internet has actually helped researchers get informed of cases earlier, when subjects are typically still giving a lot of information and no independent searches have been conducted. As for problems specifically presented by online communication, I would say that whenever a potentially important case comes to their attention, researchers try to take the process offline as soon as the situation allows.

Comment by JacobW38 (JacobW) on What does it mean for an AGI to be 'safe'? · 2022-10-07T08:18:37.963Z · LW · GW

On that note, the main way I could envision AI being really destructive is getting access to a government's nuclear arsenal. Otherwise, it's extremely resourceful but still trapped in an electronic medium; the most it could do if it really wanted to cause damage is destroy the power grid (which would destroy it too).

Comment by JacobW38 (JacobW) on What does it mean for an AGI to be 'safe'? · 2022-10-07T06:49:02.078Z · LW · GW

Feels like Y2K: Electric Boogaloo to me. In any case, if a major catastrophe did come of the first attempt to release an AGI, I think the global response would be to shut it all down, taboo the entire subject, and never let it be raised as a possibility again.

Comment by JacobW38 (JacobW) on What does it mean for an AGI to be 'safe'? · 2022-10-07T05:39:20.005Z · LW · GW

Are you telling me you'd be okay with releasing an AI that has a 25% chance of killing over a billion people, and a 50% chance of at least killing hundreds of millions? I have to be missing the point here, because this post isn't doing anything to convince me that AI researchers aren't Stalin on steroids.

Or are you saying that if one can get to that point, it's much easier from there to get to the point of having an AI that will cause very few fatalities and is actually fit for practical use?

Comment by JacobW38 (JacobW) on So, geez there's a lot of AI content these days · 2022-10-07T02:17:31.772Z · LW · GW

As a new member and hardcore rationalist/mental optimizer who knows little about AI, I've certainly noticed the same thing in the couple weeks I've been around. The most I'd say of it is that it's a little tougher to find the content I'm really looking for, but it's not like the site has lost its way in terms of what is still being posted. It doesn't make me feel less welcome in the community, the site just seems slightly unfocused.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-07T01:52:13.927Z · LW · GW

That's definitely the proper naïve reaction to assume in my opinion. I would say with extremely high confidence that this is one of those things that takes dozens of hours of reading to overcome one's priors toward, if your priors are well-defined. It took every bit of that for me. The reason for this is that there's always a solid-sounding objection to any one case - it takes knowing tons of them by heart to see how the common challenges fail to hold up. So, in my experience and that of many I know, the degree to which one is inclined to buy into it correlates directly with how determined one is to get to the bottom of it. Otherwise, I have to agree with you that there's no really compelling reason to be convinced based on what a casual search will show you. That, as well, seems to be the experience of most. Those who really care tend to get it, but it is inherently time-and-effort prohibitive. I really don't feel like asking anyone to undertake that unless they're heavily motivated.

Stevenson's greatest flaw as a researcher was that he didn't look terribly hard for American and otherwise Western cases, and the few he stumbled into were often mediocre at best. Therefore, he was repeatedly subjected to justified criticism of the nature "you can't isolate your data from the cultural environment it develops in". However, this issue has been entirely dissolved by successors who have rectified his error and found that such cases are just as common in non-believer Western families as anywhere, including arguably stronger ones than anything he found. This is definitely the most important data-collection development in the field during the 21st century.

I must say I'm not at all interested in belief systems as an object of study, though - my goal is more or less to eradicate them. They're nothing but epistemic pollution.

Comment by JacobW38 (JacobW) on The horror of what must, yet cannot, be true · 2022-10-07T00:51:53.113Z · LW · GW

I can't say I understand what you think something of that sort would actually be. Certainly none of your examples in the OP qualify. Nothing exists which violates the laws of nature, because if it exists, it must follow the laws of nature. Updating our knowledge of the laws of nature is a different matter, but it's not something that inspires horror.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T08:17:50.944Z · LW · GW

There is a case on record that involved a recalled phone number. A password is a completely plausible next step forward.

For a very approachable and modernized take on the subject matter, I'd check out the book Before by Jim Tucker, a current leading researcher.

As a disclaimer, it's perfectly rational and Bayesian to be extremely doubtful of such "modest" proposals at first blush - I was for a good length of time, until I did the depth of investigation that was necessary to form an expert opinion. Don't take my word for things!

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T08:09:31.670Z · LW · GW

One of the best, most approachable overviews of all this I've ever read. I've dabbled in some, but not all of the topics you've raised here, and I certainly know about the difficulties they've all faced in rising to a scientific level of rigor. What I've always said is that parapsychology needs Doctor Strange to become real, and he's not here yet and probably never will be. Otherwise, every attempt at "proof" is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon that has anything close to a Doctor Strange is, well, reincarnation - it's had a good few power players who'd convince anyone mildly sympathetic. And it lacks the above unholy trinity of bad science: lack of verification would mean falsification, and it's passed that with flying colors; the effect sizes and significance get massive quickly, even within individual cases; and the cases keep on coming with exactly the same characteristics. But it certainly needs to do a lot better, and that's why it has to move beyond Stevenson's methodology to start creating its own evidence. So my progressive approach holds that, if the field is to stand on its own merit, it is time to unleash its full capacity and conduct a wholesale destruction of normalcy with it; if such an operation fails, then it has proven too epistemically weak to be worthy of major attention, if it is genuine at all.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T07:47:19.175Z · LW · GW

I assume you mean to say the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn't happen. In fact, we don't see much in the way of multiple-cases at all, but when we do, it's always separate time periods.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T06:39:48.323Z · LW · GW

I haven't read Sheldrake in depth, but I'm familiar with some of his novel concepts. The issue with positing anything so circumstantial being the mechanism for these phenomena is that the cases follow such narrow, exceptionless patterns that would not be so utterly predictable in the event of a non-directed etiology. The subjects never exhibit memories of people who are still alive, there are never two different subjects claiming to have been the same person, one subject never claims memories of two separate people who lived simultaneously... all these things one would expect to be frequent if the information being communicated was essentially random. It's honestly downright bonkers how perfectly the dataset aligns to a more or less "dualist the exact way humans have imagined it since prehistory" cosmology.

Comment by JacobW38 (JacobW) on Wormy the Worm · 2022-10-06T06:24:23.992Z · LW · GW

I commend you sir, because what you've done here is find a critical failure in materialism (forgive me if you're not a materialist!). As a hard dualist, I love planarians because they pose such challenging questions about the formation and transfer of consciousness, and I've done many thought experiments of my own involving them, exactly like this. Obviously, though, my logical progression isn't going to lean into the paradox as this formulation does. Rather, the clear answer is to decide one way or the other at the point of the first split which way Wormy goes. In a width-wise split, the answer seems fairly obvious: Wormy stays with the head end and regenerates, and the tail end regenerates into a new worm. A perfect lengthwise split is much more conceptually puzzling, but it can be solved for all but the final step with the following principle: An individual simply needs a habitable vessel. In a perfect lengthwise split, either side ought to be immediately habitable, but the important point is that both sides are habitable enough that Wormy could go with one or the other. The other becomes a new worm. All we are left with not knowing is which side Wormy ends up in, but there are tons of other things we don't know about planarian psychology also (for example, all of them), so I can't say I'm terribly bothered by leaving myself guessing at that point.

For a more close-to-home analogue than OP gives: Consider a hemispherectomy, which is a very real surgery performed on infants and young children with extreme brain trauma in which an entire cerebral hemisphere is removed. Now, you can probably predict the results, to a point. If the left brain is removed, the child lives with the right brain, which remains in the body, because the right brain remains a habitable vessel while the left is not. If the right brain is removed, the child lives with the left brain, which remains a habitable vessel while the right is not. Easy intuitive conclusions both, but they illustrate the habitability principle to a tee; clearly, neither hemisphere contains the determinant of identity, but rather, something that is using the biological system, and simply needs there to be enough functional material to superimpose onto, regardless of what it is. That something... is you. Now here's the bit that I bet you couldn't predict, unless you've specifically studied the neuroscience of this operation (I'm a BA in neuro): regardless of which hemisphere is removed, the child will likely develop fairly normal cognition! I am shitting thee not, the left brain of a right hemispherectomy survivor will develop typically right-brained functions, and vice versa. Take a second to think about what is going on here. There is a zero percent chance that a genetic adaptation evolved to serve as a fail-safe for losing half your brain in infancy, because that is not a thing that ever happened in the ancestral environment to be selected for.
So we're left with the only logical conclusion being that this is a dualistic interaction system playing Tinkertoys with good old-fashioned childhood neuroplasticity - the mind has native functions that it needs a working brain to represent faithfully, and it has only half of one to work with, but a half with a lot of malleability, so it MacGyvers what's left into a reasonable approximation of the standard 1:1 interface it's meant to be using. Yeah, nature's fricking metal.

The mechanics of hemispherectomy form one of the absolute best indirect arguments for dualism (not to say the direct evidence is lacking), and it's hiding in plain sight right under neuroscientists' noses. And the exact same dynamics are most certainly at play in planarian fission. It's all spectacularly fun to analyze.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T04:33:56.183Z · LW · GW

Good on you for doing your DD. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means that my collective estimate is on the way low side - there's just a lot of unpublished material to try to account for (file drawer effect) - but I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than collecting large amounts of additional data. In general, researchers have pivoted to allowing cases to come to their attention organically (the internet has helped) rather than seeking out as many as possible. On the other hand, Stevenson hardly knew anything about what he was really studying until late in his career (and admitted as much), while his successors have been able to form much more cohesive models of what is going on. I would say that Stevenson is a role model to me as Eliezer is to a great deal of LW, but on the other hand, I find appeal to authority counterproductive, because the fact of the matter is that we today have access to better resources than he had and are able to do stronger and more confident work as a result. He, of course, supplied us with many of those resources, so respect is absolutely in order, but if we don't move forward at a reasonable pace from just gathering the same stuff over and over, the whole endeavor is no better than an NFL quarterback compiling 5000 passing yards for a 4-12 team.

Comment by JacobW38 (JacobW) on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-10-06T02:22:22.914Z · LW · GW

Your replies are extremely informative. So essentially, the AI won't have any ability to directly prevent itself from being shut off, it'll just try not to give anyone an obvious reason to do so until it can make "shutting it off" an insufficient solution. That does indeed complicate the issue heavily. I'm far from informed enough to suggest any advice in response.

The idea of instrumental convergence, that all intelligence will follow certain basic motivations, connects with me strongly. It patterns after convergent evolution in nature, as well as invoking the Turing test; anything that can imitate consciousness must be modeled after it in ways that fundamentally derive from it. A major plank of my own mental refinement practice, in fact, is to reduce my concerns only to those which necessarily concern all possible conscious entities; more or less the essence of transhumanism boiled down into pragmatic stuff. As I recently wrote it down, "the ability to experience, to think, to feel, and to learn, and hence, the wish to persist, to know, to enjoy myself, and to optimize", are the sum of all my ambitions. Some of these, of course, are only operative goals of subjective intelligence, so for an AI, the feeling-good part is right out. As you state, the survival imperative per se is also not a native concept to AI, for the same reason of non-subjectivity. That leaves the native, life-convergent goals of AI as knowledge and optimization, which are exactly the ones your explanations and scenarios invoke. And then there are non-convergent motivations that depend directly on AI's lack of subjectivity to possibly arise, like maximizing paperclips.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-06T00:41:18.980Z · LW · GW

I had a hard time understanding a good bit of what you're trying to say here, but I'll try to address what I think I picked up clearly:

  • While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their "new" families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.

  • On that note, you'll be pleased to hear that your third paragraph is underinformed; there are in fact copious verifications of that nature in the relevant literature. If there weren't, you wouldn't hear me talking about any of this; I'm simply too clingy to my reductionist priors to demand anything less to qualify as real evidence for off-the-wall metaphysics.

  • Whether there are people who reincarnate often is really hard to determine at present; subjects who concretely remember more than one verified previous life are incredibly rare. However, I suppose that is my cue to spill the remaining beans: my entire utility function and a huge basis of my rationality practice is predicated on the object of "reincarnating well", particularly fixating on the matter of psychological continuity, which you allude to directly; this is my personal "paperclips" to be maximized unconditionally. In familiar Eliezer-ese diction, I feel a massive sense that more is possible in this area, and you can bet your last dollar that I have something to protect. Moreover, as a scientist working with ideas many consider impossible, I believe in holding myself to equally impossible standards and making them possible, thereby forcing the theoretical foundations into the acknowledged realm of possibility. In other words, if the phenomena I'm studying are legitimate, I'll be able to do truly outrageous things with them; if I can't, the doubters deserve to claim victory.

Frankly, I'm pleasantly surprised to be seeing concepts like these discussed this charitably on LW; none of this is anything close to Sequence-canon. I certainly don't want to jinx it, but from what I'm seeing so far, I'm extremely impressed with how practically the community applies its ideological commitment to pure Bayesian analysis. If nothing more, I hope to at least make myself one of LW's very best contrarians. But I'm curious now, is there a fairly sizable contingent of academic/evidential dualists in the rationalist community?

Comment by JacobW38 (JacobW) on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-10-05T21:58:51.074Z · LW · GW

That's really interesting - again, not my area of expertise, but this sounds like 101 stuff, so pardon my ignorance. I'm curious what sort of example you'd give of a way you think an AI would learn to stop people from unplugging it - say, administering lethal doses of electric shock to anyone who tries to grab the wire? Does any actual AI in existence today even adopt any sort of self-preservation imperative that'd lead to such behavior, or is that just a foreign concept to it, being an inanimate construct?

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-05T21:31:33.916Z · LW · GW

Restricting the query to true top-level, sweep-me-off-my-feet material, I'd say I've personally read about at least a few dozen that hit me that hard. If we expand to any case that researchers consider "solved" - that is, the deceased person whose life the child remembers has been confidently identified - I would estimate on the order of 2000 to 2500 worldwide, possibly more at this point.

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-05T21:20:29.740Z · LW · GW

No time travel: You are 100% correct. All cases ever recorded involve memories belonging to previously deceased individuals.

Minds need brains: To inhabit matter, they absolutely do. You won't see anyone incarnating into a rock, LMAO.

Everything about biology has an evolutionary explanation: Also 100% correct. Just adding dualism changes nothing about natural selection. And, once again granting the premise, the ability to retain previous-life memories is sure as hell adaptive.

By "broadcast", I assume you mean "speak about previous-life experiences". To that, I'd just say that humans tend to talk about things that matter to them. Therefore, having such memories would naturally lead to them being communicated.

I don't see how the mechanism for this connects to telepathy; that's an entirely different issue, and one I'm not personally convinced of the evidence for, but there are some who are.

Pertaining to the evidence you predict: Communication of past-life memory tends to be concentrated in early childhood, and some subjects lose the memories as they grow up, but others retain them. Memories of death are in fact very prevalent in such cases, because they, naturally, carry extreme emotional salience. To your final prediction, the lives remembered actually involve early and violent deaths far more often than not, but beyond that, the age distribution of what is recalled seems to follow roughly the same relative histogram as normal long-term autobiographical memory does, with things like recency and primacy effects operative.

Thanks for all the excellent questions!

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-05T07:58:12.859Z · LW · GW

To the first question, there's just no way to know at the current stage of research. It's perfectly possible, just as it's possible that there's life in the Andromeda galaxy. To the second, know that taking ideas like this seriously involves entertaining some hard dualism; the brain essentially has to be regarded as analogous to a personal computer (at least I find such a comparison useful). Granting that premise, there's no reason a user couldn't "download" data into it.

Comment by JacobW38 (JacobW) on Meditation course claims 65% enlightenment rate: my review · 2022-10-05T07:32:56.378Z · LW · GW

"Awakened people are out there, and some people do stumble into it with minimal practice, and I wish it were this easy to get to it, but It's probably not."

Having read the preceding descriptions, I find myself wondering if I'm one of those stumblers. If "awakening" is defined by the quote you provided, "suffering less and noticing it more", that's exactly how I feel today compared to myself a few years ago. In casual terms, I'd say I've been blessed with the almighty power of not giving a crap; I know exactly when something should feel bad, but I can't bring myself to let it affect my mood, because I've successfully and singularly focused myself on what truly matters and can never be taken from me. The thing is, I'm not a meditator; although it's been recommended to me plenty in other circles, my feeling has always just been "I don't need it", because I'm very adept at directly editing my cognitive schemata. If I really want to change something about myself, I just do it, and it happens. So I got to this point simply by finding a very compelling reason to put in the effort of changing how I internally relate to external circumstance, and it worked. So I'm curious how you would precisely define "awakening", or as others call it "enlightenment", and how you would advise one to self-diagnose whether or not they've got it?

Comment by JacobW38 (JacobW) on Open & Welcome Thread - Oct 2022 · 2022-10-05T06:19:08.753Z · LW · GW

Personally, I mostly study reincarnation cases; they're the only evidence I really find to meet a scientific standard. Let's just say that without them, I wouldn't be a dualist on any confident epistemic ground. That said, 99 percent of what you'll encounter in a casual search on the matter is absolute nonsense. When skeptics cry "Here be dragons!" to dissuade curious folks from messing around in such territory, I honestly can't say I blame them one bit, given how much dedication it takes to separate the signal from the deafening noise. If you want to dip your feet in the water without getting bit by a shark, I'd stick to looking at cases that, A, only involve very young children, and B, have been very thoroughly investigated and come up categorically verified by all accounts. It will probably take time to encounter something that feels really satisfying, but at the top end, they really do get next-level spectacular. It's incredibly fascinating and I love it to bits, but I'd never call it a pursuit to be taken casually. I actually think a population like LessWrong would probably be much better equipped than most to engage with such subject matter, though, because they're already practiced at the sort of Bayesian reasoning that's necessary to keep an honest assessment of the data, for what it is and nothing more.

Comment by JacobW38 (JacobW) on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-10-05T05:51:31.954Z · LW · GW

This is massive amounts of overthink, and could be actively dangerous. Where are we getting the idea that AIs amount to the equivalent of people? They're programmed machines that do what their developers give them the ability to do. I'd like to think we haven't crossed the event horizon of confusing "passes the Turing test" with "being alive", because that's a horror scenario for me. We have to remember that we're talking about something that differs only in degree from my PC, and I, for one, would just as soon turn it off. Any reluctance to do so when faced with a power we have no other recourse against could, yeah, lead to some very undesirable outcomes.

Comment by JacobW38 (JacobW) on Clarifying Your Principles · 2022-10-05T05:36:25.044Z · LW · GW

The principles I'm alluding to here are purely self-applied, so I don't have to worry about crossing signals with anyone in that regard, but I'll heed your advice in situations where I'm working with aligning my principles with others'. It's also an isolated case where my utility function absolutely necessitates their constant implementation and optimization; generally, I do try to be flexible with ordinary principles that don't have to be quite so unbending.

Comment by JacobW38 (JacobW) on All AGI safety questions welcome (especially basic ones) [Sept 2022] · 2022-10-05T05:13:46.574Z · LW · GW

I think it's important to stress that we're talking about fundamentally different sorts of intelligence - human intelligence is spontaneous, while artificial intelligence is algorithmic. It can only do what's programmed into its capacity, so if the dev teams working on AGI are shortsighted enough to give it an out to being unplugged, that just seems like stark incompetence to me. It also seems like it'd be a really hard feature to include even if one tried; equivalent to, say, giving a human an out to having their blood drained from their body.

Comment by JacobW38 (JacobW) on The AI Countdown Clock · 2022-10-05T04:55:39.660Z · LW · GW

New to the site, just curious: are you that Roko? If so, then I'd like to extend a warm welcome-back to a legend.

Although I'm not deeply informed on the matter, I'd also happen to agree with you 100% here. I really think most AI risk can be heavily curtailed to fully prevented by just making sure it's very easy to terminate the project if it starts causing damage.