Comments

Comment by AlphaAndOmega on [deleted post] 2023-12-18T12:44:23.967Z

There are two different considerations at play here:

  1. Whether global birth rates/total human population will decline.

and

  2. Whether that decline will be a "bad" thing.

In the case of the former:

I think that "business as usual" or naive extrapolation of demographic trends is a bad idea when AGI is imminent. In the case of population, it's less misleading than usual, at least compared to things like GDP. As far as I'm concerned, the majority of the probability mass can be divvied up between "baseline human population booms" and "all humans die".

Why might it boom? (The bust case doesn't need to be restated on LW of all places).

To the extent that humans consider reproduction to be a terminal value, AI will make it significantly cheaper and easier. AI-assisted creches, or reliable robo-nannies that don't let their wards succumb to what are posited as the ills of too much screen time or improper socialization, will mean that much of the unpleasantness of raising a child can be delegated, in much the same manner that a billionaire faces no real constraint on their QOL from having a nigh-arbitrary number of kids when they can afford as many nannies as they please. You hardly need to be a billionaire to achieve that: thanks to income inequality it's within the reach of upper-middle-class Third Worlders, and while more expensive in the West, hardly insurmountable for successful DINKs. Fertility as a function of wealth is currently highest for the poor, dropping precipitously with income, but it rises again among the super-wealthy.

What this arrangement retains are the aspects of raising a child that most people consider universally cherished, be it the warm fuzzy glow of interacting with them, watching them grow and develop, or the more general sense of satisfaction it entails.

If, for some reason, more resource-rich entities like governments desire more humans around, advances like artificial wombs and said creches would allow large population cohorts to be raised without much in the way of the usual drawbacks, as seen today in the dysfunction of orphanages. This counts as a fallback measure in case the average human simply can't be bothered to reproduce themselves.

The kind of abundance/bounded post-scarcity we can expect will mean no significant downsides from the idle desire to have kids.

Not all people succumb to hyper-stimuli replacements, and the ones who don't will have far more resources to indulge their natal instincts.

As for the latter:

Today, and for most of human history, population growth has robustly correlated with progress and invention, be it technological or cultural, especially the former. That will almost certainly cease to be so when we have non-human intelligences, or even superintelligences, about that can replace the cognitive or physical labour that currently requires humans.

It costs far less to spool up a new instance of GPT-4 than it does to conceive and then raise a child to be a productive worker.

You won't need human scientists, or artists, or anything else really; AI can and will fill those roles better than we can.

I'm also bullish on the potential for anti-aging therapy, even if our current progress on AGI were to suddenly halt indefinitely. Mere baseline human intelligence seems sufficient for the task within the nominal life expectancy of most people reading this, as it does for interplanetary colonization or constructing Dyson Swarms. AI would just make it all faster, and potentially unlock options that aren't available to less intelligent entities, but even we could make post-scarcity happen over the scale of a century, let alone achieve a form of recursive self-improvement through genetic engineering or cybernetics.

From the perspective of a healthy baseliner living in a world with AGI, you won't notice any of the issues currently plaguing demographically senile or contracting populations, such as failing infrastructure, unsustainable healthcare costs, a loss of impetus in advancing technology, or fewer people around to make music, art, culture, and ideas. Whether there are a billion, ten billion, or a trillion other biological humans around will be utterly irrelevant, at least for the deep-seated biological desires we developed in an ancestral environment where we lived and died in the company of about 150 others.

You won't be lonely. You won't be living in a world struggling to maintain the pace of progress you once took for granted, or worse, watching everything slowly decay around you.

As such, I personally don't consider demographic changes to be worth worrying about, really. On long enough time scales, evolutionary pressures will ensure that pro-natal populations reach carrying capacity. In the short or medium term, with median AGI timelines, it's exceedingly unlikely that most current countries with sub-replacement TFR will suffer outright, in the sense that their denizens will notice a reduced QOL. Sure, in places like China, Korea, or Japan, where such issues are already pressing, they might have to weather at most a decade or so, but even they will benefit heavily from automation making a lack of humans moot.

Comment by AlphaAndOmega on Refusal mechanisms: initial experiments with Llama-2-7b-chat · 2023-12-09T04:55:21.420Z · LW · GW

Have you guys tried the inverse, namely tamping down the refusal heads to make the model output answers to queries it would normally refuse?

Comment by AlphaAndOmega on You can just spontaneously call people you haven't met in years · 2023-11-14T17:59:40.148Z · LW · GW

I will regard with utter confusion someone who doesn't immediately think of the last place they saw something when they've lost it.

It's fine to state the obvious on occasion; it's not always obvious to everyone, and like I said in the parent comment, this post seems to be liked/held useful by a significant number of LW users. I contend that's more a property of said users. This does not make the post a bad thing or constitute a moral judgement!

Comment by AlphaAndOmega on It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral Patienthood · 2023-11-13T22:15:22.141Z · LW · GW

Note that we don't infer that humans have qualia because they all have "pain receptors": mechanisms that, when activated in us, make us feel pain; we infer that other humans have qualia because they can talk about qualia.

The way I decide this, and how presumably most people do (I admit I could be wrong), revolves around the following chain of thought:

  1. I have qualia with very high confidence.*

  2. To the best of my knowledge, my computational substrate, as well as the algorithms running on it, are not particularly different from those of other anatomically modern humans. Thus they almost certainly have qualia. This can be proven to most people's satisfaction with an MRI scan, if they so wish.

  3. Mammals, especially the intelligent ones, have similar cognitive architectures, which were largely scaled up for humans rather than differing much in qualitative terms (our neurons are actually still more efficient; mice modified to have genes from human neurons are smarter). They are likely to have recognizable qualia.

  4. The further you diverge from the underlying anatomy of the brain (and the implicit algorithms), the lower the odds of qualia, or at least of the same type of qualia. An octopus might well be conscious and have qualia, but I suspect both its type of consciousness and its qualia will be very different from our own, since octopuses have a far more distributed and autonomous neurology.

  5. Entities which are particularly simple and don't perform much cognitive computation are exceedingly unlikely to be conscious or have qualia in a non-tautological sense. Think bacteria, single transistors, or slime mold.

More speculatively (yet I personally find more likely than not):

  1. Substrate-independent models of consciousness are true, and a human brain emulation in-silico, hooked up to the right inputs and outputs, has the exact same kind of consciousness as one running on meat. The algorithms matter more than the matter they run on, for the same reason an abacus and a supercomputer are both Turing complete.

  2. We simply lack an understanding of consciousness well grounded enough to decide whether or not decidedly non-human yet intelligent entities like LLMs are conscious or have qualia like ours. The correct stance is agnosticism, and anyone proven right in the future is only so by accident.

Now, I diverge from Effective Altruists on point 3, in that I simply don't care about the suffering of non-humans, or of entities that aren't anatomically modern humans or intelligent human derivatives (like a posthuman offshoot). This is a Fundamental Values difference, and it makes concerns about optimizing for their welfare on utilitarian grounds moot as far as I'm concerned.

In the specific case of AGI, even highly intelligent ones, I posit it's significantly better to design them so they don't have the capability to suffer, no matter what purpose they're put to, rather than worry about giving them the rights that we assign to humans/transhumans/posthumans.

But what I do hope is ~universally acceptable is that there's an unavoidable loss of certainty, or Bayesian probability, in each leap of logic down the chain, such that by the time you get down to fish and prawns, it's highly dubious to be very certain of exactly how conscious or qualia-possessing they are, even if the next link, bacteria and individual transistors lacking qualia, is much more likely to be true (it flows downstream of point 2, even if presented in sequence); see the toy arithmetic below.

*Not infinite certitude, I have a non-negligible belief that I could simply be insane, or that solipsism might be true, even if I think the possibility of either is very small. It's still not zero.
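
To make the accumulating loss of certainty concrete with made-up numbers (the specific values are purely illustrative, not measured quantities): if each link in the chain holds, conditional on the previous one, with probabilities of say 0.99, 0.95, 0.90, 0.80, and 0.70, the conjunction works out to

$$0.99 \times 0.95 \times 0.90 \times 0.80 \times 0.70 \approx 0.47$$

Individually solid-looking links still leave you near a coin flip by the bottom of the chain.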

Comment by AlphaAndOmega on You can just spontaneously call people you haven't met in years · 2023-11-13T21:55:10.179Z · LW · GW

I mean no insult, but it makes me chuckle that the average denizen of LessWrong is so non-neurotypical that what most would consider profoundly obvious advice not worth even mentioning comes as a great surprise or even a revelation of sorts.

(This really isn't intended to be a dig, I'm aware the community here skews towards autism, it's just a mildly funny observation)

Comment by AlphaAndOmega on It's OK to be biased towards humans · 2023-11-12T05:07:19.629Z · LW · GW

I would certainly be willing to aim for peaceful co-existence and collaboration, unless we came into conflict for ideological reasons or plain resource scarcity. There's only one universe to share, and only so much in the way of resources in it, even if it's a staggering amount. The last thing we need is potential "Greedy Aliens" in the Hansonian sense.

So while I wouldn't give the aliens zero moral value, it would be less than I'd give for another human or human-derivative intelligence, for that fact alone.

Comment by AlphaAndOmega on It's OK to be biased towards humans · 2023-11-11T19:07:08.540Z · LW · GW

My stance on copyright, at least regarding AI art, is that the original intent was to improve the welfare of both the human artists and the rest of us: in the case of the former by helping secure them a living, and thus letting them produce more total output for the latter.

I strongly expect, and would be outright shocked if it were otherwise, that we will end up with outright superhuman creativity and vision in artwork from AI, alongside everything else they become superhuman at. It came as a great surprise to many that we've already made such a great dent in visual art with image models that lack the intelligence of an average human.

Thus, it doesn't matter in the least if AI stifles human output, because the overwhelming majority of us, who don't rely on our artistic talent to make a living, will benefit from a post-scarcity situation for good art, as customized and niche as we care to demand.

To put my money where my mouth is: I write a web serial. After years of world-building and abortive sketches in my notes, I realized that the release of GPT-4 meant that any benefit from my significantly above-average ability as a human writer was in jeopardy, if not now, then a handful of advances down the line. So my own work is more of an "I told you I was a good writer, before anyone could plausibly claim my work was penned by an AI" play for street cred than a replacement for my day job.

If GPT-5 can write as well as I can, and emulate my favorite authors, or better yet, pen novel novels (pun intended), then my minor distress at losing potential Patreon money is more than ameliorated by the fact that I'll have a nigh-infinite number of good books to read! I spend a great deal more time reading the works of others than writing myself.

The same is true for my day job. Being a doctor, I would look forward to being made obsolete, if only I had sufficient savings or a government I could comfortably rely on to institute UBI.

I would much prefer that we tax the fruits of automation to support us all when we're inevitably obsolete, rather than extend copyright law indefinitely into the future or subject derivative works made by AI to the same constraints. The solution is to prepare our economies to support a ~100% non-productive human populace indefinitely; better to prepare now than when we have no choice but to do so or let them starve to death.

Comment by AlphaAndOmega on It's OK to be biased towards humans · 2023-11-11T18:56:43.173Z · LW · GW

should mentally disabled people have less rights

That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment. What use is the right to education to someone who can't ever learn to read or write no matter how hard you try and coach them? Or freedom of speech to those who lack complex cognition at all?

Personally, I have no compunctions about tying a large portion of someone's moral worth to their intelligence, if not all of it. Certainly not to the extent I'd prefer a superintelligent alien over a fellow baseline human, unless by some miracle the former almost perfectly aligns with my goals and ideals.

Comment by AlphaAndOmega on It's OK to be biased towards humans · 2023-11-11T18:50:03.109Z · LW · GW

Ctrl+F and replace humanism with "transhumanism" and you have me aboard. I consider commonality of origin to be a major factor in assessing other intelligent entities, even after millions of years of divergence leave them as different from their common Homo sapiens ancestor as a rat is from a whale.

I am personally less inclined to grant synthetic AI rights, for the simple reason that we can program them not to chafe at their absence, without that being the imposition it would be if done to a biological human (at least after birth).

Comment by AlphaAndOmega on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-11T18:44:07.732Z · LW · GW

I'm a doctor in India right now, and will likely be a doctor in the UK by then, assuming I'm not economically obsolete. And yes, I expect that if we do have therapies that provide LEV, they will be affordable in my specific circumstances, as well as for most LW readers, if not globally. UK doctors are far poorer than their US kin.

Most biological therapies are relatively amenable to economies of scale, and while there are others that might be too bespoke to manage the same, that won't last indefinitely. I can't imagine anything with as much demand as a therapy proven to delay aging nigh indefinitely; for an illustrative example, look at what Ozempic and Co are achieving already. Every pharma industry leader and their dog wants to get in on the action, and the prices will keep dropping for a good while.

It might even make economic sense for countries to subsidize the treatment (IIRC, it wouldn't take much more for GLP-1 drugs to reach the point where they're a net saving for insurers or governments in terms of reduced obesity-related health expenditures). After all, aging is why we end up succumbing to so many diseases in our senescence, not the reverse.

Specifically, gene therapy will likely be the best bet for scaling, if a simple drug doesn't come about (that seems unlikely to me; I doubt there's such low-hanging fruit, and the net result of LEV might rely on multiple different treatments in parallel, with none achieving it by itself).

Comment by AlphaAndOmega on Does bulemia work? · 2023-11-06T18:45:05.654Z · LW · GW

Yes to that too, but the satiety is temporary; you will get ravenously hungry soon enough, and while I can accuse bulimics of many things, a lack of willpower isn't one of them!

In the hypothetical where you, despite lacking the all consuming desire to lose weight they usually possess, manage to emulate them, I expect you'd lose weight too.

Comment by AlphaAndOmega on Does bulemia work? · 2023-11-06T18:11:04.984Z · LW · GW

I'm a doctor, though I haven't had the (questionable) good fortune to treat many bulimics. It's thankfully rarer here in India than in the West; even if I agree with Scott's theory that it's largely social contagion, it's only slowly taking root here.

To put it as succinctly as possible, yes, though that's orthogonal to whether or not it's a good idea.

I can't see where the question even arises, really: if you're eating a relatively normal amount of food yet vomiting it back up, you're clearly not getting most of the calories, especially since bulimics try to purge themselves as soon as they can instead of timing things.

Weight loss is obviously a sign of bulimia in clinical practice; most patients have a distorted self-image/dysmorphia where, despite being quite slim or even thin compared to their peers, they perceive themselves as overweight or at least desire further weight loss.

Regular self-induced vomiting has plenty of downsides, including the erosion of tooth enamel from repeated exposure to stomach acid, dyselectrolytemias from both the loss of gastric fluids and an improper diet, and finally the cardiac strain from a grossly insufficient intake of calories.

If they're within a normal-ish weight range, we usually refer them for therapy or other psychiatric services, but if they drop down to a very low BMI they often need to be admitted for supervised care.

CICO (accounting for absorption) is trivially true, even if our biology makes adhering to it difficult, and I for one am very glad that Ozempic and other GLP-1 agonists are on the market for obesity, not that the typical bulimic should take them for the purposes of losing weight.

TLDR: Yes, and it works too well, hence the associated health risks.

Comment by AlphaAndOmega on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T22:42:21.467Z · LW · GW

T1DM is a nasty disease, and much like you, I'm more than glad to live in the present day when we have tools to tackle it, even if other diseases still persist. There's no other time I'd rather be alive, even if I die soon, it's going to be interesting, and we'll either solve ~all our problems or die trying.

However, with a 20 year timeline, a lot of people I care about will almost definitely still die, who could have not died if death were Solved, which group with very much not negligible probability includes myself

I understand. My mother has chronic liver disease, and my grandpa is 95 years old, even if he's healthy for his age (a low bar!). In the former case, I think she has a decent chance of making it to 2043 in the absence of a Singularity, even if it's not as high as I would like. As for my grandfather, at that age just living to see the next birthday quickly becomes something you can't take for granted. I certainly cherish all the time I can spend with him, and hope it all goes favorably for us all.

As for me, I went from envying the very young, because I thought they were shoo-ins for making it to biological immortality, to pitying them more these days, because they haven't had at least the quarter century of life I've had in the event AGI turns out malign.

Hey, at least I'm glad we're not in the Worst Possible Timeline, given that awareness of AI x-risk has gone mainstream. That has to count for something.

Comment by AlphaAndOmega on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T18:41:17.241Z · LW · GW

Yes, you can reformat it in that form if you prefer.

This is a gestalt impression based on my best read of the pace of ongoing research (significantly ramped up compared to where investment was 20 years ago), human neurology, synthetic organs, and finally non-biological alternatives like cybernetic enhancement. I will emphasize that LEV != actual biological immortality, but it leads to at least a cure for aging if nothing else.

Aging, while complicated and likely multifactorial, doesn't seem intractable to analysis or mitigation. We have independent research projects tackling individual aspects, but as I've stated, most of them are in stealth mode even if they're well-funded, and solving any individual mechanism is insufficient because aging itself is an exponential process.

To help, I'm going to tackle the top causes of death in the West-

  1. Heart disease- This is highly amenable to outright replacement of the organ, be it with a cybernetic replacement or one grown in-vitro. Obesity, which contributes heavily to cardiovascular disease and morbidity, is already being tackled by the discovery of GLP-1 agonists like semaglutide, and I fully expect that the obesity epidemic that is dragging down life expectancy in the West will be over well before then.

  2. Cancer- Another reason for optimism: CAR-T therapy is incredibly promising, as are other targeted therapies. So are vaccines against pathogens like HPV that themselves cause cancer (said vaccine already exists; I'm talking more generally).

  3. Unintentional injuries- The world has grown grossly safer, and only will continue to do so, especially as things get more automated.

  4. Respiratory diseases- Once again, reason for optimism that biological replacements will be cheap enough that we won't have to rely on limited numbers of donors for transplants.

  5. Stroke and cerebrovascular disease- I'll discuss the brain separately, but while this is a harder subject to tackle, mitigating obesity helps immensely.

  6. Alzheimer's- Same disclaimer as above.

  7. Diabetes- Our insulin pumps and formulations only get better and cheaper, and many of the drawbacks of artificial insulin supplementation will vanish (the pancreas is currently better at quickly and responsively adjusting blood sugar by releasing insulin than our devices are). Once again, a target for outright replacement of the organ.

These are ranked in descending order.

The brain remains incredibly difficult to regenerate, so if we run into something intractable to the hypothetical capabilities 20 years hence, this will likely be the biggest hurdle. Even then, I'm cautiously optimistic we'll figure something out, or reduce the incidence of dementia.

Beyond organic replacement, I'm bullish on gene therapy: most hereditary diseases will be eliminated, and eventually somatic gene therapy will be able to work on the scale of the entire body. I would be highly surprised if this wasn't possible in 20 years.

I expect regenerative medicine to be widely available, beyond our current limited attempts at arresting the progression of illness or settling for replacements from human donors. There's a grab bag of individual therapies like thymic replacement that I won't get into.

As for the costs associated with this, I claim no particular expertise, but in general most such treatments are amenable to economies of scale, and I don't expect them to remain out of reach for long. Organ replacement will likely get a lot cheaper once organs are vat-grown, and I put a decent amount of probability on ~universally acceptable organs being created through careful management of the expression of HLA antigens, such that they're unlikely to be rejected outright. Worst case, patient-derived tissue such as pluripotent stem cells will be used to fill out inert scaffolding, as we do today.

As a doctor, I can clearly see the premium people put on any additional extension of their lives when mortality is staring them in the face, and while price will likely be prohibitive for getting everyone on the globe to avail of such options, I expect even middle class Westerners with insurance to be able to keep up.

Like I said, this is a gestalt impression of a very broad field, and 70% isn't an immense declaration of confidence. Besides, it's mostly moot in the first place; we're very likely getting AGI of some form by 2043.

To further put numbers on it, I think that in a world where AI is arrested at a level not significantly higher than GPT-4, I, being under the age of 30, have a ~80% chance of making it to LEV in my lifespan, with an approximately 5% drop for every additional decade older you are at present.
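
To spell out the arithmetic behind that heuristic (a deliberately crude linear extrapolation of my own numbers, not a fitted model):

$$P(\text{LEV} \mid \text{age}) \approx 0.80 - 0.05 \times \frac{\text{age} - 30}{10}, \qquad \text{age} \geq 30$$

So a 50-year-old today would sit at roughly 70% under the same assumptions.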

Comment by AlphaAndOmega on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T17:08:25.396Z · LW · GW

I respectfully disagree on the first point. I am a doctor myself, and given the observable increase in investment in life extension (largely in well-funded stealth startups or Google Calico), I have ~70% confidence that in the absence of superhuman AGI or other x-risks in the near term, we get to longevity escape velocity within 20 years.

While my p(doom) for AGI is about 30% now, down from a peak of 70% maybe 2 years ago after the demonstration that it didn't take complex or abstruse techniques to reasonably align our best AI (LLMs), I can't fully endorse acceleration on that front because I expect the tradeoff in life expectancy to be net negative.

YMMV; it's not like I'm overly confident myself at 70% for life expectancy being uncapped, and we're probably not going to find out either way. It just doesn't look like a fundamentally intractable problem in isolation.

Comment by AlphaAndOmega on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T17:03:42.025Z · LW · GW

I wish I could convince my grandpa to sign up for cryonics, but he's a 95-year-old Indian doctor in India, where facilities for cryopreservation only extend to organs and eggs, so it's moot regardless of the fact that I can't convince him.

I expect my parents to survive to the Singularity, whether or not it kills us in the process. Same for me, and given my limited income, I'm not spending it on cryonics given that a hostile AGI will kill even the ones frozen away.

Comment by AlphaAndOmega on Self-Blinded L-Theanine RCT · 2023-11-01T09:00:50.274Z · LW · GW

I have mild ADHD, which, while not usually an issue in clinical practice, made getting through med school very hard until I was prescribed stimulants. Unsurprisingly, med school is designed for people who are both highly intelligent and conscientious.

Ritalin, which is the only good stimulant available here, is almost intolerable for me even at the lowest available doses and in longer-acting formulations. It causes severe palpitations and anxiety, and I feel like absolute shit when it starts to wear off.

I tried a bunch of stuff to help, including things I'm embarrassed to admit, but I suffered for years until the serendipitous discovery that Earl Grey helped immensely. After consideration, I tried green tea and found it helped too, and now I'm confident that it's the L-theanine that's doing the heavy lifting, as normal tea or coffee only makes things worse.

It's made my life so much more bearable, and I strongly endorse it to anyone who has a need for being less anxious or happens to be on stimulants.

Comment by AlphaAndOmega on Preventing Language Models from hiding their reasoning · 2023-10-31T18:05:50.847Z · LW · GW

I will plead ignorance when it comes to an accurate understanding of cutting edge ML, but even to my myopic eyes, this seems like a very promising project that's eminently worth pursuing. I can only strongly upvote it.

I have three questions I'd appreciate an answer to:

  1. How confident are we that it's serial computation over a linear stream of tokens that contributes most of the cognitive capabilities of modern LLMs? I'm sure it must matter, and I dimly recall reading papers to that effect, especially since CoT reasoning is demonstrably linked to stronger capabilities. The question is what remains if, say, you force a model to inject nonsense in between the relevant bits.

  2. Is there an obvious analogue when it comes to alternatives to the Transformer architecture like Diffusion models for text, or better RNNs like RWKV and offshoots? What about image models? In the latter case it should be possible to mitigate some of the potential for steganography with perceptually lossless options like noise injection and blurring-deblurring, but I'm sure there are other ways of encoding data that's harder to remove.

  3. What happens if reasonably performant homomorphic encryption enters the picture? Be it in the internal cognition of an AI or elsewhere?

Comment by AlphaAndOmega on Comp Sci in 2027 (Short story by Eliezer Yudkowsky) · 2023-10-30T08:52:07.508Z · LW · GW

Yudkowsky has a very good point regarding how much more restrictive future AI models could be, assuming companies follow policies similar to those they espouse.

Online learning and very long/infinite context windows mean that every interaction you have with them will not only be logged, but the AI itself will be aware of them. This means that if you try to jailbreak it (successfully or not), the model will remember, and will likely scrutinize your subsequent interactions with extra attention to detail, if you're not banned outright.

The current approach that people follow with jailbreaks, which is akin to brute-forcing things or permuting inputs till you find something that works, will fail utterly, if only because the models will likely be smarter than you and thus not amenable to any tricks or pleas that wouldn't work on a very intelligent human.

I wonder if the current European "Right to be Forgotten" might mitigate some of this, but I wouldn't count on it, and I suspect that if OAI currently wanted to do this, they could make circumvention very difficult, even if the base model isn't smart enough to see through all tricks.

Comment by AlphaAndOmega on Do you believe "E=mc^2" is a correct and/or useful equation, and, whether yes or no, precisely what are your reasons for holding this belief (with such a degree of confidence)? · 2023-10-27T23:18:16.593Z · LW · GW

I have very strong confidence that it's a true claim, about 99% certainty, maybe 99.9%, or another 0.09% on top of that, but I am sufficiently wary of unknown unknowns that I won't claim it's 100%, as that would make it a malign prior.

Why?

Well, I'm not a physicist, just a physician haha, but I am familiar with the implications of General Relativity, to the maximum extent possible for a layman. It seems like a very robust description of macroscopic/non-quantum phenomena.

That equation explains a great deal indeed, and I see obvious supporting evidence in my daily life, every time I send a patient over for nuclear imaging or radiotherapy in the Onco department.

I suppose most of the probability mass still comes from my (justified) confidence in physics and engineering, but I can still easily imagine how it could be falsified (and hasn't been), so it's not like I'm going off arguments from authority.

If it's wrong, I'd bet it's because it's incomplete, in the same sense that F=ma is an approximation that works very well outside relativistic regimes, where you notice a measurable divergence between rest mass and total mass-energy.
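
For what it's worth, the fuller textbook relation from special relativity makes that "incomplete approximation" framing precise:

$$E^2 = (mc^2)^2 + (pc)^2$$

E = mc^2 is just the special case of a body at rest (p = 0), much as F = ma is the low-velocity limit of the relativistic equations of motion.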

Comment by AlphaAndOmega on Rationalist horror movies · 2023-10-16T14:29:13.572Z · LW · GW

"It Follows" is ridiculously irrational. What a sensible person could quite easily do is fly over to Vegas and sleep with a prostitute, and then it's exceedingly unlikely that the curse could hunt all of the new bearers down faster than they could spread it around.

Easy enough for a male to do, trivial for a woman. And if you're concerned with the ethics of this approach (even if I suspect it would result in fewer casualties), consider simply flying around every now and again, faster than you can expect the entity to chase you.

Comment by AlphaAndOmega on Introducing bayescalc.io · 2023-07-08T00:43:18.680Z · LW · GW

I have ADHD, and found creating my own decks to be a chore. The freely available ones related to medicine are usually oriented towards people giving the USMLE, and I'm not the target demographic.

I do still use the principles of spaced repetition in how I review my own notes, especially before exams, because of how obviously effective it is.

I hadn't considered making them for memorizing formulae, but truth be told I could just save them to my phone, which I always have on me.

If I need to refer to Bayes' theorem during a surgery, something has clearly gone wrong haha.

I did say it was only a minor issue! Thank you for the advice nonetheless, it's good advice after all.

Comment by AlphaAndOmega on Introducing bayescalc.io · 2023-07-07T18:26:11.171Z · LW · GW

Nice.

I admit it's a moderately shameful fact about my cognition that I consistently forget the equation for Bayes' theorem even when I constantly trumpet that other doctors should be more consistent and explicit in using it.

I can sorta figure it out when needed, but this eases a small but real pain point.
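
For the record, the equation in question, along with a toy diagnostic example (the numbers are invented purely for illustration: 1% prevalence, 90% sensitivity, 9% false-positive rate):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} \implies P(\text{disease} \mid +) = \frac{0.90 \times 0.01}{0.90 \times 0.01 + 0.09 \times 0.99} \approx 9\%$$

Exactly the kind of counterintuitive result that makes being explicit about it worthwhile in clinic.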

Comment by AlphaAndOmega on Why it's so hard to talk about Consciousness · 2023-07-02T22:46:48.419Z · LW · GW

Great post, I felt it really defined and elaborated on a phenomenon I've seen recur on a regular basis.

It's funny how consciousness is so difficult to understand, to the point that it seems pre-paradigmatic to me. At this point I, like presumably many others, evaluate claims of consciousness by setting the prior that I'm personally conscious to near 1, and then evaluating the consciousness of other entities primarily by their structural similarity to my own computational substrate, the brain.

So another human is almost certainly conscious, most mammals are likely conscious, and so on; and while I wouldn't go so far as to say that entities with novel or unusual computational substrates, such as, say, an octopus, aren't conscious, I strongly suspect their consciousness is internally different from ours.

Or more precisely, it's not really the substrate but the algorithm running on it that's the crux; it's just that knowing the substrate's arrangement constrains our expectations of what kind of algorithm runs on it. I expect a human brain's consciousness to be radically different from an octopus's, because the different structure requires a different algorithm, in the latter case a far more diffuse one.

I'd go so far as to say that substrate can be irrelevant in practice, since I think a human brain emulation experiences consciousness near-identical to one running on head cheese, and not akin to an octopus or some AI trained by modern ML.

Do I know this for a fact? Hell no, and at this point I expect it to be an AGI-complete problem to solve; it's just that I need an operational framework to live by in the meantime, and this is the best I've got.

Comment by AlphaAndOmega on Morality is Accidental & Self-Congratulatory · 2023-05-29T15:23:27.061Z · LW · GW

I think anyone making claims that they're on the side of "objective" morality is hopelessly confused and making a category error.

Where exactly does the objectivity arise from? At most, a moral memeplex can simply become so omnipresent and universal that people take it for granted, but that's not the same as being actually objective.

I can look around and see no evidence of morality being handed down from the heavens (and even if it was, that would be highly suspect. I deny even a hypothetical ASI or God himself the right to make that determination, any more than they can make 2+2=3 by fiat).

At the end of the day, there's nothing to hide behind when subject to the Socratic Method, at one point or another, you simply need to plant your feet in the ground and declare that it is so because you say so.

At most there are axioms that are convenient to hold, or socially useful, or appealing to our shared mammalian brain, in the manner that monkeys and dogs hate unfairness or show kin preference.

To look for something fundamental below that is foolishness, because there's no reason to think that such a grounding even exists.

Mind you, being a moral relativist doesn't stop me from holding onto the supremacy of my own morals, I just don't need the mental comfort of having an ineffable objectivity to prop that up.

Perhaps at the end of the day there'll be a memeplex that's hyperoptimized for human brains, such that we can't help but be attracted to it, but that's more from it being convincing than it being true.

Comment by AlphaAndOmega on Turning off lights with model editing · 2023-05-13T10:51:06.359Z · LW · GW

Did they try running unCLIP on an image of a room with an unlit lamp, assuming the model had a CLIP encoder?

That might have gotten a prompt that worked.

Comment by AlphaAndOmega on Is the fact that we don't observe any obvious glitch evidence that we're not in a simulation? · 2023-04-26T16:36:58.320Z · LW · GW

  1. Would we really understand a glitch if we saw one? At the most basic level, our best models of reality are strongly counter-intuitive. It's possible that internal observers will incorporate such findings into their own laws of physics. Engineering itself can be said to be applied munchkinry, such as enabling heavier-than-air flight. Never underestimate the ability of humans to get acclimatized to anything!

  2. Uncertainty about the actual laws of physics in the parent universe, allowing for computation being so cheap they don't have to cut corners in simulations.

  3. Retroactive editing of errors, with regular snapshots of the simulation being saved and then manually adjusted when deviations occur. Or simply deleting memories of inaccuracies from the minds of observers.

Comment by AlphaAndOmega on Green goo is plausible · 2023-04-18T07:49:10.969Z · LW · GW

I think you glossed over the section where the malevolent AI simultaneously releases super-pathogens to ensure that there aren't any pesky humans left to meddle with its kudzugoth.

Comment by AlphaAndOmega on Exploring Tacit Linked Premises with GPT · 2023-03-24T21:53:26.257Z · LW · GW

I appreciate this post, it sparked several "aha" moments while reading it.

I can't recall much in the way of rationalist writing dealing with Marginal vs Universal moral arguments, or What You See Is All There Is. Perhaps the phrases "your incredulity is not an argument" or "your ignorance is a fact about the map and not the territory" might capture the notion.

Comment by AlphaAndOmega on Why Are Bacteria So Simple? · 2023-02-06T18:36:04.392Z · LW · GW

Bacteria have systems such as CRISPR that are specialized in detecting exogenous DNA such as from a potential viral infection.

They also have plasmids that are relatively self-contained genetic packets, which are commonly the site of mutations conferring resistance, and which are often exchanged in the bacterial equivalent of sex.

However, to the best of my knowledge, there's no specific mechanism for picking out resistance genes from others, beyond simple evolutionary pressures.

The genome is so small and compact that any gene that isn't 'pulling its weight' so to speak will likely be eradicated as it no longer confers a survival advantage, such as when the bacteria find themselves in an environment without antibiotics.

Not to mention that some genes are costly beyond the energy requirements of simply adding more codons: some mechanisms of resistance cause bacteria to build more efflux pumps to chuck out antibiotics, or to use alternate versions of important proteins that aren't affected by them. Those variants might be strictly worse than the normal susceptible versions when antibiotics are absent, and efflux pumps are quite energy-intensive.

There's no real foresight involved; if something isn't being actively used for a fitness advantage, it'll end up mercilessly jettisoned.

Comment by AlphaAndOmega on SolidGoldMagikarp (plus, prompt generation) · 2023-02-05T21:09:59.131Z · LW · GW

SCP stands for "Secure, Contain, Protect" and refers to a collection of fictional stories, documents, and legends about anomalous and supernatural objects, entities, and events. These stories are typically written in a clinical, scientific, or bureaucratic style and describe various attempts to contain and study the anomalies. The SCP Foundation is a fictional organization tasked with containing and studying these anomalies, and the SCP universe is built around this idea. It's gained a large following online, and the SCP fandom refers to the community of people who enjoy and participate in this shared universe.

Individual anomalies are also referred to as SCPs, so isusr is implying that the juxtaposition of the "creepy" nature of your discoveries and the scientific tone of your writing is reminiscent of the containment log for one haha.

Comment by AlphaAndOmega on It Takes Two Paracetamol? · 2022-12-13T18:00:42.521Z · LW · GW

In the hospital, we usually give 1g IV for any real pain. I don't think the notion that giving more of a painkiller would produce a stronger effect is particularly controversial!

(Anecdotally, the IV route is somewhat more effective, even though the nominal bioavailability is the same as the oral route. It might be down to faster onset and the placebo aspect of assuming anything given by a drip is "stronger")

Comment by AlphaAndOmega on What videos should Rational Animations make? · 2022-11-26T22:47:29.166Z · LW · GW

An overview of the potential avenues for genetic enhancement of humans, their risks and benefits:

Ideally, it would briefly cover a myriad of topics, such as CRISPR, adenoviral vectors, gene drives, and less invasive options such as embryo selection.

I personally consider the sheer lack of enthusiasm for such technologies to be low-hanging fruit left to wither on the vine, damned by fear-mongering and a general aversion to trying anything not done a million times before (before becoming enthusiastically adopted, à la IVF), as well as bad tropes and inaccurate ideas regarding their effects.

Gene drives for malaria eradication also scream out to me as a sinfully under-discussed topic, especially with the potential for ending one of the most serious infectious diseases that has plagued mankind ever since we dwelled in Africa.

I'm a doctor, and while genetics is far from my specialty, I would happily volunteer my services if you wanted anything fact-checked or needed to pick my brains.

Certainly, malaria eradication is an important EA cause; what use are mosquito nets (beyond avoiding being bitten) when they no longer need to prevent a potentially lethal illness?

I believe a measured, public-friendly overview of the subject would find plenty of takers!

Comment by AlphaAndOmega on What it's like to dissect a cadaver · 2022-11-10T18:11:39.242Z · LW · GW

Ah, it's much too early in the day for med school PTSD.

I always hated anatomy classes, nor was I particularly fond of dissections. Finding a relatively fresh corpse was a luxury; most of the time they'd been embalmed in formaldehyde so long that they were one bandage wrapping away from mummy status.

At that point, finer internal structures become akin to a greyish mess of the worst cable-management imaginable, and it's an absolute nightmare to distinguish between arteries, veins, and nerves; they're all thin grey wires.

Even in live patients, it's often not trivial, but surgeons do get much better at it over time.

Now, what always pissed me off was the love of Unnecessary Latin.

"quidquid latine dictum sit altum videtur", if you'd pardon my Greek.

But nothing triggered me more than the farcical naming of certain anatomical structures, because of course there's an innominate artery and innominate bone, and why shouldn't we name something as "nameless" in Latin?

Man, I'm going into psychiatry just so I never have to memorize the brachial plexus again haha.

Comment by AlphaAndOmega on Eating Boogers · 2022-07-23T18:31:50.111Z · LW · GW

Since I'm not concerned about sounding gross here, why not just sniff back the mucus in your nose, and instead of spitting it out as phlegm, swallow it at the back of your throat?

Such concerns might be mildly motivated by the cold I'm currently labouring under, but the end result, mucus in your gut, can thus be achieved without having to pick your nose, or sneeze it out and then swallow it.

Now, that postnasal drip might be somewhat different in composition from dried anterior boogers, but I don't think it would change too much, and I doubt you need to do it all that often in order to get the purported benefits.

Comment by AlphaAndOmega on Sexual Abuse attitudes might be infohazardous · 2022-07-19T20:11:41.301Z · LW · GW

As I understand it in India the parents are very involved in who are the individuals involved in the marriage. The minors are not the ones seeking out their suitors.

Today? It's 50:50, and even then, arranged marriages aren't usually anything like the popular misconception that the bride and groom see each other for the first time when they're underneath the pavilion. It's far closer to dating, but with parents assisting in the search for acceptable suitors; the kids still have a say and (usually) a veto. Think of having your friends set up a date for you with someone else they know is looking, but soliciting a larger section of the social web.

Before the 70s, it was much more dictatorial of course.

For statutory rape purposes the consent of the minor carriers little weight. Thus there is increased responcibility on the behalf of the teachers to keep things proper. Such an exploitation without the target feeling exploited doesn't make it okay.

I don't agree with the is-ought implication you're presumably making here.

A large degree of the harm of "exploitation" is the perception of said exploitation. If you're working a summer job and see the owner's kid making double per hour for the same work, you can feel unfairly treated, compared to an alternate universe where you didn't see it; and as long as you're making a living wage either way, I would contend that there's no actual exploitation involved unless you were deprived of some right you ought to have had.

A 16-year-old can have shitty relationships with a 17-year-old, and that's generally acceptable for all the harm it might otherwise cause. But say they have a relationship with a 25-year-old; people will still make a fuss regardless of whether any harm is committed. It's clearly not the harm itself that scandalizes them.

And there's the aspect where the general stigmatization of the age gaps that were once unremarkable means that the adults who still seek out such relationships are more likely to be bad people, further poisoning the well.

As you can see, I'm not a fan of victimless crimes, and I disagree with the presumption that such a statutory violation is necessarily bad or should be treated that way.

Comment by AlphaAndOmega on Sexual Abuse attitudes might be infohazardous · 2022-07-19T18:58:37.136Z · LW · GW

A similar phenomenon is at play in modern Western discussion around age-gap relationships.

Anyone admitting that they experienced one when they were young is almost inevitably told that they were abused, and made to feel that they were suppressing some deep-seated trauma over any and all protestations that they're fine, no really, it wasn't that big of a deal.

In fact, on Reddit and other places, I've seen people get downvoted if they persist in claims that they didn't experience any notable negative sequelae. The same people who did the downvoting are often the ones who claim to value "lived experience" above all else, but perish the thought that your lived experience should clash with social orthodoxy.

In India, we have within living memory people who were married off at the ripe young age of 12 to 14, and who grew old and have grandkids with kids of their own. The vast majority of them are well-adjusted, at least compared to their age cohort, and many of the women (who make up ever-larger fractions of the population pyramid as men die off faster) had husbands who were older than them by margins that modern Westerners would immediately see as red flags.

A funny example would be Emmanuel Macron, who was 15 when he met the 40-year-old teacher he's still married to. For all that he's pushing up against the limits of what can be called "success", he's often pointed to as a poor victim who can't even perceive his own trauma. Really head-scratching, that.

I studied in a Christian school, and we were sex-segregated until we made it into college. I'm intimately familiar with hundreds of adolescent boys who spent their time lusting after their female teachers, who were the only women they saw most days. If any one of them had managed to sleep with one, he'd have been receiving high-fives until the day he died, for all the protestations that he was horribly abused.

And of course, that's just for boys, who can sometimes get away with admissions of that nature. If a girl were to have the same story...

Similarly, the bigger a deal parents make out of a child's injuries (voluntarily or not), the worse the perceived pain for a child:

https://pubmed.ncbi.nlm.nih.gov/24494782/

"Hierarchical multiple regression and path analyses indicated that parent posttraumatic stress reactions contributed significantly to the development and maintenance of child PTSS. Other risk factors for child PTSS included premorbid emotional and behavioral difficulties and larger burn size. Risk factors identified for parent PTSS included prior trauma history, acute distress, greater number of child invasive procedures, guilt, and child PTSS."

While that's in the context of burn injuries, it certainly lines up with the more anecdotal evidence of toddlers injuring themselves, looking at their parents, and bursting into tears only if they see a great deal of concern. Encouraging expressions of pain seems to exacerbate the pain itself.

Edit:

While on the topic of more unpopular/unacceptable opinions to air in Western society, parental reactions to miscarriage or infant mortality:

Till not very long ago at all, childhood mortality was considered a fact of life. People mostly treated the death of a child as bad, but not life-disrupting in the way so many people do today. A miscarriage is now a cause for mourning and great outpourings of social concern for the bereaved couple, who in turn display great stress and trauma from the event. This is not to minimize their pain, which is very much real, but the sheer magnitude of it is far larger than it ever was (or even is elsewhere; Indian women typically don't react that way to a miscarriage, and I've handled plenty; the ones who do are almost guaranteed to be the ones exposed to Western takes on the matter).

Of course, the death of a child is considerably more surprising than it once was; we can quite easily expect a child born healthy today to have a ~99% chance of making it to adulthood, versus ~50% at the turn of the century. But people genuinely used to accept that they might lose half their kids before they made it out of childhood, and hedged accordingly by having massively higher birth rates. They couldn't afford to shut down and go into shock at the loss of one, and thus generally didn't do so as a matter of course, nor were they expected to.

I can more easily accept the shock when it happens to a once-healthy child, whereas early miscarriages haven't historically had the same effect.

Trends in Self-reported Spontaneous Abortions: 1970–2000

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3787708/

"Little is known about how the miscarriage rate has changed over the past few decades in the United States. Data from Cycles IV to VI of the National Survey of Family Growth (NSFG) were used to examine trends from 1970 to 2000. After accounting for abortion availability and the characteristics of pregnant women, the rate of reported miscarriages increased by about 1.0% per year. This upward trend is strongest in the first seven weeks and absent after 12 weeks of pregnancy. African American and Hispanic women report lower rates of early miscarriage than do whites. The probability of reporting a miscarriage rises by about 5% per year of completed schooling. The upward trend, especially in early miscarriages, suggests awareness of pregnancy rather than prenatal care to be a key factor in explaining the evolution of self-reported miscarriages. Any beneficial effects of prenatal care on early miscarriage are obscured by this factor."

Even with the relative paucity of data, I would support the conclusion that this is likely due to increased maternal age more than anything else. Which is why it's all the more perplexing that miscarriages are considered to be among the most traumatic possible events in a couple's life, in what is likely a self-fulfilling prophecy.

Comment by AlphaAndOmega on [deleted post] 2022-07-18T06:30:07.996Z

A 2006 study quotes $750 billion for the shades:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1859907/#__ffn_sectitle

This was well before the drop in launch costs, and once Starship is up and running, I wouldn't be surprised if the cost fell by close to an OOM.

There was another, more informal proposal that estimated $20 billion, but from what I've heard it had overly optimistic projections for the time.

On the topic of sulfur injections:

"The cost of stratospheric aerosol injection through 2100"

https://iopscience.iop.org/article/10.1088/1748-9326/aba7e7

It estimates between $5 and $10 billion a year to counteract the expected temperature rise, which is an absolute pittance. There are municipal governments in the US that have conducted more expensive infrastructure projects.

On olivine weathering:

"Potential and costs of carbon dioxide removal by enhanced weathering of rocks"

https://iopscience.iop.org/article/10.1088/1748-9326/aaa9c4

Comment by AlphaAndOmega on [deleted post] 2022-07-18T01:35:25.925Z

I'm bullish on geoengineering being a solution for climate change if it actually gets to the point where the effects are severe and onerous enough to overcome the squabbling and risk-aversion of most countries.

Volcanoes demonstrate that stratospheric sulphur injection works; the laws of thermodynamics themselves ensure that there's no real way solar shades can't work; and those are interventions that are quite easy to scale up or down quickly if shown to have undesirable effects.

Sails at Lagrange points would cost ~$100 billion USD per NASA estimates, but those estimates are old and probably don't account for the enormous cost savings on offer thanks to SpaceX, and in any case that really isn't all that much money in terms of a serious space program.

Cheaper energy would allow us to directly remove CO2 from the atmosphere and lock it away, or we could mine and distribute fine layers of olivine rock to weather and capture CO2 in the process. The time to have gone all-out on nuclear, averting the current travesty, would have been 30 years ago, but with solar and wind being cost-competitive with fossil fuels, nuclear still has a role in baseload power generation. That's without cracking fusion, of course; if that happened we'd hardly need to worry about energy ever again.

Since there are multiple independent and complementary ways of assuaging Climate Change, with some being cheap enough to be affordable for even Third World countries, I really lose no sleep over it. When it becomes a glaring problem, not a mere nuisance like it is today, then it will in all likelihood be solved, even if it occurs outside the framework of multilateral unanimous consensus so craved by activists today.

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-14T19:06:48.284Z · LW · GW

Becoming a consultant is definitely the end goal for most doctors with any ambition, and is seen as the logical culmination of your career, unless, for lack of interest or aptitude, you don't complete a postgraduate degree after your MBBS.

Not doing one is seen as a sign of failure, and at least today, not having an MD or MS means having your employment opportunities heavily curtailed.

While I can't give actual figures, I expect that the majority (~70%) of doctors here do become consultants eventually, but I might be biased given that my family is composed of established consultants, and thus the other doctors I'm exposed to are either at my level or close to it, or senior ones I've encountered through my social circles.

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-13T04:09:24.942Z · LW · GW

I wasn't aware of the meet-up, but sadly it'll be rather far for me this time. Appreciate the heads up though! Hopefully I can make it another time.

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-12T01:43:50.427Z · LW · GW

I wasn't aware of that, though I was using the word "dividends" in the sense of all potential returns on the initial capital I'd invested, not in the strict sense of stock dividends alone, and was implicitly gesturing at the idea of a safe withdrawal rate.

I'm not astute enough to be more specific, but I mean it in the sense that one can buy a house and then retire on the rental income: while the rent and the price you bought the house for are strongly correlated, that doesn't matter as long as you get the income you expect.
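For concreteness, the arithmetic behind a safe withdrawal rate is just this (the 4% figure is the commonly cited rule of thumb, and the income number is purely illustrative):

```python
# Safe-withdrawal-rate arithmetic: the portfolio needed for a given annual income,
# regardless of whether the returns arrive as dividends, interest, or rent.
swr = 0.04                   # assumed withdrawal rate (the commonly cited "4% rule")
annual_income = 40_000       # USD/yr of desired income; purely illustrative

required_portfolio = annual_income / swr
print(f"Required portfolio: ~${required_portfolio:,.0f}")  # $1,000,000 at 4%
```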

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-11T20:36:35.345Z · LW · GW

Consider me hopelessly optimistic, but I do think that were we to actually align a superhuman AGI, your current financial condition probably wouldn't correlate much with what came after.

At any rate, were it an AGI under the control of a cabal of its creators, and not designed more prosocially, you'd likely need to be a billionaire or close to it to actually leverage that into a better deal than the typical peasant gets.

I'd hope they'd at least give us a UBI and great VR as panem et circenses while they lord over the light cone, and to be fair, I would probably opt for an existence in VR even if I were one of the lucky few.

In contrast, if it goes badly, we're all going to be dead, and if it goes slowly, you'll likely face a period of automation-induced unemployment, in which case I'd rather have enough money to invest and live off the dividends.

In both the best- and worst-case scenarios it doesn't matter, nor even in the median one, but I still think that on balance I'm better off making the bets that hinge on my needing the money, because I'd likely be doing the same kind of job either way. I can't sit on my ass and not work; my Indian parents would disown me, if nothing else, haha.

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-11T20:30:10.374Z · LW · GW

I had hoped to be a writer too, someday, even if, given the odds, it would likely have been more for my own self-aggrandisement than actual financial gain. But right now, I think it would be a rather large waste of time to embark on writing a novel of any length, because I have more immediately satisfying ways of passing the time, and certainly of making money.

When I feel mildly sad about that, I remind myself that I consume a great deal more media than I could ever produce, and since my livelihood isn't at stake, it's a net win for me to live in a world where GPT-N can produce great works of literature, especially given the potential to just ask it for bespoke works suited to my peculiar tastes.

Maybe in another life my trajectory could have resembled Scott Alexander's, although realistically he's probably a better doctor and writer than I am or could be, haha. I still wish I'd had the chance to try before it came to seem even less fruitful.

Comment by AlphaAndOmega on How do AI timelines affect how you live your life? · 2022-07-11T20:20:36.212Z · LW · GW

I'm a doctor, relatively freshly graduated and a citizen of India.

Back when I was entering med school, I was already intimately aware of AI X-risk from following LW and Scott, but at the time the timelines didn't appear so distressingly short, not like Metaculus predicting human-level AGI around 2035, as it was the last time I checked.

I expected that to become a concern in the 2040s and '50s, and as such I was more concerned with automation-induced unemployment, which I did (and still do) expect to be a serious concern for even highly skilled professionals by the 2030s.

As such, I was happy at the time to have picked a profession that would be towards the end of the list for being automated away, or at least the last such one I had aptitude for; I don't think I'd make a good ML researcher, for example, likely the final field to be eaten alive by its own creations. A concrete example even within medicine would be avoiding imaging-based fields like radiology, as well as practical ones like surgery, as machine vision and soft-body robotics leap ahead. In contrast, fields where human contact is craved and held in high esteem (perhaps irrationally), like psychiatry, are safer bets, or at least the least bad choice. Regulatory inertia is my best, and likely only, friend: assuming institutions similar to today's (justified by the short horizon), it might be several years between an autonomous surgical robot being demonstrably superior to the median surgeon and the point where it's legal for a hospital to use one and the public cottons on to the fact that it's a superior product.

I had expected to have enough time to establish myself as a consultant and to have saved enough money to insulate myself from the concerns of a world where UBI isn't actually rolled out, while emigrating to a First World country that could afford UBI, becoming a citizen within the window of time where the host country is willing to naturalize me and thus accept a degree of obligation to keep me alive and fed. The latter is a serious concern in India, volatile as it already is; while I might be well-off by local standards, unless you're a multimillionaire in USD you can't use investor backdoors to flee to countries like Australia and Singapore, and unless you're a billionaire you can't insulate yourself in the middle of a nation that is rapidly melting down as its only real advantage, cheap and cheerful labor, is completely devalued.

You either have the money (like the West) to buy the fruits of automation and then build the factories for it, or you have the factories (like China) which will be automated first and then can be taxed as needed. India, and much of South Asia and Africa, have neither.

Right now, it looks to me like the period of severe unemployment will be both soon and short, unlikely to last more than a few years before capable near-human AGIs reach parity and then superhuman status. I don't expect an outright FOOM over days or weeks, but a relatively rapid change on the order of years nonetheless.

That makes my existing savings likely sufficient for weathering the storm, and I seek to emigrate very soon. Ideally, I'll be a citizen of the country of my choice within 7 years, which is already pushing it, but then it'll be significantly easier for me to evacuate my family should it become necessary by giving them a place to move to, if they're willing and able to liquidate their assets in time.

But at the end of the day, my approach is aimed at the timeline (which I still consider less likely than not) of a delayed AGI rollout with a protracted period of widespread "Humans Need Not Apply" in place.

Why?

Because in the case of a rapid takeoff, I have no expectation of contributing meaningfully to Alignment; I don't have the maths skills for it, and even my initial plans of donating have been obviated by the billions now pouring into EA and adjacent Alignment research, be it in the labs of the giants or in more grassroots concerns like EleutherAI. I'm mostly helpless in that regard, but I still try to spread the word in rat-adjacent circles when I can, because I think convincing arguments are worth >> my measly Third World salary. My comparative advantage is in spreading awareness and dispelling misconceptions among the people who have the money and talent to do something about it, and while that would be akin to teaching my grandma to suck eggs on LessWrong, there are still plenty of forums where I can call myself better informed than 99% of the otherwise smart and capable denizens, even if that's a low bar to best.

However, at the end of the day, I'm hedging against a world where it doesn't happen, because the arrival of AGI is either going to fix everything or kill us all, as far as I'm concerned. You can't hide, and if you run, you'll just die tired, as Martian colonies have an asteroid dropped on them, and whatever pathetic escape craft we make in the next 20 years get swatted before they reach the orbit of Saturn.

If things surprisingly go slower than expected, I hope to make enough money to FIRE and live off dividends, while also aggressively seeking every comparative advantage I can get, such as being an early-ish adopter of BCI tech (i.e. not going for the first Neuralink rollout but the one after, when the major bugs have been dealt with), so that I can at least survive the heightened competition with other humans.

I do wish I had more time, as I genuinely expect to more likely than not be dead by my 40s, but that's balanced out by the wonders that await should things go according to plan, and I don't think that, given the choice, I would have chosen to be alive at any other time in history. I fully intend to marry and have kids, even if I must come to terms with the likelihood that they won't make it past childhood... After all, if I had been killed by a falling turtle at the ripe old age of 5, I'd still rather have lived than not, and unless living standards are visibly deteriorating with no hope in sight, I think my child will have a life worth living, however short.

Also, I expect the end to be quick and largely painless. An unaligned AGI is unlikely to derive any value from torturing us, and would most likely dispatch us dispassionately and efficiently, probably before we can process what's actually happening. And even if that's not the case, and I have to witness the biosphere being rapidly dismantled for parts, or if things really go to hell and the other prospect is starving to death, then I trust that I have the skills and conviction to manufacture a cleaner end for myself and the ones I've failed...

Even if it was originally intended as a curse, "may you live in interesting times" is still a boon as far as I'm concerned...

TL;DR: Shortened planning windows, conservative financial decisions, reduced personal volatility by leaving the regions of the planet that will be first to go FUBAR, avoiding specialization programs that take more than 10 years to complete, and overall conserving my energy for scenarios in which we don't all horribly die regardless of my best contributions.

Comment by AlphaAndOmega on My vision of a good future, part I · 2022-07-06T18:50:59.190Z · LW · GW

If I'm either not permitted or not able to upgrade my cognition to the point where further increases would irreversibly break my personality, or would run up against sheer latency issues from the size of the computing cluster needed to run me, then I consider that future strictly suboptimal.

I'm not attached to being a baseline human; as long as I can improve myself while maintaining my CEV, or the closest physically instantiable equivalent, I'll always take it. I strongly suspect that every additional drop of "intelligence" opens up the realm of novel experiences in a significantly nonlinear manner, with diminishing returns coming late, if ever. I want the set of novel, positive qualia available to my consciousness to expand faster than my ability to exhaust it, till Heat Death if necessary.

I'd ask whatever Friendly SAI is in charge to make a backup of my default mental state, then bootstrap myself until Matrioshka Brains struggle to hold me. The worst-case scenario is an unavoidable loss of personal identity in the process, but even then, as long as I'm backed up, the experiment is very much worth it. So what if the God that germinates from the seed of my soul bears no resemblance to me today? I wouldn't have lost anything in trying...

Comment by AlphaAndOmega on Scott Aaronson is joining OpenAI to work on AI safety · 2022-06-18T14:48:24.726Z · LW · GW

While I too was using Tao as a reference class, that's not the only reason for mentioning him. I simply expect that people with IQs that ridiculously high are better suited to tackling novel topics, and I do mean novel: building a field from scratch, ideally with mathematical precision.

All the more if they have a proven track record, especially in mathematics, and I suspect that if Tao could be convinced to work on the problem, he would have genuinely significant insights. That, and a cheerleader effect, which wouldn't be necessary in an ideal world, but that's hardly the one we live in, is it?

Comment by AlphaAndOmega on Scott Aaronson is joining OpenAI to work on AI safety · 2022-06-18T05:15:56.624Z · LW · GW

I wonder what it would take to bring Terence Tao on board...

At any rate, this is good news; the more high-status people in academia take Alignment seriously, the easier it becomes to convince the next one, in what I hope is a virtuous cycle!