Posts

Can you define "utility" in utilitarianism without using words for specific human emotions? 2022-09-21T03:29:34.261Z
Bay Rationalists Field Day 2022-02-22T19:55:21.065Z
Why rationalists are not much concerned about mortality? 2022-02-10T00:11:46.068Z
Bay Area Rationalists Field Day 2022-02-01T02:45:16.648Z
Bay Area Rationalist Field Day 2022-01-19T16:40:47.507Z
What Do We Know About The Consciousness, Anyway? 2021-03-31T23:30:29.709Z

Comments

Comment by SurvivalBias (alex_lw) on Can you define "utility" in utilitarianism without using words for specific human emotions? · 2022-09-22T02:03:30.587Z · LW · GW

>No, they are not. Animals can feel e.g. happiness as well.

Yeah, but the problem here is that we perceive happiness in animals only insofar as it looks like our own happiness. Did you notice that the closer an animal is to a human, the more likely we are to agree it can feel emotions? An ape can definitely display something like human happiness, so we're pretty sure it can experience it. A dog can display something mostly like human happiness, so most likely they can feel it too. A lizard - meh, maybe but probably not. An insect - most people would say no. Maybe I'm wrong and there's an argument that animals can experience happiness which is not based on their similarity to us; in that case I'm very curious to see it.

>Sentience

For the record, I believe we do have at least a crude mechanistic model of how consciousness works in general, and yes, of what's going on with the hard problem of consciousness in particular (the latter being a bit of a wrong question).

Otherwise, I actually think it somewhat answers my question. One qualm of mine would be that sentience does seem to come on a spectrum - but that can in theory be addressed by some scaling factor. The bigger issue for me is that it implies that a hardcore total utilitarian would be fine with a future populated by trillions of sentient but otherwise completely alien AIs successfully achieving their alien goals (e.g. maximizing paperclips) and experiencing a desirable-state-of-consciousness about it. But I think some hardcore utilitarians would bite this bullet, and that wouldn't be the biggest bullet for a utilitarian to bite either.

Comment by SurvivalBias (alex_lw) on Can you define "utility" in utilitarianism without using words for specific human emotions? · 2022-09-21T20:08:30.871Z · LW · GW

>Utility itself is an abstraction over the level of satisfaction of goals/preferences about the state of the universe for an entity.

You can say that a robot toy has a goal of following a light source. Or that a thermostat has a goal of keeping the room temperature at a certain setting. But I have yet to hear of anyone counting those things toward total utility calculations.

Of course a counterargument would be "but those are not actual goals, those are the goals of the humans that set it", but in that case you've just hidden all the references to humans inside the word "goal" and are back to square one.

Comment by SurvivalBias (alex_lw) on Can you define "utility" in utilitarianism without using words for specific human emotions? · 2022-09-21T20:02:46.242Z · LW · GW

So utility theory is a useful tool, but as far as I understand it's not directly used as a source of moral guidance (although I assume once you have some other source you can use utility theory to maximize it). Whereas utilitarianism as a school of metaethics is concerned exactly with that, and you can hear people in EA talking about "maximizing utility" as an end in and of itself all the time. It was in this latter sense that I was asking.

Comment by SurvivalBias (alex_lw) on AGI Ruin: A List of Lethalities · 2022-06-15T23:07:37.985Z · LW · GW

To start off, I don't see much point in formally betting $20 on an event conditioned on something I assign <<50% probability of happening within the next 30 years (a powerful AI is launched, fails catastrophically, we're both still alive to settle the bet, and there is an unambiguous attribution of the failure to the AI). I mean sure, I can accept the bet, but largely because I don't believe it matters one way or another, so I don't think it counts from the epistemic virtue standpoint.

But I can state what I'd disagree with in your terms if I were to take it seriously, just to clarify my argument:

  1. Sounds good.
  2. Mostly sounds good, but I'd push back that "not actually running anything close to the dangerous limit" sounds like a win to me, even if theoretical research continues. One pretty straightforward Schelling point for a ban/moratorium on AGI research is "never train or run anything > X parameters", with X << the dangerous level at the then-current paradigm. It may be easier to explain to the public and politicians than many other potential limits, and this is important. It's much easier to control too - checking that nobody collects and uses a gigashitton of GPUs [without supervision] is easier than checking every researcher's laptop. Additionally, we have nuclear weapons tests as a precedent.
  3. That's the core of my argument, really. If the consortium of 200 world experts says "this happened because your AI wasn't aligned, let's stop all AI research", then Facebook AI or China can tell the consortium to go fuck themselves, and I agree with your skepticism that it'd make all labs pause for even a month (see: gain-of-function research, covid). But if it becomes public knowledge that a catastrophe of 1mln casualties happened because of AI, then it can trigger a panic which will make both world leaders and the public really, honestly want to restrict this AI stuff, and it will both justify and enable the draconian measures required to make every lab actually stop the research. Similar to how panics about nuclear energy, terrorism and covid worked. I propose defining "public agreement" as "leaders of the relevant countries (defined as the countries housing the labs from p.1, so US, China, maybe UK and a couple of others) each issue a clear public statement saying that the catastrophe happened because of an unaligned AI". This is not an unreasonable ask; they have been this unanimous about quite a few things, including vaccines.

Comment by SurvivalBias (alex_lw) on AGI Ruin: A List of Lethalities · 2022-06-14T20:46:47.658Z · LW · GW

What Steven Byrnes said, but also my reading is that 1) in the current paradigm it's near-damn-impossible to build such an AI without creating an unaligned AI in the process (how else do you gradient-descend your way into a book on aligned AIs?) and 2) if you do make an unaligned AI powerful enough to write such a textbook, it'll probably proceed to convert the entire mass of the universe into textbooks, or do something similarly incompatible with human life.

Comment by SurvivalBias (alex_lw) on AGI Ruin: A List of Lethalities · 2022-06-14T20:39:25.785Z · LW · GW

It might, given some luck and all the pro-safety actors playing their cards right. Assuming by "all labs" you mean "all labs developing AIs at or near the then-current limit of computational power", or something along those lines, and by "research" you mean "practical research", i.e. training and running models. The model I have in mind is not that everyone involved will intellectually agree that such research should be stopped, but that a large enough share of the public and governments will get scared and exert pressure on the labs. Consider how most of the world was able to (imperfectly) coordinate to slow Covid spread, or how nobody has prototyped a supersonic passenger jet in decades, or, again, nuclear energy - we as a species can do such things in principle, even though often for the wrong reasons.

I'm not informed enough to give meaningful probabilities on this, but to honor the tradition, I'd say that given a catastrophe with immediate, graphic death toll >=1mln happening in or near the developed world, I'd estimate >75% probability that ~all seriously dangerous activity will be stopped for at least a month, and >50% that it'll be stopped for at least a year. With the caveat that the catastrophe was unambiguously attributed to the AI, think "Fukushima was a nuclear explosion", not "Covid maybe sorta kinda plausibly escaped from the lab but well who knows".

Comment by SurvivalBias (alex_lw) on AGI Ruin: A List of Lethalities · 2022-06-14T19:17:22.476Z · LW · GW

The important difference is that nuclear weapons are destructive because they worked exactly as intended, whereas the AI in this scenario is destructive because it failed horrendously. Plus, the concept of rogue AI has been firmly ingrained into public consciousness by now, which afaik was not the case with extremely destructive weapons in the 1940s [1]. So hopefully this will produce more public outrage (and scare among the elites themselves) => stricter external and internal limitations on all agents developing AIs. But in the end I agree, it'll only buy time, maybe a few decades if we're lucky, to solve the problem properly or to build more sane political institutions.

  1. ^

    Yes, I'm sure there was a sci-fi novel or two before 1945 describing bombs of immense power. But I don't think any of them were anywhere near as widely known as The Matrix or Terminator.

Comment by SurvivalBias (alex_lw) on AGI Ruin: A List of Lethalities · 2022-06-13T23:41:14.582Z · LW · GW

How possible is it that a misaligned, narrowly-superhuman AI is launched, fails catastrophically with casualties in the 10^4 - 10^9 range, and the [remainder of] humanity is "scared straight" and from that moment onward treats the AI technology the way we treat nuclear technology now - i.e. effectively strangles it into stagnation with regulations - or even more conservatively? From my naive perspective it is somewhat plausible politically, based on the only example of ~world-destroying technology that we have today. And this list of arguments doesn't seem to rule out this possibility. Is there an independent argument by EY as to why this is not plausible technologically? I.e., why AIs narrow/weak enough to not be inevitably world-destroying but powerful enough to fail catastrophically are unlikely to be developed [soon enough]?

(To be clear, the above scenario is nothing like a path to victory and I'm not claiming it's very likely. More like a tiny remaining possibility for our world to survive.)

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-22T01:41:17.993Z · LW · GW

Yes and no. 1-6 are obviously necessary but not sufficient - there's much more to diet and exercise than "not too much" and "some" respectively. 7 and 8 are kinda minor and of dubious utility except in some narrow circumstances, so whatever. And 9 and 10 are hotly debated, and that's exactly what you'd need rationality for, as well as figuring out the right pattern of diet and exercise. And I mean right for each individual person, not in general, and the same with supplements - a 60-year-old should have a much higher tolerance for the potential risks of a longevity treatment than a 25-year-old, since the latter has less to gain and more to lose.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T16:16:42.183Z · LW · GW

I would be very surprised if inflammation or loss of proteostasis did not have any effect on fascia, if only because they have a negative effect on ~everything. But more importantly, I don't think any significant number of people are dying from fascia stiffness? That's one of the main ideas behind the hallmarks of aging: you don't have to solve the entire problem in its every minuscule aspect at once. If you could just forestall all these hallmarks, or even just some of them, you could probably increase lifespan and healthspan significantly, thus buying more time to fix other problems (or develop completely new approaches like mind uploading or regenerative medicine or whatever else).

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T16:01:14.902Z · LW · GW

You're fighting a strawman (nobody's going to deny death to anyone, and except for the seriously ill, most people who truly want to die now have the option to do so; I'm actually pro-euthanasia myself). And, once again, you want to inflict on literally everyone a fate you say you don't want for yourself. Also, I don't accept the premise that there's any innate power balance in the universe that we ought to uphold even at the cost of our lives; we do not inhabit a Marvel movie. And you're assuming knowledge which you can't possibly have, about exactly how human consciousness functions and what alterations to it we'll be able to make in the next centuries or millennia.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T15:52:37.983Z · LW · GW

That's, like, a 99.95% probability, one-in-two-thousand odds. You'd have two orders of magnitude higher chances of survival if you were to literally shoot yourself with a literal gun. I'm not sure you can forecast anything at all (about humans or technologies) with this degree of certainty decades into the future - definitely not that every single one of dozens of attempts in a technology you're not an expert in fails, and every single one of hundreds of attempts in another technology you're not an expert in (building aligned AGI) fails as well.
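(Purely as an illustrative sanity check of that conversion - toy numbers only, just restating the figures above:)

```python
# Toy restatement of the odds above: ~99.95% certainty of failure -> ~1-in-2000 chance of success.
p_failure = 0.9995                  # the stated degree of certainty that it won't happen
p_success = 1 - p_failure           # 0.0005
print(f"1 in {1 / p_success:.0f}")  # -> 1 in 2000
# Two orders of magnitude higher would be ~5%, the rough ballpark used above
# for the gunshot comparison (an assumed figure, used only for contrast).
print(f"{p_success * 100:.0%}")     # -> 5%
```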

>I don't believe there are any tradeoffs I can make which would give me a 50% chance to live to 300 years.

I don't believe it either; it's a thought experiment. I assumed that'd be obvious, since it's a very common technique for estimating how much one should value low-probability outcomes.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T05:26:09.502Z · LW · GW

Equating high-risk/high-reward strategies with Pascal's Wager is an all too common failure mode, and putting numbers on your estimates helps avoid it. How much is VERY TINY, how much do you think the best available options really cost, and how much would you be willing to pay (assuming you have that kind of money) for a 50% chance of living to 300 years?

To be clear, I'm not so much trying to convince you personally, as to get a generally better sense of the inferential distances involved.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:54:29.451Z · LW · GW

>but that's not anywhere near solving it in principle

Of course they are not, that's not the point. The point is that they can add more time for us to discover more cures - to the few decades most rationalists already have, considering the age distribution. During that time new approaches will likely be discovered, hopefully adding even more time, until we get to mind uploading, or nanobots constantly repairing the body, or some other complete solution. The concept is called longevity escape velocity.

>but I think it's more likely for bio-brains to continue dying and the immortal are digital from birth

Why would you think that?

And another question. Imagine you've found yourself with an incurable disease and 3 years to live. Moreover, it's infectious and it has infected everyone you love. Would you try experimental cures and encourage them to try as well, or would you just give up so as not to reduce your enjoyment of the remaining time?

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:40:28.869Z · LW · GW

Oh no, what if I and everyone I care about only get to live 5 billion years instead of 80. And all that only to find out it was a half-assed hypothetical.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:32:59.997Z · LW · GW

Just a reminder: in this argument we are not the modern people who get to feel all moral and righteous about themselves, we are the Greeks. Do you really want to die for some hypothetical moral improvement of future generations? If so, go ahead and be my guest, but I myself would very much rather not.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:27:45.018Z · LW · GW

Hmm that's interesting, I need to find those people.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:26:24.679Z · LW · GW

>There are plenty of people who have AGI timelines that suggest to them that either AGI will kill them before they reach their natural mortality or AGI will be powerful enough to prevent their natural mortality by that point.

True, but there are also plenty of people who think otherwise, other comments being an example.

I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on the list of covid transmission vectors. And in any case, SENS is far from being the only organization, there's many others with different approaches and focus areas, probably one of them covers fascia even if SENS doesn't.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T03:11:53.365Z · LW · GW

I personally believe exactly the right kind of advocacy may be extremely effective, but that's really a story for a post. Otherwise yeah, AGI is probably higher impact for those who can and want to work there. However, in my observation the majority of rationalists do not in fact work on AGI, and imo life extension and adjacent areas have a much wider range of opportunities and so could be a good fit for many of those people.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T02:59:45.652Z · LW · GW

>The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.

 

Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?

There's at least two currently ongoing clinical trials with an explicit goal of slowing aging in humans (TAME and PEARL), that's just the most salient example. At some point I'll definitely make a post with a detailed answer to the question of "what can I do". As for the problem not being solvable in principle, I don't believe I've ever seen an argument for this which didn't involve a horrendous strawman or quasi-religion of some sort.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-16T02:33:26.552Z · LW · GW

Smallpox was also a very old problem, and lots of smart people spent lots of time thinking about it, until they figured out a way to fix it. In theory, you could make an argument that no viable approaches exist today or in the foreseeable future and so harm reduction is the best strategy (from a purely selfish standpoint; working on the problem would still help the people of the future in this scenario). However, I don't think in practice it would be a very strong argument, and in any case you are not making it.

If you're, say, 60+ then yes, anti-aging is not a realistic option and all you have is cryonics, but most of the people in the community are well below 60. And even for a 60+ year old, I'd say that using the best currently available interventions to get cryopreserved a few years later and have a slightly higher chance of reanimation would be a high priority.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T21:33:58.121Z · LW · GW

My impression is that it's more than most people do! [Although full disclosure, myself I'm signed up with CI and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two when I'm past 30.]

But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T21:22:09.228Z · LW · GW

Well, about 55 million people die per year, most of them from aging, so solving it for everyone today vs say 50-60 years later with AGI would save 2-3 billion potentially indefinite, or at least very, very long, lives. This definitely counts as "much impact for many people" in my book.
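(A minimal back-of-the-envelope version of that arithmetic, using only the figures already in this comment - the 50-60 year AGI timeline is the hypothetical from this thread, not a forecast:)

```python
# Back-of-the-envelope: lives lost to aging while waiting for a hypothetical AGI-era cure.
deaths_per_year = 55_000_000            # ~55 million deaths per year worldwide, most aging-related
for years_until_agi in (50, 60):        # the hypothetical timelines mentioned above
    lives = deaths_per_year * years_until_agi
    print(f"{years_until_agi} years of waiting ~ {lives / 1e9:.2f} billion lives")
# -> roughly 2.75 to 3.3 billion, i.e. the "2-3 billion" figure above
```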

 

But also, what's the probability that we will indeed get AGI in the next 50 or 70 years? I mean, I know it's a hotly debated topic, so I'm asking for your personal best estimate.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T21:07:31.009Z · LW · GW

>Mortality is thought about by everyone, forever.

Technically probably yes, but the specific position of "it is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures maybe kinda tentatively flirting with it. So, to me these are two really very comparable positions, very unconventional but also very obvious if you reason from first principles and some basic background knowledge. Maybe that's why I may sound a bit frustrated or negative: it feels like the people who clearly should be able to reach this conclusion, for some reason don't. And that's why I'm basically asking this question, to understand why they don't, or what I'm missing, or whatever else is going on.

 

By the way, can you clarify your take on the premise of the question? I'm still not sure whether you think:

  • Rationalists are paying comparatively little attention to mortality and it is justified
  • Rationalists are paying comparatively little attention to mortality and it is not justified
  • Rationalists are in fact paying a lot of attention to mortality and I'm just not looking in the right places
  • Something else

 

>Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.

Ok, in that case akrasia etc. debates are very relevant. But even so, not everybody knows. Maybe the facts themselves - that you should exercise and watch what you eat - are relatively uncontroversial (although I still remember the dark days when EY himself was advocating on facebook that "calories in / calories out" is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for the lack of data but for the lack of interpretation - i.e. something that we could well try to do on lesswrong. So it'd be cool to see more posts e.g. like this.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T17:38:51.856Z · LW · GW

I'm well aware, but this comment section is the first time I hear there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T17:36:27.136Z · LW · GW

So your argument is that people should die for their own good, despite what they think about it themselves? Probably not, since that'd be almost a caricature villain position, but I don't see where else you are going with this. And the goal of "not developing an excruciatingly painful chronic disease" is not exactly at odds with the goal of "combat aging".

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T15:52:43.796Z · LW · GW

>By the way I would jump on the opportunity of an increased life span to say 200-300 years, 80 seems really short, but not indefinite extension

Ok, that's honestly good enough for me; I say let's get there and then argue whether we need more extension.

I'm no therapist, and not even good, as a regular human being, at talking about carrying burdens that make one want to kill themselves eventually; you should probably seek the advice of someone who can do a better job at it.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T15:48:56.204Z · LW · GW

Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.

With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years depending on how right you get it, your age, health and other factors. For another, even if working on the cause, donating to it or advocating for it won't help you yourself, it can still help many people you know and love, not to mention everyone else. Finally, the whole point of epistemic rationality (arguably) is to work correctly with probabilities. How certain are you that there will be no LEV in 20 years? If there's a 10% chance, isn't it worth giving it a try and increasing it a bit? If you're ~100% certain, where do you get this information?
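(To make the probability point concrete, here's a toy expected-value sketch - every number below is a made-up placeholder, not an estimate:)

```python
# Toy expected-value sketch: do even low odds of LEV dominate the calculation?
# All inputs are placeholder assumptions for illustration only.
baseline_years_left = 50          # assumed remaining life expectancy with no interventions
known_interventions_gain = 15     # midpoint of the 10-30 year range mentioned above
p_lev_in_20_years = 0.10          # the "what if there's a 10% chance" figure
lev_payoff_years = 1000           # stand-in for "indefinitely long"; any large number works

expected_years = (baseline_years_left
                  + known_interventions_gain
                  + p_lev_in_20_years * lev_payoff_years)
print(f"expected additional years: {expected_years:.0f}")  # 50 + 15 + 100 = 165
# Even a modest probability of LEV swamps the rest once the payoff is large enough,
# which is why the source of ~100% certainty that "no LEV in 20 years" matters.
```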

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T15:41:32.031Z · LW · GW

Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T05:59:50.672Z · LW · GW

>When one realizes how far life is from the rosy picture that is often painted, one has a much easier time accepting death, even while still fearing it or still wanting to live as long as possible.

Do you truly estimate your life as not worth or barely worth living? If yes, I'm deeply sorry about that and I hope you'll find a way to improve it. Let me assure you that there's many people, myself included, who truly genuinely love life and enjoy it.

If it's just a comforting lie you believe in believing to make the thought of death more tolerable, well, I can understand that - death really is terrifying - but then consider maybe not using it as an argument.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T05:29:48.684Z · LW · GW

>I'm pretty concerned, I'm trying to prevent the AI catastrophe happening that will likely kill me.

That was one of my top guesses, and I'm definitely not implying that longevity is higher or equal priority than AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say above 10%) but since AGI is the greatest risk all resources should be focused on it? If the latter, do you believe it should be the strategy of the community as a whole or just those working on AGI alignment directly?

[Exercising 30 min a few times a week is great, and I'm glad your housemate pushes you to do it! But, well, it's like not going to big concerts in Feb 2020 - it's basic sanity that most regular people would also know to follow. Hell, it's literally the FDA advice and has been for decades.]

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T05:05:21.399Z · LW · GW

I agree, Ukraine was an exaggeration. I checked the tags and grants before asking the question, and I'm well aware of SENS but never thought of or heard of it being adjacent - is it? I didn't know of the three defunct institutions either, so I should raise my estimate somewhat.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T04:47:47.361Z · LW · GW

I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". Edited the question to make it more clear. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to counterexamples (as Elizabeth, and in a certain stretched sense Ben Pace, did).

 

>low hanging fruit might be picked WRT mortality

I'm doubtful, but I can certainly see a strong argument for this! However, my point is that, like with existential risks, it is a serious enough problem that it's worth focusing on even after the low-hanging fruit has been picked.

 

>Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff

Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T04:33:36.483Z · LW · GW

Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?

 

And I totally see how it kinda makes sense from a distance, because it's what the most vocal figures of the anti-aging community often claim. The problem is that it was also the case 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And, to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with e.g. the revolution in ML over the same period. And ironically, if you talk to the rank-and-file folks in the longevity community, many of them are stoked about AGI coming and saving us all from death, because they see it as the only hope for aging to be solved within their lifetime. It is certainly possible that we solve aging in the next 20 years, but it's nowhere near guaranteed, and my personal estimate of this happening (without aligned AGI help) is well below 50%. Are you saying your estimates of it happening soon enough are close to 100%?

 

I also wouldn't call billion-dollar investments common - the only example I can think of is Altos Labs, and it's recent, and so far nobody seems to know wtf exactly they are doing. And AI safety also has billion-dollar-range players, namely OpenAI.

 

Most importantly, throwing more money at the problem isn't the only possible approach. Consider how, early in the COVID pandemic, a lot of effort was put into figuring out what exactly the right strategy was on the individual level. Due to various problems, longevity advice suffers from similar levels of uncertainty. There's a huge amount of data gathered, but it's all confusing and contradictory, the models are very incomplete, there are various sources of bias, etc. - and it's a hugely important problem to get right for ~everyone. Sounds like a perfect use case for the methods of rationality to me, yet there's very little effort in this direction, nothing to compare with COVID - which is nowhere near as lethal! And just like with COVID, even if someone is young and optimistic enough to be confident they'll be able to jump on the LEV train, almost everyone has friends or loved ones who are much older.

Comment by SurvivalBias (alex_lw) on Why rationalists are not much concerned about mortality? · 2022-02-10T02:19:51.641Z · LW · GW

I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with focus on life extension (not counting cryonics, it was last Sunday)? Any major figures in the community focused on this area?

To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for present-day community members to die, with the exact proportion between them depending on one's age and AI timeline estimates, I would expect a roughly comparable level of attention, and that is very much not what I observe. Am I looking in the wrong places?

Comment by SurvivalBias (alex_lw) on Podcast Club IRL: Julia Galef on the Scout Mindset · 2021-08-18T14:44:08.816Z · LW · GW

Meetup link seems to be broken

Comment by SurvivalBias (alex_lw) on Reasons against anti-aging · 2021-04-21T18:54:12.392Z · LW · GW

As someone who is very much in favor of anti-aging, I'd answer it with something like this: "I'm fine with you entertaining all these philosophical arguments, and if you like them so much that you literally want to die for them, by all means. But please don't insist that I and everyone I care or will care about should also die for your philosophical arguments."

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-06T06:06:24.397Z · LW · GW

>we're perceiving things as "qualities", as "feels", even though all we are really perceiving is data

I consider it my success as a reductionist that this phrase genuinely does not make any sense to me.

>But he says he doesn't think the word "illusion" is a helpful word for expressing this, and illusionism should have been called something else, and I think he's probably right.

Yep, can't agree more, basically that's why I was asking - "illusion" doesn't sound like the right concept here.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-06T06:00:23.124Z · LW · GW

Those are all great points. Regarding your first question, no, that's not the reasoning I have. I think consciousness is the ability to reflect on myself firstly because it feels like the ability to reflect on myself. Kind of like how the reason I believe I can see is that when I open my eyes I start seeing things, and if I interact with those things they really are mostly where I see them - nothing more sophisticated than that. There's a bunch of longer, more theoretical arguments I can bring for this point, but I never thought I should, because I was kind of taking it as a given. It may well be me falling into the typical mind fallacy, if you say some people say otherwise. So if you have different intuitions about consciousness, can you tell me:

  1. How do you subjectively, from the first person view, know that you are conscious?
  2. Can you genuinely imagine being conscious but not self aware from the first person view?
  3. If you get to talk to, and interact with, an alien or an AI of unknown power and architecture, how would you go about finding out if they are conscious?

>And because it doesn't automatically fits into "If you have a conscious mind subjectively perceiving anything about the outside world, it has to feel like something" if you just replace "conscious" by "able to percieve itself".

Well, no, it doesn't fit quite as simply, but overall I think it works out. If you have an agent able to reflect on itself and model itself perceiving something, it's going to reflect on the fact that it perceives something. I.e., it's going to have some mental representation both for the perception and for itself perceiving it. It will be able to reason about itself perceiving things, and if it can communicate it will probably also talk about it. Different perceptions will be in relation to each other (e.g. the sky is not the same color as grass, and grass color is associated with summer and warmth and so on). And, perhaps most importantly, it will have models of other such agents perceiving things, and it will assume on a high, abstract level that they have the same perceptions in them. But it will only have access to the lower-level data for such perceptions from its own sensory inputs, not others', so it won't be able to tell for sure what it "feels like" to them, because it won't be getting their stream of low-level sensory inputs.

In short, I think - and please do correct me if you have a counterexample - that we have reasons to expect such an agent to make any claim humans make (given similar circumstances and training examples), and we can make any testable claim about such an agent that we can make about a human.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-03T16:28:01.712Z · LW · GW

Ah, I see. My take on this question would be that we should focus on the word "you" rather than "qualia". If you have a conscious mind subjectively perceiving anything about the outside world (or its own internal workings), it has to feel like something, almost by definition. Like, if you went to get your covid shot and it hurt, you'd say "it felt like something". If and only if somehow you didn't even feel the needle piercing your skin would you say "I didn't feel anything". There have been experiments showing that people can react to a stimulus they are not subjectively aware of (mostly visual stimuli), but I'm pretty sure in all those cases they'd say they didn't see anything - basically that's how we know they were not subjectively aware of it. What would it even mean for a conscious mind to be aware of a stimulus without it "feeling like something"? It must have some representation in the consciousness; that's basically what we mean by "being aware of X" or "consciously experiencing X".

So I'd say that given a consciousness experiencing stuff, you necessarily have conscious experiences (aka qualia); that's a tautology basically. So the question becomes why some things have consciousness, or to narrow it down to your question - why are (certain) recursively self-modeling systems conscious? And that's kind of what I was trying to explain in part 4 of the post, and approximately the same idea, just from another perspective, is much better covered in this book review and this article.

But if I tried to put it in one paragraph, I'd start with - how do I know that I'm conscious, and why do I think I know it? And the answer would be a ramble along the lines of: well, when I look into my mind I can see me, i.e. some guy who thinks and makes decisions and is aware of things, and has emotions and memories and so on and so forth. And at the same time as I see this guy, I also am this guy! I can have different thoughts whenever I choose to (to a degree), I can do different things whenever I choose to (to a still more limited degree), and at the same time I can reflect on the choice process. So my theory is that I can perceive myself as a human mind mostly because the self-reflecting model - which is me - has trained to perceive other human minds so well that it learned to generalize to itself (see the whole post for the details). Although Graziano, in the article and book I linked, provides a very convincing explanation as to why this self-modeling would also be very helpful for general reasoning ability - something I was unsuccessfully trying to figure out in part 5.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-03T15:21:23.777Z · LW · GW

Your "definition" (which really isn't a definition but just three examples) have almost no implications at all, that's my only issue with it.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-02T18:56:24.443Z · LW · GW

I don't think qualia - to the degree it is at all a useful term - has much to do with the ability to feel pain, or anything else. In my understanding, all definitions of qualia assume it is a different thing from purely neurological perceptions (which is what I'd understand by "feelings"); more specifically, that perceptions can generate qualia sometimes, in some creatures, but they don't do so automatically.

Otherwise you'd have to argue one of the two:

  1. Either even the most primitive animals, like worms which you can literally simulate neuron by neuron, have qualia as long as they have some senses and neurons,
  2. ...or "feel pain" and e.g. "feel warmth" are somehow fundamentally different, where the first necessarily requires/produces a quale and the second may or may not produce it.

Both sound rather indefensible to me, so it follows that an animal can feel pain without experiencing a quale of it, just like a scallop can see light without experiencing a quale of it. But two caveats on this. First, I don't have a really good grasp on what qualia are, and as wikipedia attests, neither do the experts. I feel there's some core of truth that people are trying to get at with this concept (something along the lines of what you said in your first comment), but it's also very often used as a rug for people to hide their confusion under, so I'm always skeptical about using the term. Second, whether or not one should ascribe any moral worth to agents without consciousness/qualia is decidedly not part of what I'm saying here. I personally do, but as you say it depends on one's preferences, and so it's largely orthogonal to the question of how consciousness works.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-02T18:07:25.930Z · LW · GW

Looking at your debate both with me and with Gordon below, it seems like your side of the argument mostly consists of telling the opponent "no, you're wrong" without providing any evidence for that claim. I honestly did my best to raise the sanity waterline a little, but without success, so I don't see much sense in continuing.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-02T16:38:56.170Z · LW · GW

Sure, I wasn't claiming at any point to provide a precise mathematical model, let alone an implementation, if that's what you're talking about. What I was saying is that I have guesses as to what that mathematical model should be computing. In order to tell whether the person experiences a quale of X (in the sense of them perceiving this sensation), you'd want to see whether the sensory input from the eyes corresponding to the red sky is propagated all the way up to the top level of the predictive cascade - the level capable of modeling itself to a degree - and whether this top level's state is altered in a way that reflects itself observing the red sky.

And admittedly what I'm saying is super high level, but I've just finished reading a much more detailed and, I think, fully compatible account of this in this article that Kaj linked. In their terms, I think the answer to your question is that the quale (perceived sensation) arises when both attention and awareness are focused on the input - see the article for the specific definitions.

The situation where the input reaches the top level and affects it, but is not registered subjectively, corresponds to attention without awareness in their terms (or to the information having propagated to the top level, but the corresponding change in the top-level state not being reflected in itself). It's observed in people with blindsight, and has also been recreated experimentally.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-02T02:53:25.880Z · LW · GW

Replacing it with another word which you then use identically isn't the same as tabooing it; that kind of defeats the purpose.

>there can still be agreement that they in some sense about sensory qualities.

There may be, but then it seems there's no agreement about what sensory qualities are.

>I've said s already, haven't I? A solution to the HP would allow you to predict sensory qualities from detailed brain scans, in the way that Mary can't.

No, you have not - in fact, in all your comments you haven't mentioned "predict" or "Mary" or "brain" even once. But now we're getting somewhere! How do you tell whether a certain solution can or can't predict "sensory qualities"? Or better, when you say "predict qualities from the brain scans", do you mean "feel/imagine them yourself as if you've experienced those sensory inputs firsthand", or do you mean something else?

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-01T23:16:29.209Z · LW · GW

Yeah, although it seems only in the sense where "everything [we perceive] is an illusion"? Which is not functionally different from "nothing is an illusion". Unless I'm missing something?

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-01T23:13:23.557Z · LW · GW

Yeah, that sounds reasonable and in line with my intuitions. Where by "somebody" I would mean consciousness - the mind modeling itself. The difference between "qualia" and "no qualia" would be the difference between the signal of e.g. pain stopping at the lower levels versus propagating all the way to the topmost, conscious level, which would predict not just receiving the signal (as all layers below also do), but also predict its own state being altered by receiving the signal. In the latter case, the reason the mind knows there's "somebody" experiencing it is because it observes (=predicts) this "somebody" experiencing it. And of course that "somebody" is the mind itself.

>And then it raises questions like "why not having ability to do recursion stops you from feeling pain".

Well my - and many other people's - answer to that would be that of course it doesn't, for any reasonable definition of pain. Do you believe it does?

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-01T22:50:17.832Z · LW · GW

I'm not trying to pull the subject towards anything, I'm just genuinely trying to understand your position, and I'd appreciate a little bit of cooperation on your part in this. Such as, answering any of the questions I asked. And "I don't know" is a perfectly valid answer, I have no intention to "gotcha" you or anything like this, and by your own admission the problem is hard. So I'd ask you to not interpret any of my words above or below as an attack, quite the opposite I'm doing my best to see your point.

>You should be using the famous hardness of the HP as a guide to understanding it ... If it seems easy, you've got it wrong.

With all due respect, that sounds to me like you're insisting that the answer to a mysterious question should itself be mysterious, which it shouldn't. Sorry if I'm misinterpreting your words; in that case, again, I'd appreciate you being a bit clearer about what you're trying to say.

>FYI, there is no precise and universally accepted definition of "matter".

Exactly, and that is why using a Wikipedia article for definitions in such debates is not a good idea. Ideally, I'd ask you (or try myself, in an identical situation) to taboo the words "qualia" and "hard problem" and try to explain exactly what question(s) you think remain unanswered by the theory. But failing that, we can at least agree on a definition of qualia.

And even if we insist on using Wiki as the source of truth, here's a direct quote: "Much of the debate over their importance hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of various definitions of qualia remain controversial because they are not verifiable." To me that sounds at odds with, again a direct quote: "Qualia are sufficiently well defined to enable us to tell that you have not solved the hard problem". If the nature and even the existence of something depend on the definition, it's not sufficiently well defined to tell whether theory X explains it. (Which is all not to say that you're wrong and wikipedia is right - I don't think it's the highest authority on such matters. Just that you seem to have some different, narrower definition in mind, so we can't use a reference to wiki as the source of truth.)

>Note that not everything that is true of qualia (or anything else) needs to be in the definition.

Yeah, I kinda hoped I wouldn't need to spell it out, but okay, here we go. You're correct, not everything that's true of qualia needs to be in the definition. However, I would insist that a reasonable definition doesn't directly contradict any important true facts. Whereas one of the definitions in that wiki article (by Dennett) says that qualia are "private; that is, all interpersonal comparisons of qualia are systematically impossible."

>I would not expect a definition alone to answer every possible question.

Again, totally agree - that's why I started with specific questions rather than definitions. So, considering that "I don't know" is a perfectly reasonable answer, could you maybe try answering them? Or, if that seems like a better option to you, give an example of a question which you think proves Graziano's/my theory isn't sufficient to solve the hard problem?

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-01T20:26:52.544Z · LW · GW

The part that you quoted doesn't define anything; it's just 3 examples, which together may just as well be described simply as "sensations". And the Wikipedia article itself lists a number of different, non-equivalent definitions, none of which is anything I'd call rigorous, plus a number of references to qualia proponents who claim that this or that part of some definition is wrong (e.g. Ramachandran and Hirstein say that qualia could be communicated), plus a list of qualia opponents who have significant issues with the whole concept. That is exactly what I'm referring to as "ill-defined".

Now you say that you think qualia are well-defined, so I'm asking you to help me understand the definition you have in mind, so we can talk about it meaningfully. That's why the questions matter - I can't answer whether I think I or Clark or Graziano or whoever else solved the hard problem if I don't understand what you mean by the hard problem (not all definitions of which even include the term "qualia").

>Do you have something to say about qualia that depends on knowing the answer?

Well, of course - everything I have to say depends on knowing the answer, because the answer would help me understand what it is that you mean by qualia. So do you feel like your definition allows you to answer this question? And, while we're at it, my follow-up question of whether you assume animals have qualia, and if yes, which of them? If so, that'd be very helpful for my understanding.

Comment by SurvivalBias (alex_lw) on What Do We Know About The Consciousness, Anyway? · 2021-04-01T19:52:48.547Z · LW · GW

Thanks a lot for the links! I didn't look into them yet, but the second quote sounds pretty much exactly like what I was trying to say, only expressed more intelligibly. Guess the broad concept is "in the air" enough that even a layman can grope their way to it.