Posts

Adele Lopez's Shortform 2020-08-04T00:59:24.492Z · score: 6 (1 votes)
Optimization Provenance 2019-08-23T20:08:13.013Z · score: 41 (25 votes)

Comments

Comment by adele-lopez-1 on The rationalist community's location problem · 2020-09-24T03:24:22.937Z · score: 2 (1 votes) · LW · GW

Lots of people in the community have seasonal affective disorder (see e.g. https://www.lesswrong.com/posts/hC2NFsuf5anuGadFm/how-to-build-a-lumenator), so that would lead me to expect people to want to live in places with more sunlight, which tend not to have cold weather.

Comment by adele-lopez-1 on What happens if you drink acetone? · 2020-09-16T05:14:16.196Z · score: 7 (4 votes) · LW · GW

I think you missed the most interesting effect, which is that ingesting it would put you into some sort of ketosis or, at higher levels, ketoacidosis.

Comment by adele-lopez-1 on Escalation Outside the System · 2020-09-09T00:32:59.240Z · score: 7 (4 votes) · LW · GW

On one hand, I think you're mostly right about this not being an actual proposal, but I also think that people saying stuff like this would (and will) use guillotines if/when they have the opportunity and think they can get away with it.

Comment by adele-lopez-1 on Forecasting Thread: AI Timelines · 2020-08-22T15:35:04.186Z · score: 4 (2 votes) · LW · GW

That 30% where we get our shit together seems wildly optimistic to me!

Comment by adele-lopez-1 on Forecasting Thread: AI Timelines · 2020-08-22T15:29:32.529Z · score: 18 (8 votes) · LW · GW

Roughly my feelings: https://elicit.ought.org/builder/trBX3uNCd

Reasoning: I think lots of people have updated too much on GPT-3, and that the current ML paradigms are still missing key insights into general intelligence. But I also think enough research is going into the field that it won't take too long to reach those insights.

Comment by adele-lopez-1 on How much can surgical masks help with wildfire smoke? · 2020-08-21T16:14:26.228Z · score: 11 (5 votes) · LW · GW

I think an arbitrary kind of mask is effective for COVID-19 largely because of the fluid dynamics: the virus-carrying particles are concentrated in exhaled droplets near their source, so even crude filtering near the mouth intercepts most of them.

If the air has the particles you want to avoid evenly distributed throughout (as with smoke), then this model predicts you'll miss out on most of the benefit of a mask which does the appropriate filtering. So it's probably not worth using surgical masks for smoke.

Comment by adele-lopez-1 on Do we have updated data about the risk of ~ permanent chronic fatigue from COVID-19? · 2020-08-14T22:13:20.720Z · score: 10 (8 votes) · LW · GW

Anecdotal evidence suggests that it is fairly common: https://www.reddit.com/r/COVID19positive/ -- 2 of the 5 top posts from today are from people complaining about experiencing this, and both are full of comments from people personally relating to it. There is obviously going to be a selection bias here, but it seems like a good starting point for estimating a lower bound if you can't find enough good studies.

Comment by adele-lopez-1 on Many-worlds versus discrete knowledge · 2020-08-14T00:43:01.075Z · score: 12 (7 votes) · LW · GW

Say that there is some code which will run two instances of you, one where you see a blue light, and one where you see a green light. The code is run, and you see a blue light, and another you sees a green light. The you that sees a blue light gains indexical knowledge about which branch of the code they're in. But there's no need for the code to have a "reality" index parameter to allow them to gain that knowledge. You implicitly have a natural index already: the color of light you saw. I don't see why someone living in a Many-Worlds universe wouldn't be able to do the equivalent thing.
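A toy sketch of this (entirely my own illustration; representing a "mind" as a dict of observations is an arbitrary choice):

```python
import copy

def run_branches(mind):
    """Run two instances of the same mind, differing only in what they see.
    Note there is no 'reality' index parameter anywhere in this function."""
    branches = []
    for light in ("blue", "green"):
        instance = copy.deepcopy(mind)
        instance["observations"].append(light)
        branches.append(instance)
    return branches

def which_branch_am_i(instance):
    """Each instance recovers an index purely from its own observation --
    the indexical knowledge described above."""
    return instance["observations"][-1]

for instance in run_branches({"observations": []}):
    print("I must be in the", which_branch_am_i(instance), "branch")
```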

So I guess I would say that, in some sense, once you've figured out the rules, measurements don't give you any knowledge about the wave function; they just give you indexical knowledge.

Comment by adele-lopez-1 on Adele Lopez's Shortform · 2020-08-08T19:49:34.113Z · score: 2 (1 votes) · LW · GW

It seems that privacy potentially could "tame" a not-quite-corrigible AI. With a full model, the AGI might receive a request, deduce that activating a certain set of neurons strongly would be the most robust way to make you feel the request was fulfilled, and then design an electrode set-up to accomplish that. Whereas the same AI with a weak model wouldn't be able to think of anything like that, and might resort to fulfilling the request in a more "normal" way. This doesn't seem that great, but it does seem to me like this is actually part of what makes humans relatively corrigible.

Comment by adele-lopez-1 on Adele Lopez's Shortform · 2020-08-08T19:39:11.571Z · score: 6 (3 votes) · LW · GW

Privacy as a component of AI alignment

[realized this is basically just a behaviorist genie, but posting it in case someone finds it useful]

What makes something manipulative? If I do something with the intent of getting you to do something, is that manipulative? A simple request seems fine, but if I have a complete model of your mind, and use it to phrase things so that you do exactly what I want, that seems to have crossed an important line.

The idea is that using a model of a person that is *too* detailed is a violation of human values. In particular, it violates the value of autonomy, since your actions can now be controlled by someone using this model. And I believe that this is a significant part of what we are trying to protect when we invoke the colloquial value of privacy.

In ordinary situations, people can control how much privacy they have relative to another entity by limiting their contact with them to certain situations. But with an AGI, a person may lose a very large amount of privacy from seemingly innocuous interactions (we're already seeing the start of this with "big data" companies improving their advertising effectiveness by using information that doesn't seem that significant to us). Even worse, an AGI may be able to break the privacy of everyone (or a very large class of people) by using inferences based on just a few people (leveraging perhaps knowledge of the human connectome, hypnosis, etc...).

If we could reliably point to specific models an AI is using, and have it honestly share its model structure with us, we could potentially limit the strength of its model of human minds. Perhaps we could even have it use a hardcoded model limited to knowledge of the physical conditions required to keep a person healthy. This would mitigate issues such as deliberate deception or mindcrime.

We could also potentially allow it to use more detailed models in specific cases. For example, we could let it use a detailed mind model to figure out what is causing depression in a specific case, but it would have to use the limited model in any other context, and for any planning aspects of the task. Not sure if that example would work, but I think that there are potentially safe ways to have it use context-limited mind models.

Comment by adele-lopez-1 on Adele Lopez's Shortform · 2020-08-04T00:59:24.868Z · score: 8 (4 votes) · LW · GW

Half-baked idea for low-impact AI:

As an example, imagine a board that's anchored directly into a wall (no other support structures). If you make it twice as wide, then it will be twice as stiff, but if you make it twice as thick, then it will be eight times as stiff. On the other hand, if you make it twice as long, it will be eight times more compliant.

In a similar way, different action parameters will have scaling exponents (or more generally, functions). So one way to decrease the risk of high-impact actions would be to make sure that the scaling exponent is bounded above by a certain amount.
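As a rough sketch of what checking that bound might look like (everything here -- the names, the finite-difference estimator, the toy impact functions -- is an illustrative assumption of mine, not a worked-out proposal):

```python
import numpy as np

def scaling_exponent(impact_fn, param, eps=1e-4):
    """Estimate the local scaling exponent d(log impact)/d(log param)
    with a central finite difference in log-log space."""
    lo, hi = param * (1 - eps), param * (1 + eps)
    return (np.log(impact_fn(hi)) - np.log(impact_fn(lo))) / (np.log(hi) - np.log(lo))

def is_low_impact(impact_fn, param, max_exponent=2.0):
    """Flag an action parameter as low-impact if impact scales no faster
    than param**max_exponent near the proposed operating point."""
    return scaling_exponent(impact_fn, param) <= max_exponent

# Toy check with the board example: stiffness scales as thickness**3,
# so thickness would be flagged, while width (exponent 1) would not.
print(scaling_exponent(lambda t: t**3, 1.0))  # ~3.0 -> flagged
print(is_low_impact(lambda w: w, 1.0))        # True
```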

Anyway, to even do this, you still need to make sure the agent's model is honestly evaluating the scaling exponent. And you would still need to define this stuff a lot more rigorously. I think this idea is more useful in the case where you already have an AI with high-level corrigible intent and want to give it a general "common sense" about the kinds of experiments it might think to try.

So it's probably not that useful, but I wanted to throw it out there.

Comment by adele-lopez-1 on Free Educational and Research Resources · 2020-07-31T06:01:36.761Z · score: 4 (3 votes) · LW · GW

http://www.av8n.com/ has lots of content on physics (including a book about aviation that goes into physics, a thermodynamics book, and a book on spacetime) and math, along with some essays on pedagogy. The real reason it stands out though, is because it explains things in an exceptionally deconfused way.

If you enjoyed the LW physics sequences, and wanted more, this is where you want to be!

Comment by adele-lopez-1 on Raemon's Shortform · 2020-07-30T18:32:13.696Z · score: 6 (3 votes) · LW · GW

I wrote a thing about this.

https://www.lesswrong.com/posts/6wkY2DcCnzNyJTDsw/looking-for-answers-about-quantum-immortality?commentId=b3ZLzjSYWhHsMEYRr

Comment by adele-lopez-1 on What are the risks of permanent injury from COVID? · 2020-07-07T22:42:53.113Z · score: 4 (5 votes) · LW · GW

Pulmonary fibrosis seems to be a fairly common outcome of COVID-19 (especially if you needed hospitalization). According to Wikipedia, "Life expectancy is generally less than five years."

Comment by adele-lopez-1 on Covid 6/25: The Dam Breaks · 2020-06-26T00:19:24.489Z · score: 6 (3 votes) · LW · GW

The West region now seems to be showing signs of important subregional distinctions: in particular (IIRC), a lot of the increase in cases in the West is being driven by increases in southern California.

Comment by adele-lopez-1 on Intuitive Lagrangian Mechanics · 2020-06-21T04:59:02.554Z · score: 6 (3 votes) · LW · GW

FWIW this did make Lagrangian mechanics feel more intuitive for me. I wish it showed the derivation of the classical version as a second order approximation though.

Comment by adele-lopez-1 on The "hard" problem of consciousness is the least interesting problem of consciousness · 2020-06-08T20:33:38.879Z · score: 8 (4 votes) · LW · GW

Yes! Subjective experience has a topology. The neighborhoods are just all the collections of qualia that feel some amount of similar, for any sort of amount that feels different. And the "shape" of the #2 color is different because it is not homeomorphic to the #1 color.

There's also (something like) a metric space structure on this topology, since different things feel different amounts of different. People seem to have variation in this while still having the same topological structure as others.

My hypothesis about the "mysterious redness of red" is that it feels striking in part because it has a more mathematically interesting/complex homotopy type, and that you could in principle give people entirely new qualia by arranging their neurons to experience a novel homotopy type.

Comment by adele-lopez-1 on Why Rationalists Shouldn't be Interested in Topos Theory · 2020-05-26T00:14:18.312Z · score: 9 (4 votes) · LW · GW

I've run into this too, and I think that quasitopoi are also a dead end for this sort of thing. I'm currently interested in linear logic as well!

Comment by adele-lopez-1 on Comment on "Endogenous Epistemic Factionalization" · 2020-05-20T21:34:36.411Z · score: 20 (6 votes) · LW · GW

I wonder if this would still happen if, say, 1 in 1000 agents randomly lie about their evidence (always in the same direction), and all agents start with the correct prior on trustworthiness and do the correct update when they disagree. I'd guess that there's some threshold percentage of untrustworthy agents above which you get factions, and below which you get convergence.
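Here's a rough sketch of how one might test that (a heavily simplified toy of my own, not the paper's actual model; in particular, the likelihood-weighted update below stands in for a proper update on trustworthiness):

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

def simulate(n_agents=50, liar_frac=0.05, rounds=200, p_true=0.6, trials=10):
    """Each agent estimates the bias p of a coin (truly p_true). Every round
    each agent flips `trials` times and reports the count; liars always
    report 0 heads (lying in one fixed direction). Honest agents do a Beta
    update on everyone's reports, down-weighting each report by its
    likelihood under their own current estimate -- the mistrust-of-
    disagreement ingredient that drives factionalization."""
    is_liar = rng.random(n_agents) < liar_frac
    alpha = np.ones(n_agents)  # per-agent Beta(alpha, beta) belief about p
    beta = np.ones(n_agents)
    for _ in range(rounds):
        reports = np.where(is_liar, 0, rng.binomial(trials, p_true, n_agents))
        for i in np.flatnonzero(~is_liar):
            p_i = alpha[i] / (alpha[i] + beta[i])
            w = binom.pmf(reports, trials, p_i)  # plausibility of each report
            w = w / w.max()
            alpha[i] += (w * reports).sum()
            beta[i] += (w * (trials - reports)).sum()
    return alpha / (alpha + beta), is_liar

estimates, liars = simulate()
print("honest agents' estimates of p:", np.round(estimates[~liars], 3))
# Sweep liar_frac to look for a threshold where estimates split into factions.
```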

Looking at the picture of the factions, it looks like you can tell (fairly well at least) which corner is correct from the global structure. Maybe there's some technique you could use in a more general situation to determine what the correct combination of claims is, based on what sort of factions (including their most likely internal disagreements) are organized around them.

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-18T00:01:18.397Z · score: 3 (4 votes) · LW · GW

nope, they are two different problems with two different books recommended

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-17T23:59:54.385Z · score: 7 (5 votes) · LW · GW

The main emotion that was a problem was feeling very hurt/insecure over some perceived slight or something, which resulted in in-the-moment reactions, like crying or getting upset with someone.

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-17T19:21:16.669Z · score: 4 (2 votes) · LW · GW

I solved the same problem by using Dvorak.

I really love my Ultimate Hacking Keyboard which looks pretty similar to the Ergodox EZ one.

Comment by adele-lopez-1 on What was your reasoning for deciding whether to raise children? · 2020-05-17T03:43:25.847Z · score: 5 (3 votes) · LW · GW

Ever since childhood, I've loved being with kids and wanted to raise my own someday. It just feels like an inherently good and meaningful thing for me to do.

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-17T02:32:45.181Z · score: 33 (24 votes) · LW · GW

I used to have really strong emotions that could be triggered by trivial things, which caused both me and the people I was around a lot of suffering.

I managed to permanently stop this, reducing my emotional suffering by about 90%! I did this by resolving to completely own and deal with my emotions myself, and told relevant people about this commitment. Then I was just pretty miserable and lonely feeling for about 3 months, and then these emotional reactions just stopped completely without any additional effort. I think I permanently lowered my level of neuroticism by doing this.

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-17T02:24:42.683Z · score: 10 (7 votes) · LW · GW

I had bad carpal tunnel pain and RSI due to my coding job 3 years ago, to the point where it was very painful to type, and moderately painful all the time. I was worried I would have to find a new career.

I solved it by seeing David Bacome at Psoas in SF. After about 7 sessions the pain went away completely. He also taught me how to do some exercises to help prevent it from happening again, which I do whenever I start feeling lots of tension in my wrists. It hasn't been an issue since then, and I have no problem using a keyboard for both work and many of my hobbies.

Comment by adele-lopez-1 on What are your greatest one-shot life improvements? · 2020-05-17T02:20:12.885Z · score: 10 (8 votes) · LW · GW

A friend tells me that they would get a new cold sore every ~3 weeks during winter months. After reading this paper (https://proceedings.med.ucla.edu/wp-content/uploads/2016/03/A151218DG-WH-edited.pdf), they told the local pharmacist that they needed the chickenpox vaccine since they never had it as a child (which was a lie). Since then (about 3 years), they have only had one cold sore, which was much milder than the previously typical ones.

As a side note: it seems to me like it would be worth trying this as a pre-exposure prophylactic for genital herpes if you have sex with multiple people.

Comment by adele-lopez-1 on Insights from Euclid's 'Elements' · 2020-05-15T23:25:26.973Z · score: 2 (1 votes) · LW · GW

I also find the long S super annoying, but it at least should be pretty easy to make a browser plugin or something to replace 'ſ' with 's' everywhere.

Comment by adele-lopez-1 on Topological metaphysics: relating point-set topology and locale theory · 2020-05-01T20:56:57.126Z · score: 5 (3 votes) · LW · GW

Another way to make it countable would be to instead go to the category of posets. Then the rational interval basis is a poset with a countable number of elements, and by the Alexandroff construction it corresponds to the real line (or at least something very similar). But this construction gives a full and faithful embedding of the category of posets into the category of spaces (which basically means you get all and only the continuous maps arising from monotonic functions).
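(For reference, the Alexandroff construction here is the standard one, stated with the up-set convention; some authors use down-sets instead:

```latex
\text{For a poset } (P, \le), \text{ declare } U \subseteq P \text{ open}
\iff \big( x \in U \text{ and } x \le y \big) \Rightarrow y \in U,
\quad \text{i.e. the open sets are exactly the up-closed sets.}
```

A map between Alexandroff spaces is then continuous iff it is monotone, which is where the full and faithful embedding comes from.)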

I guess the ontology version in this case would be the category of prosets. (Personally, I'm not sure that "ontology of the universe" isn't a type error.)

Comment by adele-lopez-1 on Holiday Pitch: Reflecting on Covid and Connection · 2020-04-23T00:27:52.523Z · score: 3 (2 votes) · LW · GW

"Social Distancing Day" feels like the most natural name for it to me.

I think just "Distance" or "Inside" is not enough for people to immediately know what it's a reference to.

"Flatten the Curve Day"

"Lockdown Day"

"Pandemic Resistance Day"

Comment by adele-lopez-1 on Helping the kids post · 2020-04-21T20:58:12.714Z · score: 4 (2 votes) · LW · GW

I love this! I'd like to do something like this with my kids when they're old enough.

One thing seemed a little off: does Lily really describe herself as "quite the chatterbox"?

When I was her age, I really really loved the periodic table. It was so interesting to realize that (almost) everything was made out of all these different things, and that you could organize them in a colorful way. I named all of my stuffies "Boron" because I was 5 and it was therefore my favorite element.

Comment by adele-lopez-1 on What are the Nth Order Effects of the Coronavirus? · 2020-04-07T19:42:48.373Z · score: 5 (3 votes) · LW · GW

Start of the golden age of animation?

Comment by adele-lopez-1 on What should we do once infected with COVID-19? · 2020-03-21T01:36:38.931Z · score: 9 (4 votes) · LW · GW

Wei Dai's first link was a doc with medical guidelines written by people with medical expertise (though not explicitly for civilians; I would expect legal risk to deter medical professionals from making guidelines for civilian use). That link is now dead, but archived here.

It included the South Korean guidelines:

According to the Korea Biomedical Review, the South Korean COVID-19 Central Clinical Task Force guidelines are as follows:

1. If patients are young, healthy, and have mild symptoms without underlying conditions, doctors can observe them without antiviral treatment;
2. If more than 10 days have passed since the onset of the illness and the symptoms are mild, physicians do not have to start an antiviral medication;
3. However, if patients are old or have underlying conditions with serious symptoms, physicians should consider an antiviral treatment. If they decide to use the antiviral therapy, they should start the administration as soon as possible: chloroquine 500mg orally per day;
4. As chloroquine is not available in Korea, doctors could consider hydroxychloroquine 400mg orally per day (hydroxychloroquine is an analog of chloroquine used against malaria, autoimmune disorders, etc. It is widely available as well);
5. The treatment is suitable for 7-10 days, which can be shortened or extended depending on clinical progress.

Notably, the guidelines mention other antivirals as further lines of defense, including anti-HIV drugs.

My current strategy is to follow these guidelines (with hydroxychloroquine + zinc) if medical treatment is unavailable, there's strong evidence that the illness is COVID-19, and serious COVID-19 symptoms are present. I'll also have activated charcoal on hand to help mitigate accidental overdoses. I'm trying my best to familiarize myself with the risks involved so that I can make good decisions if the situation calls for it. Of course, my primary strategy is prevention in the first place.

Comment by adele-lopez-1 on What should we do once infected with COVID-19? · 2020-03-21T00:52:35.154Z · score: 4 (2 votes) · LW · GW

BTW, the google doc appears to have been taken down due to a TOS violation.

Comment by adele-lopez-1 on What should we do once infected with COVID-19? · 2020-03-21T00:50:48.952Z · score: 4 (4 votes) · LW · GW

You can still buy hydroxychloroquine (as of March 20th) here: https://fixhiv.com/shop/coronavirus-drugs/hcqs-400-hydroxychloroquine-400-mg/ (it imports from India). This site also lets you easily buy a prescription for it, FWIW.

Check for G6PD deficiency before taking chloroquine, as it can cause haemolysis in people with the deficiency (the check can be done through the 23andMe interface). Apparently this is not an issue with hydroxychloroquine: https://www.ncbi.nlm.nih.gov/pubmed/28556555

Comment by adele-lopez-1 on What should we do once infected with COVID-19? · 2020-03-21T00:43:18.754Z · score: 10 (5 votes) · LW · GW

Just because something is dangerous in overdose doesn't mean that medical supervision is needed: for example acetaminophen, or even water. The relevant thing is that the therapeutic dose is close to the lethal dose for chloroquine, and chloroquine dosing is complicated.

Hydroxychloroquine is 40% less toxic while still being effective, according to this article: https://www.nature.com/articles/s41421-020-0156-0

Medical supervision may not be available if current trends continue, so we must carefully weigh the options available to us.

Comment by adele-lopez-1 on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-19T00:38:17.256Z · score: 3 (2 votes) · LW · GW

That sounds right to me.

Comment by adele-lopez-1 on Does the 14-month vaccine safety test make sense for COVID-19? · 2020-03-18T19:46:18.538Z · score: 15 (10 votes) · LW · GW

From what I can tell, it looks like the main danger is with a live vaccine, where the vaccine can give the disease to a large number of people (the biggest actual disaster seems to have been the Cutter incident, which infected 40,000 people with polio).

I assume that the trial is also there to catch potential black swan issues.

IIRC the COVID-19 vaccines on trial are not live, so the case for doing the 14-month watch was not as strong as I expected. Certainly worth considering more carefully at least.

Comment by adele-lopez-1 on Category Theory Without The Baggage · 2020-03-04T18:55:43.032Z · score: 5 (3 votes) · LW · GW

This is more mathematically justified than you seem to think. Posets are topological spaces and categories, and every space is weakly homotopy equivalent to a poset space, which explains why the intuition works so well.

Comment by adele-lopez-1 on Category Theory Without The Baggage · 2020-03-04T18:50:37.033Z · score: 1 (2 votes) · LW · GW
> the traditional presentation of category theory is perfectly adapted to its original purpose

I think this is too generous. The traditional way of conceptualizing a given math subject is usually just a minor modification of the original conceptualization. There's a good reason for this, which is that updating the already known conceptualization across a community is a really hard coordination problem -- but this also means that the presentation of subjects has very little optimization pressure towards being more usable.

Comment by adele-lopez-1 on What are the merits of signing up for cryonics with Alcor vs. with the Cryonics Institute? · 2020-02-28T06:54:54.665Z · score: 3 (2 votes) · LW · GW

I'm planning to go with ACS, which is a lesser-known cryonics organization that has been around longer than Alcor and CI. The price for a full suspension is $155,000, which is in between the CI and Alcor prices.

They don't actually run their own facilities; instead, they contract with other organizations (currently CI) to hold the vitrified bodies. For doing suspensions, they seem to have their own procedure, and you can additionally choose to have them contract other organizations such as Suspended Animation Inc. (which is the one Alcor uses).

Since they contract, they have increased flexibility, which seems quite valuable. In particular, it helps against organizational incompetence, of which both Alcor and CI seem to have their fair share. It's harder to find info about the competence of ACS themselves, but the fact that they've been around a long time bodes slightly well.

They also sponsor cryonics research, which is really cool.

Anyway, I'd really appreciate having more people analyze them as a cryonics option before I commit to them!

Comment by adele-lopez-1 on What are the risks of having your genome publicly available? · 2020-02-12T05:19:09.848Z · score: 13 (7 votes) · LW · GW

There's an upper limit on how relatively bad it can be, due to the fact that you are already shedding copies of your genome in public all the time.

Comment by adele-lopez-1 on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-28T06:49:37.813Z · score: 5 (3 votes) · LW · GW

Yes, lol :)

Comment by adele-lopez-1 on Honoring Petrov Day on LessWrong, in 2019 · 2019-09-27T04:46:21.931Z · score: 16 (6 votes) · LW · GW

I noticed after playing a bunch of games of a mafia-type game with some rationalists that when people made edgy jokes about being in the mob or whatever, they were more likely to end up actually being in the mob.

Comment by adele-lopez-1 on Novum Organum: Introduction · 2019-09-22T21:07:44.759Z · score: 6 (3 votes) · LW · GW

What schedule are you going to be posting these on? I've been eagerly looking forward to the next installment!

Comment by adele-lopez-1 on Looking for answers about quantum immortality. · 2019-09-12T04:58:52.470Z · score: 6 (3 votes) · LW · GW

[Note: potential info hazard, but probably good to read if you already read the question.]

[Epistemic status: this stuff is all super speculative due to the nature of the scenarios involved. Based on my understanding of physics, neuroscience, and consciousness, I haven't seen anything that would rule this possibility out.]

> All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?

FWIW, I've thought about this a lot and independently came up with and considered all the scenarios mentioned in the Turchin excerpt. It used to really really freak me out, and I believed it on a gut level. Avoiding this kind of outcome was my main motivation for actually getting the insurance for cryonics (the part I was previously cryocrastinating on). However, I now believe that QI is not an s-Risk and don't feel personally worried about the possibility anymore.

One thing to note is that this is a potential problem in any sufficiently large universe, and doesn't depend on a many-worlds-style interpretation being correct. Tegmark has a list of various multiverses, which differ in what scenarios we might face. I do believe in many-worlds (as a broad category of interpretations), though.

Lots of the comments here seem confused about how this works, so I'll recap. If I'm at the point of death where I'm still conscious, the next moment I'll experience will be (in expectation) whatever conscious state has the highest probability mass in the multiverse while also being a valid next conscious moment from the previous one. Note that this next conscious moment is not necessarily in the future of the previous moment. If the multiverse contains no such moments, then we would just die the normal way. If the multiverse includes lots of humans doing ancestor simulations, you could potentially end up in one of those, etc. The key is that out of all conscious beings in the multiverse who feel like this just happened to them, those are (tautologically) the ones having the subjective experience of the next valid conscious moment. And it's valid to care about these potential beings; AFAICT this is the same reason I care about my future selves (who do not exist yet) in the normal sense.

Regarding cryonics, it seems like the best way to preserve a significant amount of information about my last conscious moment. To whatever extent information about this is lost, a civilization that cares about this could optimize for the likelihood of being a valid next conscious moment. I think this is the main actionable thing you can do about it. Of course, this only passes the buck to the future, since there is still the inevitable heat death of the universe to contend with.

Another scenario that seems especially plausible for sudden deaths is Aranyosi's. In this case, the highest-probability-mass next conscious moment will be a moment based on the moment from a few seconds before, but with a "false" memory of having survived a sudden death. This has relatively high probability because people sometimes report having this kind of experience when they have a close call. But this again simply passes the buck to the future, where you're most likely to die from a gradual decline.

However, I think that by far the most likely situation is one common to death by aging, illness, or the heat death of the universe. At the last moment of consciousness, the only next conscious moments left will be in highly improbable worlds. But which world you are most likely to "wake up" in is still determined by Occam's razor. People seem to imagine that these improbable worlds will be ones where your consciousness remains in a similar state to the one you died in, but I think this is wrong.

Think carefully about what things are actually happening to support a conscious experience. Some minimal set of neurons would need to be kept functional -- but beyond that, we should expect entropy to affect things which are not causally upstream of the functionality of this set of neurons. Since strokes happen often, and don't always cause loss of consciousness, we can expect them to eventually occur in every non-essential (for consciousness) region of the brain. Because people can experience nerve damage to their sensory neurons without losing consciousness, we can expect that the ability to experience physical pain will decay. Emotional pain doesn't seem to be that qualitatively different from physical pain (e.g. it is also mitigated by NSAIDs), so I expect this will be true for pain in general.

So most of your body and most of your mind will still decay as normal; only the absolutely essential neuronal circuitry (and whatever else, perhaps blood circulation) needed to induce a valid next conscious moment will miraculously survive. Anesthesia works by globally reducing synapse activity, so the initial stages of this would likely feel like going under anesthesia, but where you never quite go out. Because anesthetics stop pain (remember, this is still true when applied locally), and because by default we do not experience pain, I'm now pretty sure that, given QI being real, infinite agony is very unlikely.

Comment by adele-lopez-1 on Soft takeoff can still lead to decisive strategic advantage · 2019-08-23T20:10:24.811Z · score: 4 (2 votes) · LW · GW

Yeah, I think the engineer intuition is the bottleneck I'm pointing at here.

Comment by adele-lopez-1 on Actually updating · 2019-08-23T18:31:30.017Z · score: 4 (2 votes) · LW · GW

This rings really true with my own experiences; glad to see it written up so clearly!

I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo's post). Anyway, I think it's really great to have multiple different frames for approaching this kind of goal!

Comment by adele-lopez-1 on Thoughts from a Two Boxer · 2019-08-23T18:05:57.218Z · score: 5 (3 votes) · LW · GW

I think people make decisions based on accurate models of other people all the time. I think of Newcomb's problem as the limiting case where Omega's predictions are extremely accurate, but the solution is still relevant even when "Omega" is only 60% likely to guess correctly. A fun illustration of a computer program capable of predicting (most) humans this accurately is the Aaronson oracle.
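For intuition, the core of such a predictor fits in a few lines (my own reconstruction of the idea, not Aaronson's actual code):

```python
from collections import defaultdict
import random

class AaronsonOracle:
    """Predict a human's next 'f'/'d' keypress from what has tended to
    follow the last few keys pressed so far."""
    def __init__(self, context_len=4):
        self.context_len = context_len
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})
        self.history = ""
        self.correct = 0
        self.total = 0

    def predict(self):
        c = self.counts[self.history[-self.context_len:]]
        if c["f"] == c["d"]:
            return random.choice("fd")  # no data yet: guess at random
        return "f" if c["f"] > c["d"] else "d"

    def observe(self, key):
        self.total += 1
        self.correct += self.predict() == key
        self.counts[self.history[-self.context_len:]][key] += 1
        self.history += key

oracle = AaronsonOracle()
for key in "fdfdffdfddfffdfd":  # stand-in for a human's keypresses
    oracle.observe(key)
print(oracle.correct / oracle.total)  # against real humans this climbs well above 0.5
```

Against a truly random input stream it can't beat 50%, which is exactly why it works as a demonstration that human "random" keypresses aren't.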

Comment by adele-lopez-1 on Soft takeoff can still lead to decisive strategic advantage · 2019-08-23T17:22:54.477Z · score: 15 (6 votes) · LW · GW

This post has caused me to update my probability of this kind of scenario!

Another issue related to the information leakage: in the industrial revolution era, 30 years was plenty of time for people to understand and replicate leaked or stolen knowledge. But if the slower team managed to obtain the leading team's source code, it seems plausible that 3 years, or especially 0.3 years, would not be enough time to learn how to use that information as skillfully as the leading team can.

Comment by adele-lopez-1 on What supplements do you use? · 2019-07-29T03:10:51.779Z · score: 8 (5 votes) · LW · GW

Is there a reason not to take it if you're younger than 40?