Comments

Comment by dpiepgrass on Luna Lovegood and the Chamber of Secrets - Part 3 · 2020-12-02T05:06:53.952Z · LW · GW

Especially as no character has given a reason to suspect any sort of "perception filter" a la Doctor Who. Incidentally, didn't Hogwarts often reconfigure itself in HPMOR? Seems odd, then, that Fred/George believe they've seen it all.

Comment by dpiepgrass on What should experienced rationalists know? · 2020-10-15T18:56:10.085Z · LW · GW

Thank you for this valuable overview, it's worth bookmarking.

The link in section 3 does not support the idea that humans don't suffer from a priming effect (this may not have been what you meant, but that's how it sounds). Rather, the studies are underpowered and there is evidence of positive-result publication bias. This doesn't mean the published results are wrong, it means 'grain of salt' and replication is needed. LWers often reasonably believe things on less evidence than 12 studies.

Comment by dpiepgrass on Dark Side Epistemology · 2020-07-03T07:55:20.917Z · LW · GW

Yeah, this was a good discussion, though unfortunately I didn't understand your position beyond a simple level like "it's all quarks".

On the question of "where does a virtual grenade explode", to me this question just highlights the problem. I see a grenade explosion or a "death" as another bit pattern changing in the computer, which, from the computer's perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about "beauty" and "love" and "being in pain", but it seems to me that nothing can really matter to the computer because it can't really feel anything. I once wrote software which actually had a concept that I called "pain". So there were "pain" variables and of course, I am confident this caused no meaningful pain in the computer.

I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of "nothing really matters: suffering is just an illusion" or, less likely, "pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter", though I have no idea how this could be true.

* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word "elephant" comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain's computations: a holistic sense of elephant-ness (and I feel as though I "understand" this output—even though I don't understand what "understanding" is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of "consciousness" that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie "Being John Malkovich", and having recently heard of the "thousand brains theory", I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one "huge" particle.

Comment by dpiepgrass on Procedural Knowledge Gaps · 2020-07-03T06:40:15.252Z · LW · GW

I don't know of a good content aggregator. I guess I would like to see a personalized web site which shows me all the posts/articles from all the good blogs and publishers I know about.

RSS readers are a good start, but not every site has a proper feed (with full, formatted article text and images) and usually the UI isn't what I want (e.g. it might be ugly compared to viewing the site in a browser; also I'd like to be able to see a combined feed of everything rather than manually selecting a particular blog). In the past, I needed caching for offline viewing on a phone or laptop, but mobile internet prices have come down so I bit the bullet and pay for it now. I wonder what tools people here like?

I also wish I had a tool that would index all the content I read on the internet. Often I want to find something I have read before, e.g. to show it to someone with whom I'm conversing, but AFAIK there is no tool for this.

Another tool I wish for is a public aggregator: when I find a good article (or video) I want to put it on a public feed that is under my own control. Viewed in a web browser, ideally the feed would look like a news site, or a blog, or a publication on medium.com. And then someone else could add my "publication" to their own RSS reader, and the ideal RSS reader would produce a master feed that deduplicates (but highlights) content that multiple people (to whom I subscribe) have republished (I was on Twitter yesterday and got annoyed when it showed me the same damn video, retweeted by various people, like 15 times).
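
As a rough illustration of the combined, deduplicated feed I have in mind, here is a minimal sketch using the feedparser library (the feed URLs are placeholders, and real republished items would need smarter matching than exact-link deduplication):

import feedparser

FEED_URLS = [
    "https://example.com/blog-a/feed.xml",   # placeholder subscriptions
    "https://example.com/blog-b/rss",
]

seen = {}
for url in FEED_URLS:
    for entry in feedparser.parse(url).entries:
        # keep one copy of each item, but remember every subscription that carried it
        seen.setdefault(entry.get("link", ""), []).append(url)

for link, sources in seen.items():
    highlight = " (shared by several subscriptions)" if len(sources) > 1 else ""
    print(link + highlight)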

Comment by dpiepgrass on 3 Levels of Rationality Verification · 2020-06-26T18:49:04.412Z · LW · GW

Nothing makes me want to upvote someone like a downvote-without-comment on a post that seems vaguely reasonable.

Comment by dpiepgrass on Dark Side Epistemology · 2020-06-26T17:56:55.075Z · LW · GW

"Car" isn't an adjective just because there's a "Car factory"; Consider: *"the factory is tall, car, and red".

Do you expect this to change?

Yes, but I expect it to take a long time because it's so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering... ideas about consciousness should at least be more testable than ideas about how the universe began. Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment's negative result and the explanation of that negative result (Einstein's Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn't fit together with relativity, and dark matter and dark energy remain mysterious... just knowing there's a problem doesn't always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn't mean I expect an easy solution. Assuming illusionism, I wouldn't expect a full explanation of that to be found anytime soon either.

postulating a qualia particle

It was just postulation. I wouldn't rule out panpsychism.

Chalmers seems not to believe in a consciousness without physical effects - see his 80,000 Hours interview. So Yudkowsky's description of Chalmers' beliefs seems to be either flat-out wrong, or just outdated.

Namely - before the question of consciousness is solved, the qualia particle will be found.

I do hope we solve this before letting AGIs take over the world, since, if I'm right, they won't be "truly" conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.

Comment by dpiepgrass on 3 Levels of Rationality Verification · 2020-06-26T16:43:08.612Z · LW · GW

To make things more interesting, measure the pre-existing biases of the test-taker and then... give bonus points for assumptions and issues mentioned by the test-taker that are contrary to their own bias? e.g. if they are predisposed to be against nuclear power then a comment like "Regulations passed after Three Mile Island probably increase safety a lot in newer reactors" would count in their favor, whereas if they are predisposed to be in favor of nuclear power, mentioning risks of nuclear waste would count in their favor. Also, correctly including factors in their model that are contrary to their bias (e.g. +1 if their preconception is against nuclear but they correctly identify the rate of non-CLL leukemia (14*2/3 or 1.5%*2/3) and use that number to estimate the risk, rather than mixing up non-CLL with total leukemia). A special case, common outside LessWrong: failure to identify any factors contrary to their bias is a red flag. Another red flag: isolated demands for rigor / questioning studies only when the conclusion is disliked.

A problem with my style here, especially re: the final two questions, is the difficulty of automated testing. It's tempting to convert to a multiple-choice test, yet we want participants to generate their own ideas. A compromise for the sake of automation: gather hundreds of reasonable ideas from initial test-takers, and identify searchable keywords that will, when typed, find those ideas. Then test-takers can type (complete) keywords to find and add pre-existing ideas as their answers.
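
A minimal sketch of that keyword lookup, with an invented idea bank drawn from the sort of answers that appear below:

# hypothetical bank of ideas gathered from earlier test-takers, keyed by searchable keywords
IDEA_BANK = {
    "screening": "Increased screening raises diagnosis rates without raising cancer rates.",
    "waste": "Nuclear waste adds long-term risk beyond the accident itself.",
    "terrorism": "Terrorists might try to steal nuclear fuel, though reactor-grade uranium is not directly usable in a bomb.",
}

def lookup(typed_keywords):
    # return the pre-existing ideas whose (complete) keywords the test-taker typed
    return [IDEA_BANK[k] for k in typed_keywords if k in IDEA_BANK]

print(lookup(["screening", "waste"]))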

Comment by dpiepgrass on 3 Levels of Rationality Verification · 2020-06-26T08:48:50.832Z · LW · GW

How about a test that causes people to build and use mental models and formulas? People are asked to estimate (primarily numeric) facts based on other facts. In each question, give people a set of "measured facts"* and ask them to estimate more relevant facts/consequences via back-of-envelope calculations (or a computer program, for more precision). But unlike a normal math word problem, set up the test so that, say, 2/3 of the questions cannot be accurately estimated with only the information given. Among that 2/3, half can be accurately estimated by adding some common-sense info (e.g. that most people work about 40 hours a week, that life expectancy is about 80 years, that almost half of American voters vote Republican, etc.), and the other half require more esoteric information that test-takers will rarely have. For all the questions, test-takers need to build a simple mental model or formula that would allow them to do the calculation, state any information they need that is missing, and try briefly to look up the info online in order to compute a reasonable estimate. If they can't do this, they need to express the answer in terms of unknown variables and then guess what the values of the variables are. They must also state relevant assumptions.

This is a means both to improve rationality and to test it.

Example question set:
Background: in some types of accidents at some types of nuclear plants, radioactive substances can be released into the atmosphere (radioactive substances emit ionizing radiation). It is medically plausible that there is no perfectly safe dose of ionizing radiation in human tissue, and that radiation damage to DNA is cumulative, because cells repair some DNA damage very slowly, or never, and this damage can lead to cancer years after radiation exposure. This is known as the linear no-threshold hypothesis: that the health risk is proportional to exposure and there is no safe dose. If residents are promptly evacuated during an accident, the primary risk to their health upon returning will be from long-term exposure to radioactive cesium, which mainly causes a type of cancer called non-CLL leukemia.**
• A metastudy reports that the excess relative risk (ERR) of non-CLL leukemia from 100 mGy of radiation is about 19% (this means that people get non-CLL leukemia 19% more often than normal).**
• The normal rate of leukemia in the U.S. is about 14 diagnoses per 100,000 people per year. About 1.5% of people are diagnosed with leukemia at some point in their lifetime.
• The normal death rate of leukemia is about 6.4 per 100,000 people per year in the U.S.
• One third of leukemia cases are CLL leukemia cases.
• Another study estimates that in the U.S. there are about 16,000 excess deaths annually due to electricity generation emissions, which is a low rate compared to some developing countries. The researchers estimate that 91% of these deaths were the result of emissions from coal-fired power plants.
• There are 328 million people in the U.S. and 7.5 billion in the world.
• About 65% of all electricity worldwide is produced by burning fossil fuels. About 10% of electricity is from nuclear plants and 38.3% is from coal.
• Assume two-thirds of cancer cases and deaths from a nuclear accident occur outside the city where the accident occurred***

Scenario: suppose that another nuclear accident were to happen, one somewhat more serious than Fukushima, inside a city of one million people, in a developed country. Suppose that all evacuated persons return to their homes after one month and, as a result, are exposed to 100 mGy of radiation on average, mostly from cesium. Assume that half of this radiation dose occurs in the first 10 years and that most of it has occurred within 40 years***.

Questions:
1. Estimate the chance that the radiation will cause non-CLL leukemia in a particular, random person in the city at some point in their lives.
2. Estimate the chance that the radiation will kill a particular, random person in the city after they move back.
3. Estimate the total number of non-CLL leukemia cases caused by the radiation (over 40+ years).
4. Estimate the total number of people that will die as a result of the radiation (over 40+ years).
5. Assume that all nuclear accidents worldwide, combined, cause this number of deaths once every 20 years (e.g. in a 20-year period there might be two accidents, each half as serious as this one). What is the expected number of deaths per year in a randomly selected city of about one million people?
6. Estimate the number of excess deaths caused by power generation in that same city (i) per year, and (ii) over a 40-year period, if all its electricity came from fossil fuels instead of the nuclear plant.
7. Brainstorm additional factors that might change your estimates above.
8. Brainstorm other considerations that would be relevant to evaluating safety of nuclear power compared to alternatives.

Example answers:
1. Assumptions: All people have lives of average length (80 years). Age distribution in the city is uniform from 0 to 80. Leukemia risk is elevated uniformly after exposure for the rest of the person's life. All developed countries have similar leukemia rates. Leukemia is diagnosed very soon after it develops. Leukemia risk does not vary by age (this is not true, but on the other hand, I question whether it was appropriate for the metastudy to use ERR instead of excess absolute risk (EAR)). Radiation exposure probably drops off mostly according to cesium's half-life, but to simplify the calculation, assume 50% of the 100 mGy dose is delivered linearly in the first 10 years and the other 50% linearly over the following 30 years.
• Normal non-CLL leukemia risk is 14*2/3 = 9.333 per 100,000 per year
• A random person has on average 40 years of life left (50% of an 80-year lifetime)
• Excess risk of non-CLL leukemia is 19%, so 9.333*0.19 = 1.773 per 100,000 once the full dose happens.
• But there's a long delay before reaching the full dose... integrating over my approximate exposure function, the average dose is a quarter of the full dose during the first 10 years and three-quarters during the next 30, so excess incidence should average 1.773*0.25 per 100,000 in the first 10 years and 1.773*0.75 over the next 30. Neglecting young and old people to simplify the calculation, the lifetime excess is about 1.773*0.25*10 + 1.773*0.75*30 = 44.3 per 100,000 over 40 years, i.e. a lifetime risk of about 0.0443%, or 1 in 2260.
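
The same arithmetic in a few lines of Python, under the same piecewise dose assumption, for anyone who wants to check it:

full_dose_excess = (14 * 2 / 3) * 0.19   # ~1.773 extra non-CLL cases per 100,000 per year at the full 100 mGy
# average dose is 1/4 of the full dose during the first 10 years and 3/4 during the next 30
lifetime_excess_per_100k = full_dose_excess * 0.25 * 10 + full_dose_excess * 0.75 * 30
print(lifetime_excess_per_100k)          # ~44.3 per 100,000, i.e. ~0.044%, about 1 in 2260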

Fun fact 1: Before writing this LessWrong post, I did a calculation like this to learn about the risks of radiation, because I couldn't find any research estimating what I wanted to know. Radiation risks seem to be among the world's best-kept secrets. I'd rather see a peer-reviewed paper answer "how likely is X amount of radiation to kill me" than rely on my "napkin" model, but I haven't found any such research.
Fun fact 2: the answer increases if your starting point is "1.5% of people are diagnosed with leukemia at some point in their lifetime" since "14 per 100,000 people per year" only adds up to 1.12% per 80-year lifetime. I don't know why these numbers don't match up.
Fun fact 3: I should really be using a simple (Monte Carlo) computer model for this with exponential decay of radiation exposure... no idea if it would raise or lower my estimate.

2. (Further) Assumptions: Non-CLL leukemia is the only cause of death driven by radiation. Years of life left after the first cell turns cancerous is negligible. Probably both assumptions are significantly wrong, but the first assumption underestimates deaths and the second overestimates them so it seems like a wash.
• 6.4/14 = 45.7% of cases are fatal so the risk is 0.0443%*0.457 = 0.0202% or 1 in 4939.

3. Assumption: cancer screenings do not increase as a result of the accident (I'm sure this is wrong). There will be about 0.000443*1,000,000 = 443 excess cases in the city and about 443*3 = 1329 excess cases total.
4. There will be about 1329*6.4/14 = about 607 excess deaths total

5. There will be 607/20 = 30.3 deaths worldwide per year from all nuclear accidents. Given a world population of 7.5 billion, that's about 0.004 deaths per year in a city of one million. The risk increases somewhat in cities that contain their own nuclear plant, if the plant is one of the more hazardous (read: old) models.

6. In a random U.S. city, the expected deaths per million from fossil fuels is 16,000/328 = 48.8 per year. (i) Assuming air pollution's effects are mainly local and 100% of power generation comes from fossil fuels, the expectation for a U.S. city is 16,000/328/0.65 = 75 deaths per year due to fossil fuels. (ii) which is about 3000 deaths over a 40-year period (roughly 5x higher than the nuclear meltdown scenario).
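
Chaining answers 2 through 6 in a few lines of Python, using only the inputs above, so the arithmetic is easy to audit:

lifetime_excess_risk = 44.3 / 100_000                      # answer 1
fatality_fraction = 6.4 / 14                               # ~45.7% of leukemia cases are fatal
print(lifetime_excess_risk * fatality_fraction)            # answer 2: ~0.02%, about 1 in 4900
city_cases = lifetime_excess_risk * 1_000_000              # ~443 excess cases in the city
total_cases = city_cases * 3                               # ~1329 including cases outside the city
total_deaths = total_cases * fatality_fraction             # ~607 excess deaths
deaths_per_year_worldwide = total_deaths / 20              # ~30 per year from all nuclear accidents
print(deaths_per_year_worldwide / 7_500)                   # answer 5: ~0.004 per year in a random city of 1M
fossil_per_year_per_million = 16_000 / 328 / 0.65          # ~75 for an all-fossil-fuel U.S. city
print(fossil_per_year_per_million * 40)                    # answer 6 (ii): ~3000 over 40 years, ~5x the ~607 above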

7.
• Increased screening due to concern about the risk will increase the rate of cancer diagnoses, but not rates of cancer, and cancer death rates may be reduced by early detection.
• Radiation could cause other types of cancer deaths (I heard, for example, that short-lived iodine isotopes can cause thyroid cancer, but that this can be mitigated with iodine pills).
• Etc.: I'm getting tired but you get the idea

8.
• Regulations passed after Three Mile Island probably increase safety a lot in newer reactors (but make new plants cost-prohibitive to certify and build)
• Nuclear waste increases long-term risk (but less than most people think, I would add)
• It has been suggested that terrorists could steal nuclear fuel and build a bomb with it. (I don't know if this is remotely plausible, but I do know that reactor-grade uranium is not directly usable in a bomb.)

• Deaths during plant construction and related mining should be similar between nuclear and fossil fuel plants; solar plant construction seems like it should be safer than nuclear, oil/coal, and wind.

• Though deaths from fossil fuels are more numerous, each death is expected to be less bad because it should happen near the end of a person's life due to many years of lung damage, whereas in the nuclear case, some young people will be affected. It's strange to me that fossil fuel deaths are not measured as "years of life lost" instead.

* The "facts" can be real or based on back-of-envelope calculations, but the test-taker is to assume the information is factual. If it is not factual, and concerns the real world, it mustn't be excessively off-the-mark because humans can't simply erase misinformation from our minds so it's best not to intentionally mess with us.
** This is roughly correct AFAIK but I'm not an expert. Also, the metastudy strangely neglects to model time, e.g. it does not say that the risk is elevated for the rest of people's lives, or that it is elevated for X years, or anything time-related like that. I don't see why risk would be elevated for life—if damage will cause a cell to turn cancerous, why would it wait 20 years to do so?—but conservatively this is my mental model anyway. I've seen a study that indicates 100 mGy is more than the average dose avoided by relocating residents of Fukushima; note also that mGy and mSv are the same SI units, so I don't understand the difference.
*** This datum is made-up as I haven't found information about it.

After going through this exercise I think the formulas need to be more explicit... really we should write a program for nontrivial models, e.g....

# TODO: turn into a Monte Carlo simulation
BaseNonCLLLRisk = (14.0 * 2 / 3) / 100_000   # baseline non-CLL leukemia diagnoses per person per year
CesiumHalfLifeYears = 30.17
YearlyDecayFactor = 0.5 ** (1 / CesiumHalfLifeYears)
ERRPermGy = 0.19 / 100                        # 19% excess relative risk per 100 mGy
YearsOfLifeLeft = 40                          # average remaining lifespan of a random resident
# pick the first-year dose so that the 40-year total comes out to ~100 mGy
InitialYearlyDose = 100 * (1 - YearlyDecayFactor) / (1 - YearlyDecayFactor ** YearsOfLifeLeft)
Dose = 0.0
ExcessLifetimeChanceOfCancer = 0.0
for year in range(1, YearsOfLifeLeft + 1):
    Dose += InitialYearlyDose
    InitialYearlyDose *= YearlyDecayFactor
    # excess yearly risk scales with the dose accumulated so far
    ExcessLifetimeChanceOfCancer += BaseNonCLLLRisk * ERRPermGy * Dose
print(Dose)                          # total dose; should come out to ~100 mGy
print(ExcessLifetimeChanceOfCancer)

And there would also need to be numerous easier exercises than this one.

Comment by dpiepgrass on Make an Extraordinary Effort · 2020-06-25T21:52:22.959Z · LW · GW
Every now and then, someone asks why the people who call themselves "rationalists" don't always seem to do all that much better in life

I think this is because, while we have lots of useful theory and advice here about epistemic rationality, we have virtually nothing about instrumental rationality.

1. Success requires taking action, and taking action requires generating options, and I'm not very good at generating options. Little in the way of advice for doing it is given in Rationality A-Z (at least up to this point) beyond "spend a few minutes thinking about it, if it's important." It's good advice, but seems insufficient. (I would kind of expect a leading expert in AGIs to have more ideas about this problem—or are the techniques used by AIs inapplicable to humans?)

2. Motivation is perhaps a larger problem for me. I don't know how motivation works or how to create it in myself; in theory there is a thing I want to create more than anything else in the world; in practice I just haven't felt like doing it for several weeks (perhaps this is because I anticipate no one will see its utility and almost everyone will ignore it as they have in the past, but really the rationale lies in my emotional system, which I cannot introspect, and sometimes my feelings change so that I can work on it again). "Shut up and do the impossible!", says the next post, but this requires quite some motivation.

3. A lot of success in life depends on our relationships with others, and I've never been good at developing relationships, nor has there been much advice about that here. I don't know any aspiring rationalists in person, and I find the poor reasoning of most people to be grating. I can't use terms like "expected value" with others and expect to be understood. Succeeding at standard non-rationalist office politics is one of those things that I'd love to do in theory, but in practice it's unpredictable and mysterious and scary and I lack the will to take the necessary risks (especially having lost my last two jobs, I really want something stable at the moment). I might worry that I'll never be more than a low-level employee, if it would do any good to worry. I often lament that I play the role of a "leaf node" in the game of life—a person no one pays much attention to—but I simply don't know how to fix the problem.

[Robert Aumann, who proved that] Bayesians with the same priors cannot agree to disagree, is a believing Orthodox Jew.

The facile explanation is that people compartmentalize and have biases, but this reminds me, where do priors come from? So far I have not seen any proposals for how to evaluate evidence in a new area of study, let alone how to evaluate evidence "from scratch".

Comment by dpiepgrass on Dark Side Epistemology · 2020-06-23T22:16:17.425Z · LW · GW

So, I think we've cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that. But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

But it does know to interact with mammals and not with trees and diamonds?

I can't be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it's called the "hard problem" for a reason. But illusionism has its own problems. The most obvious problem—that there is no "objective" subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a "boundary" or "experience", but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me. I think you're saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you're fine with it; I'm saying I don't get it.

But perhaps illusionism's consequences are a problem? In particular, in a future world filled with AGIs, I don't see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering "more" than a human, or than another AGI with different code? (I'm not asking for an answer, just asserting that a problem exists.)

Comment by dpiepgrass on My Bayesian Enlightenment · 2020-06-17T22:57:42.941Z · LW · GW

Yes, I'm baffled as well. Eliezer says that the prior P("at least one of them is a boy"|1 boy 1 girl) + P("at least one of them is a girl"|1 boy 1 girl) = 1, which is nonsensical given that, in fact, the mathematician could have said many other things (given 1 boy 1 girl). But even if this were true, it still doesn't tell us the probability P("at least one of them is a boy"|two boys). Regardless of whether she has one boy or two boys, "at least one of them is a boy" is a very unusual thing to say, and it leads me to suppose that she had two children born as boys, one of whom is transgender. But how do I assign a probability to this? No idea.

If the mathematician herself had said "what is the probability that they are both boys?" it becomes more likely that she's just posing a math problem, because she's a mathematician... but that's not how the question was posed, so hmm.
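
A toy enumeration of two ways the utterance could be generated, which I think is the crux (both models assume two children, each independently a boy or a girl with probability 1/2):

from itertools import product

families = list(product("BG", repeat=2))        # BB, BG, GB, GG, all equally likely

# Model 1: she is answering the direct question "is at least one of them a boy?"
says_boy = [f for f in families if "B" in f]
print(sum(f == ("B", "B") for f in says_boy) / len(says_boy))    # 1/3

# Model 2: she picked one child at random and reported its sex, so "at least one is a boy"
# gets said with probability proportional to the number of boys in the family
weights = {f: f.count("B") / 2 for f in families}
print(weights[("B", "B")] / sum(weights.values()))               # 1/2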

Comment by dpiepgrass on Dark Side Epistemology · 2020-06-15T15:47:10.241Z · LW · GW
I am now confused.

I don't know why. I have an AMD Ryzen 5 CPU and my earlier premise should make sense if you know what "single-threaded" means.

Why do you think this unknown particle is not compatible with rocks and CPUs?

I thought it was obvious, but okay... let X be a nontrivial system or pattern with some specific mathematical properties. I can't conceive of a rule by which any arbitrary physical representation of X could be detected, let alone interacted with. If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.

Does it pay rent in anticipation?

It pays rent in sensation. I have a first-person subjective experience and I am unable to believe that it is only an abstraction. (Otherwise I probably would have turned atheist much sooner.)

Comment by dpiepgrass on Dark Side Epistemology · 2020-06-13T18:52:05.655Z · LW · GW
Yes, no one would call your GPU conscious.

I wasn't talking about the GPU. Using the word "yes" to disagree with me is off-putting.

How is that different from reductionism?

I never said I rejected reductionism. I reject illusionism.

Ah, it's a magical particle. It is smaller than an electron

Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term "human-like" also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.

So intelligent it knows exactly what to interact with

I do not think it is intelligent, though it may augment intelligence somehow.

How willing would you be to put such an AGI in the state of mind described by reductionists as "pain"

I think it's fair to give illusionism a tiny probability of truth, which could make me hesitant (especially given its convincing screams), but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.

By the way, where will the suffering be located? Is it in the decode unit? The scheduler? the ALU? The FPU? The BTB? The instruction L1 cache? The data L1 cache? Does the suffering extend to the L2 cache? the L3? out to the chipset and the memory sticks? Is this a question that can be answered at all, and if so, how could one go about finding the answer?

Comment by dpiepgrass on Covid-19: My Current Model · 2020-06-11T14:38:14.197Z · LW · GW
"The WHO has lied repeatedly, to our face, about facts vital to our safeguarding our health and the health of those around us. They continue to do so. It’s not different from their normal procedures."

Are you sure not providing evidence for your claims is the right call?

Comment by dpiepgrass on Dark Side Epistemology · 2020-06-06T15:59:11.520Z · LW · GW

Well, the phrase "something-it-is-like to be a thing" is sometimes used as a stand-in for qualia. What I am talking about when I use that word is "the element of experience which, according to the known laws of physics, does not exist". There is only one level of airplane, and it's quarks. It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind. So in the standard reductionist model, there is no meaningful difference between minds and airplanes; a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything. The sun is constantly exploding while being crushed, but it is not in pain. A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind's outputs. Many computer programs (intelligent or not) could be described in similar terms.

Suppose someone invents a shockingly human-like AGI and compiles it to run single-threaded. I run a copy on the same PC I'm using now, inside a GPU-accelerated VR simulation (maybe it runs extremely slowly, at 1/500 real time, but we can start it from a saved teenager-level model and speak to it immediately via a terminal in the VR). Some would claim this AGI is "phenomenally conscious"; I claim it is not, since the hardware can't "know" it's running an AGI any more than it "knows" it is running a text editor inside a web browser on lesswrong.com. It's just fetching and executing a sequence of instructions like "mov", "add", "push", "cmp", "bnz", just as it always has (and it doesn't know it's doing that, either). I claim that, associated with our minds, there is something additional, aside from the quarks, which can feel things or be aware of feelings. This something is not an abstraction (representing a collection of quarks which could be interpreted by another mind as a state that modulates the output of a neural network), but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow. I expect this primitive will, like everything else in the universe, follow computable rules, so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks. (By the way, I also assume that this primitive provides something useful to its host, otherwise animals would not evolve an attachment to it.)

Comment by dpiepgrass on Designing Ritual · 2020-06-02T21:22:26.216Z · LW · GW

So, eight years later... how are the rituals going? Here's hoping the Galaxy Song is a favorite...

Just remember that you're standing on a planet that's evolving
And revolving at 900 miles an hour.
It's orbiting at 19 miles a second, so it's reckoned,
The sun that is the source of all our power.
Now the sun, and you and me, and all the stars that we can see,
Are moving at a million miles a day,
In the outer spiral arm, at 40,000 miles an hour,
Of a galaxy we call the Milky Way.

Comment by dpiepgrass on Book of Mormon Discussion · 2020-06-02T20:45:23.280Z · LW · GW

LessWrong people generally don't have enough exposure to the LDS church or the Book of Mormon to have a reason to take it seriously, and their worldview is so different that it would be hard for them to even understand why Mormons take it seriously in the first place.

But as a former Mormon, who was born into the church and attended church almost every Sunday for at least 30 years, I'd like to summarize my journey (briefly, as this discussion died off long ago):

  • I read the Book of Mormon three times, taking it seriously each time. I was often confused by parts of the book and other doctrines, but I generally assumed it was something wrong with me instead of the book or the doctrines. (edit: as I matured I was forced to see it as obfuscation by God or, in the most nonsensical cases, evidence that he was evil or did not exist, interpretations I was very reluctant to accept, of course; at most I would ask what the evidence compels me to believe. Remember also that even scientists do not throw out a hypothesis without a better one to replace it: the failed Michelson-Morley experiment did not cause everyone to suddenly decide there was no aether, because there was no Special Relativity to replace it; similarly, the theory of evolution was not a workable alternative to me, partly because I had learned about it only from those who were incompetent to teach me about it, and partly because I was quite sure that I had a soul, which evolutionary theory could not explain. Therefore I often thought about a third alternative, which was the idea that our world was some sort of experiment by 'researchers' who were messing with us for some reason.)
  • I prayed many times, hoping God would keep Moroni's promise but He never did. I have taken morality seriously since I was a child and always tried to do what was right; my only significant failure was my inability to stop masturbating. I supposed (hoped!) that this was the reason why God wouldn't answer my prayers. It was emotionally very painful that God wouldn't answer my prayers; in my depression, I would think of worthiness like a high-jump competition in which the bar is invisible, and you can't measure how high you or other competitors are jumping, and you just have to keep jumping and hoping that someday you will clear the bar. This felt unjust, but I had no way to be sure that God was just. I often met with my bishop, who could not answer my questions and often had to rely on backstop answers like "God hasn't revealed all the answers to us yet".
  • I sometimes heard internet atheists talking about silly aspects of religion, but always in a way that preached to the choir (I had not discovered Luke M's commonsenseatheism.com). Their analyses were superficial and not persuasive to me. It seemed to me that I had to take evolution seriously, but I didn't realize how little I knew about it, and I found an "intelligent design" video persuasive (the video did not talk about God, did not support "creationism", was designed to sound scientific, and was ostensibly about scientists challenging the traditional "assumptions" of evolutionary theory).
  • As I began to take seriously the possibility that God wasn't real, I engaged in a two-pronged strategy where I paid 10% tithing and also gave 10% to charities. If God wasn't real, tithing was useless and I would have to do good some other way, but if He was real, I needed to show Him I was willing to be his faithful servant. This lasted maybe two years, and then I reduced my tithing to 5% as my faith waned.
  • I began to worry that if God was real, he sure wasn't very nice or trustworthy. I found the following quote, which resonated with me: "Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones" If God was just, he wouldn't condemn me for the legitimate reasons I might have for suspecting he was unreal or unjust, and if he was unjust, I shouldn't even worship him at all. This quote was crucial in giving me permission to follow the evidence wherever it led.
  • Somehow, I don't remember how, I came across the CES Letter in January 2015, and after reading it, I understood that the church was false and I left it immediately. Having done this, I quickly understood some other important things. First, I noticed that I wasn't willing to go back to church and share the CES letter in any major way. I wasn't willing to print copies and share them, or give a dramatic talk where the Bishop cuts off my microphone, and when I told my best friend about the CES letter, he had no interest in reading it (and remains a Full Tithe Payer to this day). So now I understood survivorship bias—the reason I was surrounded by believers for 30 years, with no one seriously challenging my beliefs (see also evaporative cooling of group beliefs). Second, I noticed something about the evidence that Mormonism was false: the evidence existed only because Mormonism is recent; the church started in the era of the printing press, and there are many surviving records from the 19th century. Suppose for a moment that Christianity is false, and that in the time of Christ there was a lot of clear evidence it was false, just as there was evidence against Mormonism. What would have happened to that evidence? The believers would make many copies of their Holy Book, but the unbelievers - having no printing press, and caring much less about fighting Christianity than the Christians cared to support it - would not reliably preserve the evidence that it was false. I found it very plausible that this indeed had happened.
  • I learned more about LessWrong/EA/SSC in the years afterward and became a regular visitor. I also discovered and signed the Giving What We Can Pledge, which was easy to do. It would have been far better to learn about all this earlier in my life!

Comment by dpiepgrass on When (Not) To Use Probabilities · 2020-06-01T16:14:25.835Z · LW · GW

For instance, suppose you have a certain level of gut feeling X that the papers saying LHC will not destroy the world have missed something, a gut feeling Y that, if something has been missed, the LHC would destroy the world, and a third gut feeling Z that the LHC will destroy the world when switched on. Since humans lack multiplication hardware, we can expect that Z ≠ X·Y (and probably Z > X·Y, which might help explain why a girl committed suicide over LHC fears). Should we trust Z directly instead of computing X·Y? I think not. It is better to pull numbers out of your butt and do the math, than pull the result of the math out of your butt directly.
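
To make that concrete with invented numbers: if the gut reports X = 10% and Y = 50%, the computed estimate is X·Y = 0.10 × 0.50 = 5%, whereas a gut asked for Z directly, never having actually performed the conjunction, can easily report a figure several times larger.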

Comment by dpiepgrass on The Allais Paradox · 2020-06-01T05:08:44.495Z · LW · GW

I would further predict that if someone is wealthy enough, or if the winning amount is small, e.g. $24 and $27, they are much more likely to choose 1B over 1A - because of how much less emotionally devastating it would be to lose, or rather, how much less devastating the participant imagines losing to be.

I decided to Google for literature on this and found this analysis. It takes some effort to decode, but if I understand Table 1 correctly, (1) experiments testing the Allais Paradox have results that often seem inconsistent with each other, and strange at first glance (roughly speaking, more people choose 1B & 2A than you'd think), which reflects a bunch of underlying complexity described in section 3; (2) to the extent there is a pattern, I was right about the smaller bets; and (3) the decision to maximize expected financial gain (1B & 2B ≃ RR in Table 1) is the most popular choice in 43% of experiments.
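
For reference, here are the expected values with the payoffs from Eliezer's post (as I recall them: 1A = $24,000 for certain; 1B = 33/34 chance of $27,000; 2A = 34% chance of $24,000; 2B = 33% chance of $27,000):

EV(1A) = $24,000
EV(1B) = (33/34) × $27,000 ≈ $26,206
EV(2A) = 0.34 × $24,000 = $8,160
EV(2B) = 0.33 × $27,000 = $8,910

so 1B & 2B is indeed the pair that maximizes expected money in both rounds.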

Comment by dpiepgrass on Dark Side Epistemology · 2020-05-24T14:27:45.082Z · LW · GW

Again, as a non-illusionist, I disagree that physiological consciousness necessarily implies qualia (or that an AGI necessarily has qualia). It seems merely to be a reasonable assumption (in the human case only).

Comment by dpiepgrass on Dark Side Epistemology · 2020-04-15T20:38:22.841Z · LW · GW

If I take a digital picture, I can convert the file to BMP format and extract the "red" bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem. The idea that everyone has qualia is inductive: I have qualia (I used to call it my "soul"), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it's doomed to be a "maybe". If someone were to invent a test for qualia, perhaps we couldn't even tell if it works properly without solving the hard problem of consciousness.

Comment by dpiepgrass on Circular Altruism · 2019-12-10T18:39:14.511Z · LW · GW

The loss of $100,000 (or one cent) is more or less significant depending on the individual. Which is worse: stealing a cent from 100,000,000 people, or stealing $100,000 from a billionaire? What if the 100,000,000 people are very poor and the cent would buy half a slice of bread and they were hungry to start with? (Tiny dust specks, at least, have a comparable annoyance effect on almost everyone.)

Eliezer's main gaffe here is choosing a "googolplex" of people with dust specks when humans do not even have an intuition for googols. So let's scale the problem down to a level a human can understand: instead of a googolplex dust specks versus 50 years of torture, let's take "50 years of torture versus a googol (1 followed by 100 zeros) dust specks", and scale it down linearly to "1 second of torture versus 6.33 x 10^90 dust specks, one per person" - which is still far more people than have ever lived, so let's make it "a dust speck once per minute for every person on Earth for their entire lives (while awake) and make it retroactive for all of our human ancestors too" (let's pretend for a moment that humans won't evolve a resistance to dust specks as a result). By doing this we are still eliminating virtually all of the dust specks.

So now we have one second of torture versus roughly 2 billion billions of dust specks, which is nothing at all compared to a googol of dust specks. Once the numbers are scaled down to a level that ordinary college graduates can begin to comprehend, I think many of them would change their answer. Indeed, some people might volunteer for one second of torture just to save themselves from getting a tiny dust speck in their eye every minute for the rest of their lives.
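
For anyone who wants to check the scaling, a couple of lines of arithmetic (approximate, and assuming 16 waking hours per day):

seconds_in_50_years = 50 * 365.25 * 24 * 3600     # ~1.6e9 seconds
print(1e100 / seconds_in_50_years)                # ~6.3e90 specks per second of torture
print(80 * 365.25 * 16 * 60)                      # ~2.8e7 specks: one per waking minute for 80 years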

The fact that humans can't feel these numbers isn't something you teach by just saying it. You teach it by creating a tension between the feeling brain and the thinking brain. Due to your ego, I would guess your brain can better imagine feeling a tiny dust speck in its eye once per minute for your entire life - 20 million specks - than 20 million people getting a tiny dust speck in their eye once, but how is it any different morally? For most people also, 20 billion people with a dust speck feels just the same as 20 million. They both feel like "really big numbers", but in reality one number is a thousand times worse, and your thinking brain can see that. In this way, I hope you learn to trust your thinking brain more than your feeling one.


Comment by dpiepgrass on Circular Altruism · 2019-12-10T17:45:36.213Z · LW · GW

In a world sufficiently replete with aspiring rationalists there will be not just one chance to save lives probabilistically, but (over the centuries) many. By the law of large numbers, we can be confident that the outcome of following the expected-value strategy consistently (even if any particular person only makes a choice like this zero or one times in their life) will be that more total lives will be saved.
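
A minimal simulation of that law-of-large-numbers point, using the numbers from Eliezer's post (save 400 lives with certainty vs. a 90% chance of saving 500):

import random

def total_saved(generals, take_the_gamble):
    saved = 0
    for _ in range(generals):
        if take_the_gamble:
            saved += 500 if random.random() < 0.9 else 0   # 90% save all 500, 10% save none
        else:
            saved += 400                                    # the certain option
    return saved

random.seed(0)
print(total_saved(10_000, take_the_gamble=False))   # 4,000,000 lives
print(total_saved(10_000, take_the_gamble=True))    # ~4,500,000 lives on average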

Some people believe that "being virtuous" (or suchlike) is better than achieving a better society-level outcome. To that view I cannot say it better than Eliezer: "A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan."

I see a problem with Eliezer's strategy that is psychological rather than moral: if 500 people die, you may be devastated, especially if you find out later that the chance of failure was, say, 50% rather than 10%. Consequentialism asks us to take this into account. If you are a general making battle decisions, which would weigh on you more? The death of 500 (in your effort to save 100), or abandoning 100 to die at enemy hands, knowing you had a roughly 90% chance to save them? Could that adversely affect future decisions? (in specific scenarios we must also consider other things, e.g. in this case whether it's worth the cost in resources - military leaders know, or should know, that resources can be equated with lives as well...)

Note: I'm pretty confident Eliezer wouldn't object to you using your moral sense as a tiebreaker if you had the choice between saving one person with certainty and two people with 50% probability.

Comment by dpiepgrass on Belief in Self-Deception · 2019-11-10T19:01:37.115Z · LW · GW
This is why intelligent people only have a certain amount of time (measured in subjective time spent thinking about religion) to become atheists.

Just a data point. I spent over twenty (20) years thinking multiple hours every week about subjects related to my religion. I was deeply confused, but I needed too badly for it to be true to go earnestly looking for evidence that it was false. Which reminds me of another Yudkowsky quote:

Existential depression has always annoyed me; it is one of the world's most pointless forms of suffering.

If my religion was false, not only would it mean that the people around me were horrifyingly delusional for believing it, but it would also mean that the wonderful future I was told about would be replaced with the utter destruction of my soul—and everyone's soul—at death.

"The telestial kingdom is so great, if we knew what it was like we would kill ourselves to get there." - Joseph Smith, Apocryphal (the telestial kingdom was the lowest tier of the afterlife, i.e. hell)

As the years passed, very slowly and inevitably, I lost faith. But why did it take me over 20 years between the onset of doubt and my decision to leave the religion? It's easy to yell out "confirmation bias". But everyone has that. I think the real problem is that in all that time, no one gave me a link to cesletter.org. I heard lots of atheists hurling cheap insults at believers, belittling them, talking about how obvious it was that they were right and we were wrong. I heard precious few people making strong but fair and compassionate arguments of the sort I needed to hear.

Comment by dpiepgrass on Dark Side Epistemology · 2019-09-15T16:42:12.454Z · LW · GW

Lacking self-awareness (in the sense described above: habitually declining to engage in metacognitive thinking) is different from lacking consciousness/qualia. I am not claiming that they lack the latter. But, I do wonder if there have been any investigations into whether qualia are universal among humans, and I wonder how one would go about detecting qualia (it's vaguely like a Turing test; a human without qualia would likely not intentionally deceive the tester the way a computer might during a Turing test, but would of course be unaware that there is any difference between his/her experience and anyone else's, and can be expected to deny any difference exists.)

Comment by dpiepgrass on Dark Side Epistemology · 2019-06-02T13:13:49.539Z · LW · GW

Looking at Scott Alexander's Argument From My Opponent Believes Something, I guessed that the general Dark Side technique he's describing was misrepresentation borne out of sloppy analog thinking. But at the end he points out that he has listed a set of Fully General Counterarguments, all of which are tools of the dark side since they can attack any position and lead to any conclusion:

It is an unchallengeable orthodoxy that you should wear a coat if it is cold out. Day after day we hear shrill warnings from the high priests of this new religion practically seething with hatred for anyone who might possibly dare to go out without a winter coat on. But these ideologues don’t realize that just wearing more jackets can’t solve all of our society’s problems. Here’s a reality check – no one is under any obligation to put on any clothing they don’t want to, and North Face and REI are not entitled to your hard-earned money. All that these increasingly strident claims about jackets do is shame underprivileged people who can’t afford jackets, suggesting them as legitimate targets for violence. In conclusion, do we really want to say that people should be judged by the clothes they wear? Or can we accept the unjacketed human body to be potentially just as beautiful as someone bundled beneath ten layers of coats?

Comment by dpiepgrass on Dark Side Epistemology · 2019-06-02T06:14:24.704Z · LW · GW

The acronym FLICC describes techniques of science denial and alludes to a lot of dark side epistemology:

F - Fake Experts (and Magnified Minority): you've got your scientists and I've got mine (and even though There's No Consensus, mine are right and yours are wrong, that's for sure).

L - Logical fallacies

I - Impossible expectations. This refers to an unrealistic expectation of proof before acting on evidence. It tends to be paired with very low demands of evidence for the contrary position (confirmation bias). This is often unnecessary because if the goal is inaction (e.g. don't bother to lower emissions or get vaccinations) you can just have an unreasonable standard of proof for both sides and take no action as a default. Nevertheless this heavily lopsided analysis occurs in practice.

C - Cherry picking of data (perhaps this is just another logical fallacy, but it is more central to science denial than other logical fallacies)

C - Conspiracy theories. One "dark side" thing about conspiracy theories is their self-sealing quality - evidence contrary to one's position can always be explained by assuming it was generated by the conspiracy, so the conspiracy theory tends to grow larger over time until it is a massive global conspiracy with untold thousands of actors hiding the hidden truth. An even more interesting and common dark-side trick, though, is to believe in a conspiracy without ever thinking about the conspiracy. Most people aren't dumb enough to believe in a massive global conspiracy, but they use an assumption of some amount of conspiracy as a "background belief": they rely mainly on FLIC, and just use Conspiracy Theory as a last resort, so Conspiracy serves as a window dressing to cover any remaining issues that otherwise wouldn't make sense in their version of "the truth". Or maybe it just looks that way: the science denier may know that talking about their conspiracy theory would make them sound more nutty, so they outwardly prefer to rely on other arguments and fall back on conspiracy as a last resort.

Comment by dpiepgrass on Dark Side Epistemology · 2019-06-02T05:49:14.745Z · LW · GW
Write obscurely. Never explicitly state your beliefs. Ignore the entire machinery of rationality.

All good stuff. Perhaps dark side epistemology is mainly about behaviors, not beliefs? A list of behaviors I noticed while speaking to climate science deniers:

  • First and foremost, they virtually never admit that they got anything wrong, not even little things. (If you spot someone admitting they were wrong about something, congrats! You may have stumbled upon a real skeptic!)
  • They don't construct a map of the enemy's territory: they have a poor mental model of how the climate system works. After all, they are taught “models can’t be trusted,” even though all science is built on models of some sort. Instead they learn a list of stories, ideas and myths, and they debate mainly by repeating items on their list.
  • They often ignore your most rock solid arguments, as if you'd said nothing at all, and they attack whatever they perceive to be your weakest point.
  • They think they are "scientific". I was astonished at one denier's ability to sound sciencey.... but then I saw how GPT2 could say plausible things without really understanding what it was saying, and I saw Eliezer talking about the "literary genre" of science, so I guess that's the answer - certain people somehow pick up and mimic the literary genre of science without understanding or caring about its underlying substance.
  • They lack self-awareness. You'll never ever hear them say "Okay, I know this might sound crazy, but those thousands of climate scientists are all wrong. I can't blame you for agreeing with a supermajority, but if you'll just hear me out, I will explain how I, a non-scientist, can be certain the contrarians are right. Just let me know if I've made some mistake in my reasoning here..." (which reminds me of an interesting idea I had after reading about philosophical zombies... is it possible that people who seem to lack self-awareness literally lack self-awareness? That they are zombies?)
  • So, they are not introspective: they’re not thinking about how they think. So they haven’t thought about the Dunning-Kruger effect (meme!), and confirmation bias is something that happens to other people. “Motivated reasoning? Not me! So what if I do? Everybody does it…”
  • It's as if schoolyard irony is an important defense mechanism for them. They take accusations often used against them, and toss them at detractors. They’ll say you’re in a “cult” or “religion” for believing humans cause warming, that you lie, fudge data, are “closed-minded”, etc. One guy called me a “denier” (in denial that it’s all a hoax) even though I had not called him a denier. In general you can expect attacks on your character even if you were careful not to attack them, yet these attacks will seem like plausible descriptions of the attacker. Similarly, they may dismiss talk of the scientific literature or consensus as “appeals to authority”, apparently oblivious to the authorities (Rush Limbaugh, Roy Spencer, and many others) upon which their own opinion is based. Last but not least, they’ll complain of “politicizing the science” while politicizing the science.
  • Lack of knowledge seems to satisfy them as a knowledge substitute — e.g. “I’ve not seen evidence for X, so I can safely assume X is false” or “I’ve not seen evidence against X, so I can safely assume X is true.” Missing knowledge somehow provides not merely hope, but great confidence that the experts are wrong.

Comment by dpiepgrass on Another Critique of Effective Altruism · 2019-05-14T15:35:41.964Z · LW · GW

This post is now five years old. It seems to me that EA has shifted from where it was five years ago (not that I was around back then), and it seems that people in EA largely share your concerns. It is generally recognized that doing good effectively is a very complex challenge, much of which is hard to quantify, but that we should try our best to figure out how to do it (witness the increasing popularity of global priorities research).

I don't like it when people "burn the candle from both ends". You complain about over-reliance on quantitative measures - but also about the use of "heuristic justification" with insufficient research. You don't offer examples, but the latter complaint brings to my mind comments on EA forums - or lesswrong forums for that matter - in which people propose interventions based on their intuition. But can we really expect more from comment sections? At least in EA, unlike most other communities, the norm is to recognize we might be wrong and seriously consider criticism.

Comment by dpiepgrass on [April Fools] User GPT2 is Banned · 2019-04-16T19:16:30.478Z · LW · GW

Maybe by next year they'll have an adversarial anti-GPT AI trained to distinguish GPT2 (GPT3? GPT4?) comments from humans. Then GPT can create 50 replies to every human comment, and of those, the other AI will decide which of the replies sounds the *least* like GPT and post that one.

April Fool's day: the funniest step on the path to weaponized AI.
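
A hypothetical sketch of that selection loop (generate_reply and gpt_probability stand in for models that would have to exist first):

def least_detectable_reply(comment, generate_reply, gpt_probability, n=50):
    # generate n candidate replies, then post the one the adversarial detector
    # rates as least likely to be GPT-generated
    candidates = [generate_reply(comment) for _ in range(n)]
    return min(candidates, key=gpt_probability)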

Comment by dpiepgrass on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-10T04:27:05.028Z · LW · GW

Rationalists have a tendency to sound a little bit like Spock, but in reality we are all human here. I'd say that doing a good job managing relationships with other humans, and learning to be kind, doesn't just fall within the realm of rationalism - it's crucial to our success! There are a number of things I love about Scott. To me he seems insightful and even-handed, but most relevant here is that he seems like a nice person. So I was not at all impressed the first time I heard Jordan, when he said:

"Alright so I just read the worst Slate Star Codex article I've ever read (a new low) and I'm now more determined than ever to host an event where I try to convince members of this community that Scott Alexander is a pseudo-intellectual not worth reading."

This was not followed by any explanation of what Scott had done wrong, or what a "pseudo-intellectual" is. Though reportedly it was meant to be facetious, I just couldn't read it that way. If you're going to criticize Scott, try at least not to make obvious mistakes that Scott himself wouldn't make, such as sounding like a jerk.

That said, I am curious what Jordan has to say. For starters, which writers are sufficiently free from mistakes that they are "worth reading", and what criteria qualify a person as "intellectual"?

I'm sure Scott has made mistakes. Personally, I make mistakes with shocking regularity. And I do think when Scott is talking about a subject where he has little expertise, the disclaimer at the top about that lack of expertise should not be in small text, and Scott may need more expressions of uncertainty (weasel words).

But I think there is a tension between correctness and popularity. The thing is, perfection is not only difficult, it's time consuming. I have been known to review my own articles over a dozen times before posting them (and errors may still slip through). My carefulness in turn leads to a low posting frequency, which probably contributes to my unpopularity. If you want to be popular, you have to put out. Look at The Eliezer - I think his stuff is riddled with defects of various types, but he wrote fast and was popular. His stuff is at least mostly good enough to be worth reading, so I do.

I wonder if we could develop a process where the community's best writers (and the occasional newcomer) could write "drafts" which would be posted semi-publicly in a "draft" area and then be edited by members trusted by the author (with expert input, if the subject matter demands expert input), before being reposted "publicly for reals this time". Though if you ask me, priority one for improvement isn't SSC, it's The Sequences.