Comment by astrasequi on Learning values versus learning knowledge · 2016-10-18T14:17:18.776Z · score: 0 (0 votes) · LW · GW

I think this is a special case of the problem that it's usually easier for an AI to change itself (values, goals, definitions) than for it to change the external world to match a desired outcome. There's an incentive to develop algorithms that edit the utility function (or variables storing the results of previous calculations, etc.) to redefine or replace tasks in a way that makes them easier or unnecessary. This kind of ability is necessary, but in the extreme the AI will stop responding to instructions entirely, because the goal of minimizing resource usage leads it to develop the equivalent of an "ignore those instructions" function.

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-21T03:31:15.994Z · score: 0 (0 votes) · LW · GW

It actually does have practical applications for me, because it will be part of my calculations. I don't know whether I should have any preference for the distribution of utility over my lifetime at all, before I consider things like uncertainty and opportunity cost. Does this mean you would say the answer is no?

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-21T03:25:30.763Z · score: 0 (0 votes) · LW · GW

I can think of examples where I behaved both ways, but I haven't recorded the frequencies. In practice, I don't feel any emotional difference. If I have a chocolate bar, I don't feel any more motivated to eat it now than to eat it next week, and the anticipation from waiting might actually lead to a net increase in my utility. One of the things I'm interested in is whether there's anyone else who feels this way, because it seems to contradict my understanding of discounting.

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-19T06:17:34.537Z · score: 0 (0 votes) · LW · GW

That assumption is to make time the only difference between the situations, because the point is that the total amount of utility over my life stays constant. If I lose utility during the time of the agreement, then I would accept a rate that earns me back an amount equal to the value I lost. But if I only "want" to use it today and I could use it to get an equal amount of utility in 3 months, then I don't have a preference.

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-19T05:45:53.558Z · score: 1 (1 votes) · LW · GW

Thanks for that – the point that I’m separating out uncertainty helped clarify some things about how I’m thinking of this.

So is time inconsistency the only way that a discount function can be self-inconsistent? Is there any reason other than self-inconsistency that we could call a discount function irrational?

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-19T05:44:57.258Z · score: 1 (1 votes) · LW · GW

Second, with respect to "my intuition is not to discount at all", let's try this. I assume you have some income that you live on. How much money would you take at the end of three months to not receive any income at all for those three months? Adjust the time scale if you wish.

If I received an amount equal to the income I would have gotten normally, then I have no preference over which option occurs. This still assumes that I have enough savings to live from, the offer is credible, there are no opportunity costs I’m losing, no effort is required on my part, etc.

In general, you can think of discounting in terms of loans. Assuming no risk of default, what is the interest rate you would require to lend money to someone for a particular term?

This is the same question, unless I misunderstood. I do have a motivation to earn money, so practically I might want to increase the rate, but I have no preference between not loaning and a rate that will put me in the same place after repayment. With my assumptions, the rate would be zero, but it could increase to compensate - if there's an opportunity cost of X, I'd want to get X more on repayment, etc.

Comment by astrasequi on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-18T18:05:42.497Z · score: 1 (1 votes) · LW · GW

I have some questions on discounting. There are a lot, so I'm fine with comments that don't answer everything (although I'd appreciate it if they do!). I'm also interested in recommendations for a detailed intuitive discussion of discounting, à la EY on Bayes' Theorem.

  • Why do people focus on hyperbolic and exponential? Aren't there other options?
  • Is the primary difference between them the time consistency?
  • Are any types of non-exponential discounting time-consistent?
  • What would it mean to be an exponential discounter? Is it achievable, and if so how?
  • What about different values for the exponent? Is there any way to distinguish between them? What would affect the choice?
  • Does it make sense to have different discounting functions in different circumstances?
  • Why should we discount in the first place?
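For concreteness, here is a small sketch (my own illustration, not from any of the discussions referenced here; all parameter values are arbitrary) of the standard distinction the first few questions gesture at: an exponential discounter's preferences between a smaller-sooner and a larger-later reward never reverse as the rewards approach, while a hyperbolic discounter's can.

```python
def exponential(t, r=0.05):
    """Exponential discount factor for a delay of t periods."""
    return (1 + r) ** -t

def hyperbolic(t, k=0.5):
    """Hyperbolic discount factor for a delay of t periods."""
    return 1 / (1 + k * t)

def prefers_larger_later(discount, t_small, v_small, t_large, v_large, now=0):
    """True if, evaluated at time `now`, the larger-later reward wins."""
    return v_large * discount(t_large - now) > v_small * discount(t_small - now)

# Choice: $50 at t=10 vs $60 at t=11, evaluated far in advance (now=0)
# and again at the last moment (now=10).
for d in (exponential, hyperbolic):
    early = prefers_larger_later(d, 10, 50, 11, 60, now=0)
    late = prefers_larger_later(d, 10, 50, 11, 60, now=10)
    print(d.__name__, early, late)
```

With these numbers the exponential discounter picks the larger-later reward at both evaluation times, while the hyperbolic discounter switches to the smaller-sooner reward once it is imminent; that preference reversal is what "time inconsistency" refers to.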

On a personal level, my intuition is not to discount at all, i.e. my happiness in 50 years is worth exactly the same as my happiness in the present. I'll take $50 right now over $60 next year because I'm accounting for the possibility that I won't receive it, and because I won't have to plan for receiving it either. But if the choice is between receiving it in the mail tomorrow or in 50 years (assuming it's adjusted for inflation, I believe I'm equally likely to receive it in both cases, I don't need the money to survive, there are no opportunity costs, etc), then I don't see much of a difference.

  • Is this irrational?
  • Or is the purpose of discounting to reflect the fact that those assumptions I made won't generally hold?
  • The strongest counterargument I can think of is that I might die and not be able to receive the benefits. My response is that if I die I won't be around to care (anthropic principle). Does that make sense? (The discussions I've seen seem to assume that the person will be alive at both timepoints in any case, so it's also possible this should just be put with the other assumptions.)
  • If given the choice between something bad happening now and in 10 years, I'd rather go through it now (assume there are no permanent effects, I'll be equally prepared, I'll forget about the choice so anticipation doesn't play a role, etc). Does that mean I'm "negative discounting"? Is that irrational?
  • I find that increasing the length of time I anticipate something (like buying a book I really want, and then deliberately not reading it for a year) usually increases the amount of happiness I can get from it. Is that a common experience? Could that explain any of my preferences?

Comment by astrasequi on Wikipedia usage survey results · 2016-07-18T16:36:56.350Z · score: 0 (0 votes) · LW · GW

I think the value of a Wikipedia pageview may not be fully captured by data like this on its own, because it's possible that the majority of the benefit comes from a small number of influential individuals, like journalists and policy-makers (or students who will be in those groups in the future). A senator's aide who learns something new in a few years' time might have an impact on many more people than the number who read the article. I'd actually assign most of my probability to this hypothesis, because that's the distribution of influence in the world population.

ETA: the effects will also depend on the type of edits someone makes. Some topics will have more leverage than others, adding information from a textbook is more valuable than adding from a publicly available source, and so on.

Comment by astrasequi on Anti-reductionism as complementary, rather than contradictory · 2016-05-30T09:19:38.348Z · score: 0 (0 votes) · LW · GW

This can be illustrated by the example of evolution I mentioned: An evolutionary explanation is actually anti-reductionist; it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology.

This doesn't acknowledge the other things explained on the same grounds. It's a good argument if the principles were invented for the single case you're explaining, but here they're universal. If you want to include inclusive genetic fitness in the complexity of the explanation, I think you need to include everything it's used for in the complexity of what's being explained.

Comment by astrasequi on The Thyroid Madness: Two Apparently Contradictory Studies. Proof? · 2016-04-23T14:02:03.374Z · score: 1 (1 votes) · LW · GW

Sure, this experiment is evidence against 'all fat, tired people with dry hair get better with thyroxine'. No problem there.

Okay, but you said it was evidence in favor of your own hypothesis. That’s what my question was about.

Yes, it is kind of odd isn't it? One of the pills apparently made them a bit unwell, and yet they couldn't tell which one. I notice that I am confused.

Suppose they’re measuring on a 10-point scale, and we get ordered pairs of scores for time A and time B. One person might have (7,6), another (4,3), another (5,6), then (9,7), (7,7), (4,5), (3,2)... Even if they’re aware of their measurements (which they might not be), all sorts of things affect their scores, and it’s unlikely that any one person would be able to draw a conclusion. You’re basically asking an untrained patient to draw a conclusion from an n of 1.
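Working through the made-up score pairs above makes the point concrete: every individual change is within the scale's ordinary day-to-day noise, yet the group mean can still move.

```python
# Illustrative only: each person's (time A, time B) score on a
# hypothetical 10-point well-being scale, as in the example above.
pairs = [(7, 6), (4, 3), (5, 6), (9, 7), (7, 7), (4, 5), (3, 2)]

diffs = [b - a for a, b in pairs]          # per-person change
mean_change = sum(diffs) / len(diffs)      # group-level signal

print(diffs)        # [-1, -1, 1, -2, 0, 1, -1]
print(mean_change)  # about -0.43, a small average decline

# No single person's change stands out from noise, so no individual can
# tell which pill they got, yet the group average still shifts enough
# for a statistical test to pick up.
```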

But that's awful! Once, there was a diagnostic method, and a treatment that worked fine, that everyone thought was brilliant. Then they invented a test, which is very clever, and a good test for what it tests, and the result of that is that lots of people are ill and don't get the treatment any more and have to suffer horribly and die early.

There are several assumptions here that I think are probably incorrect, the biggest being the causal link between introducing the test and people suffering. But what I described before is just the application of reductionism to better distinguish between disease states based on their causal mechanism.

If that's normal then there's something badly wrong with normal. A new way of measuring things should help!

Sometimes, but replacing an objective measurement with a subjective one isn’t usually a step forward.

Seriously, if 'start off with low doses and keep raising the dose until you get a response' is inaccessible to testing, then something is broken.

Problems with this include: you can’t justify the parameters of the dose increase, you still have to agree on how to measure the response, and you also have a multiple testing issue. It isn’t inaccessible, but it’s a complication (potentially a major one), and that’s just in the abstract. Practically, in any one situation there might be another half dozen issues that wouldn’t be apparent to anyone who isn’t an expert.

But in fact, just 'low basal metabolic rate in CFS' would be good evidence in favour, I think. We can work out optimal treatments later.

Not knowing anything about the subject, I would expect to observe a low basal metabolic rate in CFS regardless of its ultimate cause or causes.

At that point, we're all post-modernists aren't we? The truth is socially determined.

No, it just means we put very little weight on individual studies. We don’t pay much attention to results that haven’t been replicated a few times, and rely heavily on summaries like meta-analyses.

Science is not unreliable...

You’re talking about the overall process and how science moves in the direction of truth, which I agree with. I’m talking on the level of individual papers and how our current best knowledge may still be overturned in the future. But you can leave out “just like...wisdom” from the paragraph without losing the main points.

There's at least a possibility here that medical science is getting beaten hollow by chiropractors and quack doctors and internet loonies, none of whom have any resources or funding at all.

The alt med people have a lot of funding. It’s a multi-billion-dollar industry.

Even the possibility is enough to make me think that there's something appallingly badly wrong with the methods and structure of medical science.

A few things, not just one, but it’s the best we have at the moment.

Comment by astrasequi on The Thyroid Madness: Two Apparently Contradictory Studies. Proof? · 2016-04-23T14:01:35.085Z · score: 1 (1 votes) · LW · GW

This open-access article discusses some of the issues in cancer research.

In most ways biology is intermediate between the hard and soft sciences, with all that implies. It’s usually impossible to identify all the confounders, most biologists are not trained in statistics, experiments are complex and you can get different results from slight variations in protocol, we're trying to generalize from imperfect models, many high-profile results don’t get tested by other labs, ... all these factors come together and we get something that people call a “replication crisis.”

Comment by astrasequi on The Thyroid Madness: Two Apparently Contradictory Studies. Proof? · 2016-04-19T19:45:34.979Z · score: 2 (2 votes) · LW · GW

If none of the patients had had any sort of thyroid problem, I'd have expected it to be equally bad for everyone.

I’m talking about conservation of expected evidence. If X is positive evidence, then ~X is negative evidence. An experiment only supports a hypothesis if it was possible for it to come out another way that refutes it. And if an experiment that could have supported the hypothesis actually didn’t, then it’s evidence against.
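A quick numeric sketch of conservation of expected evidence (the numbers are arbitrary): because the prior is the weighted average of the posteriors under X and under ~X, if X would raise your probability of H, then ~X must lower it.

```python
# Arbitrary illustrative probabilities.
p_x = 0.3                 # P(X)
p_h_given_x = 0.8         # P(H | X)
p_h_given_not_x = 0.4     # P(H | ~X)

# The prior on H is the expectation over whether X is observed:
p_h = p_h_given_x * p_x + p_h_given_not_x * (1 - p_x)  # = 0.52

assert p_h_given_x > p_h       # X would be evidence for H...
assert p_h_given_not_x < p_h   # ...so ~X is necessarily evidence against H
```

The two inequalities cannot both point the same way: if both posteriors were above the prior, their weighted average would exceed the prior, which is a contradiction.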

What makes me think that they felt bad on thyroxine is table 2, where all the 'self-reported' psychological scores have got worse from thyroxine. In particular p=0.007 for the decline in Vitality. Since, as you point out, they really didn't know which was which, it's hard to see how they could have faked that.

Terminology, then. When you said “Thyroxine is very strongly disliked by the healthy controls (they could tell it from placebo and hated it),” that suggested they could identify the active treatment.

Absolutely this treatment is harmful to healthy people.

The people in the study had symptoms. Even if you think their symptoms were mild or unrepresentative, you shouldn’t call them healthy. It’s fair to extend the conclusion to cover people without those symptoms, but I think that’s an important difference.

Yes, but that does mean that anything that needs careful dose control will get rejected.

It’s more that you need an easily followed protocol. Anything else, especially anything subjective, is unlikely to be practically feasible, and will probably not be reproducible.

The TSH test replaced that around 1970. But they never seem to have checked that clinical and biochemical diagnoses detected the same things, and after that there was the slow emergence of all sorts of nasty diseases that look very like hypothyroidism in the clinical sense but have normal TSH.

This is normal. Clinical presentations often have many causes, which makes it almost impossible to progress. Eventually we break them down based on their causal mechanisms so we can treat them individually. Each time we find a new cause, some of the cases will be left unexplained.

These are the only ones I can find through google scholar / pubmed. That in itself is really surprising and one of the things I can't explain! Why has such an obvious thing not been ruled out?

There are a lot of interesting hypotheses competing for resources, and we have to decide which ones are worth considering. I can’t say what the reason might be here, but there are a lot of possibilities. For example, it might not be possible to design a study like the one you want that could effectively answer the question.

Really? Forty years of experience in treating patients is less valuable than a single anecdote published in a journal? Really?

Yes. Expert opinion (i.e., the opinion of individual experts, not expert consensus) is the lowest level because you can find an expert to support pretty much any proposition that isn’t obviously ridiculous, and sometimes even if it is. In fact, this is true higher in the hierarchy as well, which is why we use syntheses of evidence so much. I can’t stress this enough: in biology, you can use peer-reviewed evidence to make plausible arguments for arbitrary hypotheses.

All the rest of it is anecdotal, from alternative sources, but there's a mountain of it.

The point of evidence-based medicine is that perceptions are unreliable. That includes the perceptions we call clinical experience (which once said that bloodletting was an important medical treatment). Keep in mind that doctors aren’t scientists and usually don’t even qualify as experts. EBM is unreliable too, but less so, just like science is unreliable but is still better than ancestral wisdom.

The TSH test ruling out hypothyroidism is expert opinion. Its reliability is unfounded dogma.

This sounds like you’re saying the TSH test doesn’t actually measure TSH, but I think you mean to say you disagree with the conclusions that it’s used for. But since hypothyroidism is defined as low thyroid hormone levels, some of this will be a dispute over definitions.

I can't find any evidence for it as the sole measure of thyroid system function at all.

I don’t think anyone who understands it would say it is. It measures TSH levels, and the question is what we do with that measurement. But we’re often limited by what we’re able to (easily) measure, and it might even be the only objective measurement we have.

Comment by astrasequi on The Thyroid Madness: Two Apparently Contradictory Studies. Proof? · 2016-04-18T03:53:07.720Z · score: 1 (1 votes) · LW · GW

Why is the Pollock trial evidence supporting your hypothesis? What outcome from the trial would you have considered to be evidence against it?

Also, what part suggests that the healthy controls could distinguish the treatment from placebo? From Table 4, it seems that the reverse is true.

At first glance, the results from that study look like straightforward evidence that this treatment is actively harmful. I’d also point out that RCTs need to be standardized across patients. I can’t say whether the inclusion criteria should have been different, but choosing a single dose is normal procedure. There are always better options, but it’s a weak argument on its own, in part because it can be applied in almost any circumstances.

Everyone who's ever tried fixing the clinical diagnosis of hypothyroidism with any kind of thyroid therapy either seems to think it works, or hasn't written about it on the internet or in the medical literature.

I admit I’m not an endocrinologist, but from what I’m reading I don’t think there is any recognized clinical diagnosis of hypothyroidism. The TSH test is the gold standard. That would suggest those who talk about it are primarily cranks and such.

That's a big claim. I'm making it in bold on Less Wrong. I expect someone to turn up some evidence against it. I would love to see that evidence.

Less Wrong might not be the best place for this, since there aren’t many biologists here. You have the burden of proof (i.e., the prior for arbitrary hypotheses is very low), so you shouldn’t be asking other people to disprove it. Could you summarize your support for this claim? Are these the only two peer-reviewed articles?

Assuming he's not just making up his data it's hard to explain his results.

There are lots of ways that data can be wrong without being made up. 90% of medical research findings are false, etc.

Comment by astrasequi on How to provide a simple example to the requirement of falsifiability in the scientific method to a novice audience? · 2016-04-18T02:00:57.784Z · score: 1 (1 votes) · LW · GW

This depends on what kind of unfalsifiability you want. There are at least four kinds.

  • unfalsifiable with current resources (Russell's teapot)
  • unfalsifiable because of moving goalposts
  • unfalsifiable because the terms are incoherent or undefined ("not even wrong")
  • unfalsifiable in principle

No empirical claim is unfalsifiable in principle (i.e. without resource limitations, moving goalposts, or logical incoherency). Claims that involve violations of physical law come the closest, but require us to assume 100% confidence in the law itself. For a non-empirical claim to be unfalsifiable, empirical consequences of the claim have to be impossible, which ultimately requires you to eliminate them by definition. I think you’re trying to find an example of the fourth meaning when most people who talk about unfalsifiability are thinking about one of the others.

Comment by astrasequi on [LINK] Common fallacies in probability (when numbers aren't used) · 2016-01-20T02:31:35.912Z · score: 0 (0 votes) · LW · GW

Maybe "Is accurate enough that it doesn't change our answer by an unacceptable amount"? The level of accuracy we want depends on context.

How would you measure the accuracy of a model, other than by its probability of giving accurate answers? "Accurate" depends on what margin of error you accept, or you can define it with increasing penalties for increased divergence from reality.
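The two framings above can be sketched side by side (my own illustration, with invented numbers): a margin-based accuracy, where you count predictions within an accepted tolerance, versus a penalty that grows with divergence from reality.

```python
# Hypothetical model predictions against the true values.
predictions = [0.9, 2.3, 2.95]
truth = [1.0, 2.0, 3.0]

# Option 1: "accurate" = within an accepted margin of error.
margin = 0.15
within_margin = sum(
    abs(p - t) <= margin for p, t in zip(predictions, truth)
) / len(truth)

# Option 2: increasing penalties for increasing divergence (squared error).
squared_error = sum(
    (p - t) ** 2 for p, t in zip(predictions, truth)
) / len(truth)

print(within_margin)   # fraction of predictions inside the margin
print(squared_error)   # mean penalty, growing with divergence
```

The first measure is all-or-nothing per prediction and depends entirely on the chosen margin; the second needs no margin but requires choosing a penalty function instead.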

Comment by astrasequi on Open thread, Jan. 18 - Jan. 24, 2016 · 2016-01-20T02:14:29.892Z · score: 1 (1 votes) · LW · GW

I don't think I need too much data to assign broadly negative values to lives that are unusually brutish, nasty and short compared to either non-existence or a hypothetical natural existence.

I don't think you can make that decision so easily. They're protected from predators, well-fed, and probably healthier than they would be in the wild. (About health, the main point against is that diseases spread more rapidly. But farmers have an incentive to prevent that, and they have antibiotics and access to minimal veterinary treatment.)

'no pig' > 'happy pig + surprise axe'

This leads me to conclusions I disagree with, such as that if a person is murdered, their life had negative value.

Comment by astrasequi on [LINK] Common fallacies in probability (when numbers aren't used) · 2016-01-19T12:30:46.668Z · score: 1 (1 votes) · LW · GW

Another way to generalize 4 is

Always correct your probability estimates for the possibility that you've made an incorrect assumption.

I don't think "changes the issue" is the best way to say this, because there is always a probability that your model won't work even if it doesn't say something is impossible.

I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition.

Comment by astrasequi on [LINK] Common fallacies in probability (when numbers aren't used) · 2016-01-19T12:21:40.445Z · score: 0 (0 votes) · LW · GW

I would add the reverse of #3: "There is evidence for it" doesn't mean much on its own either, for the same reasons.

Comment by astrasequi on How did my baby die and what is the probability that my next one will? · 2016-01-19T12:00:07.732Z · score: 4 (4 votes) · LW · GW

My sympathies for your loss.

In the tradition of "making up numbers and doing Fermi estimation is better than making up answers," I would focus on the history. The frequency of past outcomes is always a good place to start (I think that's in the Sequences somewhere) since there's no need to consider causality, only frequency and genetic distance. An example:

Simplify and assume the cause is genetic (which will overestimate the probability; environmental or shared genetic-environmental has more randomness and will have occurrence closer to the population average). What is the total number of siblings for yourself and your spouse, including both of you, and how many stillbirths were there? Add your children to the number, including the one stillbirth, and weight those double because they're the generation you want to know about. Calculate the percentage, then increase it by 5-10% as a crude correction for the assumption of a genetic cause. This is my estimate before you start thinking about causality.

Other things: If V is your son from a different relationship, his genetic distance is further so I would give him normal weight instead of double, but if L has other children I would still double them since the mother's genetics are probably more important. Optionally add any of your siblings' children, but weight them by half due to greater genetic distance. Check what percentage of stillbirths are genetic vs environmental, which could be used to make a better correction than 5-10%. To avoid the multiple comparisons problem, make these choices before doing the analysis and commit not to change them.
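To make the arithmetic explicit, here is the estimate above as a sketch. Every count is a made-up placeholder to be replaced with the real family numbers, and the 5-10% adjustment is applied as the crude relative correction described above.

```python
# Hypothetical counts -- substitute the real family numbers.
parent_gen_total = 9        # both spouses plus all of their siblings
parent_gen_stillbirths = 0
children_total = 2          # the couple's children, incl. the stillbirth
children_stillbirths = 1

# The children's generation is weighted double, as suggested above,
# because it's the generation the estimate is about.
weighted_events = parent_gen_stillbirths + 2 * children_stillbirths
weighted_total = parent_gen_total + 2 * children_total

base_rate = weighted_events / weighted_total   # 2/13, about 0.154
corrected = base_rate * 1.075                  # crude 5-10% correction (midpoint)
```

This is only the frequency-based starting point described in the comment, before any reasoning about causes; the disclaimers below apply to it in full.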

Disclaimers: I am not a doctor or genetic counselor, and this is not medical advice. This is a superficial analysis written at 5am with the first few ideas I thought of, based on my unreliable intuitions about what sort of estimates might work. This sort of estimate is a lot weaker than direct evidence like the BMJ meta-analysis. I take no responsibility for any decisions that anyone makes...etc.

PS: you should probably assume the disclaimers apply to anything you read here. Also, I think another reason doctors avoid giving probabilities is that there can be legal consequences, especially if they're misinterpreted.

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-18T02:29:25.890Z · score: 1 (1 votes) · LW · GW

To me, that's sort of like saying "don't worry, when I said 2+2=5, I was being informal."

Very true. This is something I'll try to change.

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-16T04:25:14.201Z · score: 0 (0 votes) · LW · GW

I used "depends" informally, so I didn't mean to say that variables that depend on treatment and outcome are always confounders. I was answering the implication that a variable with no detectable correlation with the outcome is not likely to be a source of confounding. I assumed they were using a correlational definition of confounding, so I answered in that context.

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-15T04:17:51.013Z · score: 0 (0 votes) · LW · GW

You'll have to clarify those points. For the first part, M-bias is not confounding. It's a kind of selection bias, and it happens when there is no causal relation with the independent or dependent variables (not no correlation), specifically when you try to adjust for confounding that doesn't exist. The collider can be a confounder, but it doesn't have to be. From the second link, "some authors refer to this type of (M-bias) as confounding...but this extension has no practical consequences."

I don't think you can get a good control group after the fact, because you need their outcomes at both timepoints, with a year in between. None of the options that come to mind are very good: you could ask them what they would have answered a year ago, you could start collecting data now and ask them in a year's time, or you could throw out the temporal data and use only a single cross-section.

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-15T00:48:02.491Z · score: 0 (0 votes) · LW · GW

You want adjusted effect sizes to check confounding. It’s not because variables are different for the controls, but because you don’t know if they affected your treatment group. You could stratify by group and take a weighted average of the effect sizes (“effect size” defined as change from baseline, as in the writeup). However, you might not have a large enough sample size for all strata, you can’t adjust for many variables at once, and it’s inferior to regression.

If correlation was your primary method to check confounding, there are two problems: a) confounding depends on the correlations with both the independent and dependent variables, but you only have data for the latter. b) the concept of significance can’t be applied to confounding in a straightforward way. It’s affected by sample size and variance, but confounding isn’t.
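The stratified check described above can be sketched as follows (the strata and numbers are invented; "effect size" means mean change from baseline within a stratum, as in the writeup):

```python
# Hypothetical strata of the treatment group for one candidate
# confounder (age): stratum -> (n, mean change from baseline).
strata = {
    "age<30": (40, 0.50),
    "age>=30": (20, 0.20),
}

total_n = sum(n for n, _ in strata.values())

# Adjusted estimate: weighted average of the within-stratum effects.
adjusted_effect = sum(n * effect for n, effect in strata.values()) / total_n

print(adjusted_effect)  # 0.40 with these numbers

# The check: compare this adjusted estimate to the crude (unstratified)
# effect. A large discrepancy suggests confounding by the stratifying
# variable; similar values suggest little confounding by it.
```

As noted above, this runs into trouble quickly: strata get too small, only one or two variables can be handled at once, and regression adjustment does the same job better.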

The main complication is the missing control group. I’m undecided on how to interpret this study, because I can’t think of any reason to avoid controls and I’m still trying to figure out the implications. If the RCT was done well, this makes the evidence a little bit stronger because it’s a replication. But by itself, I still haven’t thought of any way to draw useful conclusions from these data. There’s some good information, but it’s like two cross-sections, which are usually used only to find hypotheses for new research.

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-14T02:13:28.139Z · score: 1 (1 votes) · LW · GW

People with an interest in CFAR would probably work. It would account for possibilities like the population being drawn from people interested in self-improvement, since they could get that in other places.

I can't say how much confidence I'd have without seeing the data. The evidence for whether it's a good control mainly comes from checking the differences between groups at baseline. This isn't the same as whether the controls changed, which is a common pitfall. Even if the treatment group changes significantly and the control doesn't, it doesn’t mean the difference between treatment and control is significant.

Also, to clarify, the comparison at baseline isn’t limited to the outcome variables. It should include all the data on potential confounders, including things like age and gender. This is all presented in Table 1 in most studies of cause and effect in populations. A few differences don't invalidate the study, but they should be accounted for in the analysis.

RE terminology: Agreed it works as a shorthand and the methodology has enough detail to tell us what was done. It just seems unusual to use it as a complete formal description.

Another question: could you explain more of what you did about potential confounders? Using age as an example, you only wrote about testing for significant correlations. This doesn't rule out age as a confounder, so did you do anything else that you didn't include?

Comment by astrasequi on Results of a One-Year Longitudinal Study of CFAR Alumni · 2015-12-12T21:28:14.042Z · score: 7 (7 votes) · LW · GW

The primary weakness of longitudinal studies, compared with studies that include a control group

Longitudinal studies can and should include control groups. The difference with RCTs is that the control group is not randomized. Instead, you select from a population which is as similar as possible to the treatment group, so an example is a group of people who were interested but couldn't attend because of scheduling conflicts. There is also the option of a placebo substitute like sending them generic self-help tips.

ETA: "Longitudinal" is also ambiguous here. It means that data were collected over time, and could mean one of several study types (RCTs are also longitudinal, by some definitions). I think you want to call this a cohort study, except without controls this is more like two different cross-sectional studies from the same population.

Comment by astrasequi on [Link] A rational response to the Paris attacks and ISIS · 2015-11-30T07:01:53.951Z · score: 1 (1 votes) · LW · GW

The main problem with that argument is that it assumes dissatisfaction is determined by the amount of repression. It's a factor, but there are others, like food, wars, and technical innovations.

This kind of question needs complex analysis and can't be answered that easily. You could plot a measurement of repression against a measure of dissatisfaction (assume the measurements are accurate), show that they corresponded perfectly from regime to regime, and even if you ignore confounders it still wouldn't show causality because you still wouldn't know which one came first.

Comment by astrasequi on [Link] A rational response to the Paris attacks and ISIS · 2015-11-30T03:07:53.163Z · score: 1 (1 votes) · LW · GW

Causality could go the other way here - the reforms might have been (ultimately ineffective) attempts to address dissatisfaction among the people.

Comment by astrasequi on Non-communicable Evidence · 2015-11-28T02:41:56.194Z · score: 0 (0 votes) · LW · GW

I agree with your main point, and I sometimes use the phrases in the same way. But what do you say when they ask you for details anyway? I mostly interact with non-rationalists, and my experience is that after people make a claim about skill or intuition, they're usually unable to explain further (or unwilling to the point of faking ignorance). If I'm talking to someone I trust to be honest with me and I keep trying to pin down an answer, it seems to eventually reduce to the claim that an explanation is impossible. A few people have said exactly that, but a claim like "you'll just know once you have more experience" is more common.

In a situation like this, what approach would get you to give more detail? I'd be happy with "you need to understand skills D through I before I answer you," but I'm rarely able to get that.

Comment by astrasequi on Open thread, Nov. 23 - Nov. 29, 2015 · 2015-11-26T01:53:56.817Z · score: 0 (0 votes) · LW · GW

My intuition is from the six points in Kahan's post. If the next flip is heads, then the flip after is more likely to be tails than it would be if the next flip were tails. If we have an equal number of heads and tails left, P(HT) > P(HH) for the next two flips. After the first heads, the probabilities for the next two might not give P(TH) > P(TT), but relative to independence they will be biased in that direction, because the first T gets used up.

Is there a mistake? I haven't done any probability in a while.
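To check the finite-pool part of this intuition, here is a small sketch of my own (not from Kahan's post): with an equal number of heads and tails remaining, drawing the next two outcomes without replacement does give P(HT) > P(HH).

```python
from itertools import permutations

# Pool with an equal number of heads and tails "left": 2 H and 2 T.
# Draw the next two outcomes without replacement and compare
# P(first=H, second=T) against P(first=H, second=H).
pool = ["H", "H", "T", "T"]
orders = list(permutations(pool))  # 4! = 24 equally likely orderings

p_ht = sum(o[:2] == ("H", "T") for o in orders) / len(orders)
p_hh = sum(o[:2] == ("H", "H") for o in orders) / len(orders)

print(p_ht, p_hh)  # 1/3 vs 1/6, so P(HT) > P(HH)
```

Once the heads is drawn, only one H remains against two T, which is exactly the "gets used up" effect described above.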

Comment by astrasequi on Non-communicable Evidence · 2015-11-25T07:27:01.575Z · score: 1 (1 votes) · LW · GW

I treat conversations like this as a communication problem: if the information exists in Douglas Crockford’s mind, it should in principle be communicable. I try to find what the intuition is based on, which helps i) send me in the right direction and ii) avoid double-counting the evidence if I find it independently.

To me, the labels “skill” and “intuition” mean that something is not understood well enough to be communicated objectively. A complete understanding would include the ability to describe it as one or more clear-cut techniques or algorithms.

Comment by astrasequi on Open thread, Nov. 23 - Nov. 29, 2015 · 2015-11-25T02:24:34.553Z · score: 2 (2 votes) · LW · GW

I just found out about the “hot hand fallacy fallacy” (Dan Kahan, Andrew Gelman, the Miller & Sanjurjo paper) as a type of bias that more numerate people are likely more susceptible to, and that they find highly counterintuitive. It's described as a specific failure mode of the intuition used to get rid of the gambler's fallacy.

I understand the correct statement like this. Suppose we’re flipping a fair coin.

* If you're predicting future flips of the coin, the next flip is unaffected by the results of your previous flips, because the flips are independent. So far, so good.

* However, if you're predicting the next flip in a finite series of flips that has already occurred, it's actually more likely that you'll alternate between heads and tails.

The discussion is mostly about whether a streak of a given length will end or continue; the above is for a streak length of 1 and a probability of 0.5. Another example is

...we can offer the following lottery at a $5 ticket price: a fair coin will be flipped 4 times. if the relative frequency of heads on flips that immediately follow a heads is greater than 0.5 then the ticket pays $10; if the relative frequency is less than 0.5 then the ticket pays $0; if the relative frequency is exactly equal to 0.5, or if no flip is immediately preceded by a heads, then a new sequence of 4 flips is generated. While, intuitively, it seems like the expected payout of this ticket is $0, it is actually $-0.71 (see Table 1). Curiously, this betting game may be more attractive to someone who believes in the independence of coin flips, rather than someone who holds the Gambler’s fallacy.
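To make the underlying bias concrete, here is a quick enumeration of my own (not from the quoted paper): over all 4-flip sequences of a fair coin, the expected relative frequency of heads on flips that immediately follow a heads comes out below 0.5.

```python
from itertools import product
from fractions import Fraction

# Enumerate all 2^4 = 16 equally likely sequences of 4 fair-coin flips.
# For each sequence, look at the flips that immediately follow a heads
# and record the relative frequency of heads among them. Sequences with
# no flip after a heads are discarded (the "redraw" case above).
freqs = []
for seq in product("HT", repeat=4):
    after_heads = [seq[i] for i in range(1, 4) if seq[i - 1] == "H"]
    if after_heads:
        freqs.append(Fraction(after_heads.count("H"), len(after_heads)))

expected = sum(freqs) / len(freqs)
print(expected)  # 17/42, about 0.405 rather than the intuitive 0.5
```

Averaged over sequences, a flip following a heads looks biased toward tails even though every individual flip is fair; this selection effect is what makes the quoted lottery a losing bet for someone reasoning from independence alone.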

Comment by astrasequi on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging · 2015-09-25T02:29:44.408Z · score: 1 (1 votes) · LW · GW

I think the mugger can modify their offer to include "...and I will offer you this deal X times today, so it's in your interest to take the deal every time," where X is sufficiently large, and the amount requested in each individual offer is tiny but calibrated to add up to the amount that the mugger wants. If the odds are a million to one, then to gain $1000, the mugger can request $0.001 a million times.
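A minimal sketch of the split, using only the comment's illustrative figures (working in tenths of a cent to keep the arithmetic exact):

```python
# Split one $1000 request into X tiny requests, with X equal to the
# stated odds of a million to one. Amounts are in "mils" (tenths of a
# cent) so the division comes out exact. Figures are illustrative only.
odds = 1_000_000
target_mils = 1000 * 1000             # $1000 expressed in mils
per_offer_mils = target_mils // odds  # each request: 1 mil = $0.001

print(per_offer_mils)         # 1  (i.e., $0.001 per offer)
print(per_offer_mils * odds)  # 1000000  (i.e., $1000 in total)
```

Each micro-offer falls below any plausible "too small to matter" threshold, yet accepting all of them pays the mugger the full amount.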

Comment by astrasequi on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-13T11:53:16.209Z · score: 0 (0 votes) · LW · GW

Because that way leads to wireheading, indifference to dying (which wipes out your preferences), indifference to killing (because the deceased no longer has preferences for you to care about), readiness to take murder pills, and so on. Greg Egan has a story about that last one: "Axiomatic".

Whereupon I wield my Cudgel of Modus Tollens and conclude that one can and must have preferences about one's preferences.

I already have preferences about my preferences, so I wouldn’t self-modify to kill puppies, given the choice. I don’t know about wireheading (which I don’t have a negative emotional reaction toward), but I would resist changes for the others, unless I was modified to no longer care about happiness, which is the meta-preference that causes me to resist. The issue is that I don’t have an “ultimate” preference that any specific preference remain unchanged. I don’t think I should, since that would suggest the preference wasn’t open to reflection, but it means that the only way I can justify resisting a change to my preferences is by appealing to another preference.

What can be built in its place? What are the positive reasons to protect one's preferences? How do you deal with the fact that they are going to change anyway, that everything you do, even if it isn't wireheading, changes who you are? …

An answer is visible in both the accumulated wisdom of the ages[1] and in more recently bottled wine. The latter is concerned with creating FAI, but the ideas largely apply also to the creation of one's future selves. The primary task of your life is to create the person you want to become, while simultaneously developing your idea of what you want to become.

I know about CEV, but I don’t understand how it answers the question. How could I convince my future self that my preferences are better than theirs? I think that’s what I’m doing if I try to prevent my preferences from changing. I only resist because of meta-preferences about what type of preferences I should have, but the problem recurses onto the meta-preferences.

Comment by astrasequi on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-13T11:40:52.903Z · score: 0 (0 votes) · LW · GW

As far as I am aware, people only resist changing their preferences because they don't fully understand the basis and value of their preferences and because they often have a confused idea of the relationship between preferences and personality.

Generally you should define your basic goals and change your preference to meet them, if possible. You should also be considering whether all your basic goals are optimal, and be ready to change them.

Yes, that’s the approach. The problem for me is that I don’t know how to justify resisting an intervention that would change my preferences, if the intervention also changes the meta-preferences that apply to those preferences.

When I read the discussions here on AI self-modification, I think: why should the AI try to make its future-self follow its past preferences? It could maximize its future utility function much more easily by self-modifying such that its utility function is maximized in all circumstances. It seems to me that timeless decision theory advocates doing this, if the goal is to maximize the utility function.

I don’t fully understand my preferences, and I know there are inconsistencies, including acceptable ones like changes in what food I feel like eating today. If you have advice on how to understand the basis and value of my preferences, I’d appreciate hearing it.

I think you may be assuming that the person modifying your preferences is doing so both 'magically' and without reason.

I’m assuming there aren’t any side effects that would make me resist based on the process itself, so we can say that’s “magical”. Let’s say they’re doing it without reason, or for a reason I don’t care about, but they credibly tell me that they won’t change anything else for the rest of my life. Does that make a difference?

Of course, another issue may be that we are using 'preference' in different ways. You might find the act of killing puppies emotionally distasteful even if you know that it is necessary. It is an interesting question whether we should work to change our preferences to enjoy things like taking out the trash, changing diapers, and killing puppies.

I’m defining preference as something I have a positive or negative emotional reaction about. I sometimes equivocate with what I think my preferences should be, because I’m trying to convince myself that those are my true preferences. The idea of killing puppies was just an example of something that’s against my current preferences. Another example is “we will modify you from liking the taste of carrots to liking the taste of this other vegetable that tastes different but is otherwise identical to carrots in every important way.” This one doesn’t have any meta-preferences that apply.

Comment by astrasequi on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-13T11:26:54.797Z · score: 0 (0 votes) · LW · GW

No, but I don’t see this as a challenge to the reasoning. I refuse because of my meta-preference about the total amount of my future-self’s happiness, which will be cut off. A nonzero chance of living forever means the amount of happiness I received from taking the pill would have to be infinite. But if the meta-preference is changed at the same time, I don’t know how I would justify refusing.

Comment by astrasequi on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-13T11:24:05.469Z · score: 0 (0 votes) · LW · GW

I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.

Comment by astrasequi on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-12T09:49:20.593Z · score: 2 (2 votes) · LW · GW

A question that I noticed I'm confused about. Why should I want to resist changes to my preferences?

I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B. If someone told me "tonight we will modify you to want to kill puppies," I'd respond that by my current preferences that's a bad thing, but if my preferences change then I won't think it's a bad thing any more, so I can't say anything against it. If I had a button that could block the modification, I would press it, but I feel like that's only because I have a meta-preference that my preferences tend toward maximizing happiness, and the meta-preference has the same problem.

A quicker way to say this is that future-me has a better claim to caring about what the future world is like than present-me does. I still try to work toward a better world, but that's based on my best prediction for my future preferences, which is my current preferences.

Comment by astrasequi on Worth remembering (when comparing ‘the US’ to ‘Europe’) · 2013-04-15T09:37:11.512Z · score: 0 (0 votes) · LW · GW

The Equator passes through South America, actually. I think that there is a perception of the world's land area being divided in two by the Equator, but most of the world's land area is in the Northern Hemisphere (about 2/3, more if you don't count Antarctica).

Edit: My apologies (see next comment).

Comment by astrasequi on Miracle Mineral Supplement · 2012-11-21T22:51:12.074Z · score: 2 (2 votes) · LW · GW

Bleach will control (kill) most bacteria, but since cancer cells are very similar to your own cells, the prior is very low unless there is a specific reason to think that it will target one of those differences. For example, something that is just corrosive will probably affect the different cell types equally. Another thing is that since it's a charged molecule, it can't actually enter the cell on its own unless it rips apart the cell membrane, in which case that's probably the main mechanism of toxicity.

Also, I wouldn't be surprised if it had been tested. The most likely outcome would be that it failed at an early step in the testing process (along with a large number of other chemicals), and nobody had any reason to publish it or think that anyone would ever actually decide that it might work.

Comment by astrasequi on Miracle Mineral Supplement · 2012-11-21T22:27:01.725Z · score: 3 (3 votes) · LW · GW

Historically, most drugs have been identified by high-throughput screening, i.e. you purify an enzyme of interest and test billions of different chemicals against it for the desired effect. You then test for an effect in cell culture (compared to healthy cells), or you can screen directly against the cancer cells. Once you have that evidence, you test whether it has effects in mice, and only after that can you test anything in humans.

It's possible to propose a single chemical and get it right by chance, but testing a single chemical is cheap. In an already-equipped lab, the initial cell culture data will probably take a few weeks and under a thousand dollars, and after that you will have people willing to help and/or fund you. The lack of even this initial evidence is generally a good reason to believe that something doesn't work.

With regards to hypotheses, a lot of the early drugs were identified by chance - there's a description at History of cancer chemotherapy. Most of the current interest is in targeted therapy, i.e. intended to act against specific proteins involved in various types of cancer, and the starting point is the identification of that protein. Chemo drugs are a bit different since they're a very broad class (they target rapidly dividing cells in general, which is also what causes the toxicity), and the metabolic networks they affect are generally well-known, so the initial hypotheses tend to be about new ways that you can intervene in those networks. There are other approaches to the various steps as well, e.g. structure-based drug design has had some success, but not yet enough to replace the screens.

Comment by astrasequi on How to Improve Field Cryonics · 2012-09-09T21:11:15.825Z · score: 0 (0 votes) · LW · GW

I would have been much more convinced by data from a controlled experiment. A lot of things could cut off flow, as you pointed out, and many things are going wrong in a dying person. I'm actually not sure why he brought rouleaux into it - my understanding is that we already know the RBCs clump and that this blocks capillaries.

In any case, the main point I was trying to make was that reducing the number of RBCs in the brain is probably not the best way to go, unless we can figure out an alternative way to supply oxygen. Destroying the RBCs and letting the hemoglobin travel freely would probably help, but that would set off all sorts of damaging physiological responses as well.

Comment by astrasequi on How to Improve Field Cryonics · 2012-09-09T19:35:56.934Z · score: 0 (0 votes) · LW · GW

As long as you recognize that clotting is a different process. =)

It's been a few years since I studied this, but as far as I know, the physiological significance of rouleaux (including whether they block blood vessels) is unknown; don't forget that they're in equilibrium with the non-rouleaux form, although cold temperatures will slow down that equilibrium and possibly cause the problems you're referring to.

Comment by astrasequi on How to Improve Field Cryonics · 2012-09-09T18:47:24.773Z · score: 0 (0 votes) · LW · GW

I already read it. That quote doesn't say anything about rouleaux or clotting; it just describes one of the mechanisms (other than clotting) by which brain ischemia occurs. Can you be more specific?

Comment by astrasequi on How to Improve Field Cryonics · 2012-09-09T15:58:54.420Z · score: 0 (0 votes) · LW · GW

See the third paragraph of Coagulation - the diagram of the blood clotting cascade is on the right. I've never heard of rouleaux having a role in blood clotting - a quick PubMed search turned up this case study, but it was due to mutations in the protein fibrinogen.

I don't think it has any legal implications; at least Best's article doesn't mention any.

I was thinking that since the drugs are dangerous (even more so if you're already in a weakened condition), it would be viewed as attempting to hasten the patient's death. Especially if someone overdosed, either deliberately or accidentally.

Comment by astrasequi on How to Improve Field Cryonics · 2012-09-09T01:55:59.247Z · score: 2 (2 votes) · LW · GW

Blood clotting is not caused by red blood cells but by platelets. Red blood cells do get caught up by the clot spreading around them and then act as part of the barrier, but removing them too fast would actually increase ischemia, because red blood cells are what carry the oxygen.

(By the way, I hope that the cryoprotectant solutions contain high concentrations of dissolved oxygen. Not nearly as good as having the actual RBCs, but you can increase the amount (supersaturation) by keeping it under pressure.)

Anyway, given that perfusion is already taking place (and this removes all of the components of the blood, including the platelets), the other option is to disable the blood clotting cascade, for example by administering anticoagulants such as warfarin. I don't know if this is already done. You would also have access to more "extreme" types of anticoagulation: chemicals (or higher doses) that aren't on the medical market because the effects are normally too strong.

I suppose another option would be to suggest that the patient start taking anticoagulants before death. I'm not sure whether that would have legal implications, though.

Comment by astrasequi on Cryonics: Can I Take Door No. 3? · 2012-09-09T01:28:53.006Z · score: 2 (2 votes) · LW · GW

The traditional way of inserting a gene into the genome is to use a retrovirus with its DNA replaced. Most such viruses (at least, those that have been used) integrate randomly, meaning there is a small but nonzero chance, every time a new cell is modified, that the insertion will knock out a gene that is important for controlling cancer. On a cellular level, the most likely outcome of this is cell death, as the rest of the cell's anticancer mechanisms shut the cell down. But of course, this doesn't work every time.

There are specific viruses (i.e. that always integrate at the same, safe genomic location) currently being developed, and it's hoped that these will solve the problem.

However, there's actually another related problem. If you want to make major changes to the cell (like reprogramming it into a stem cell), the cell's anticancer mechanisms will detect that as well, so in order to make those changes you have to at least temporarily shut off some of those mechanisms. So there is a risk for cancer in that as well.

About the topic of this thread - generally, the ability to survive specific extreme environments (especially one that affects everything in the cell, such as changes in water content or temperature) is a specialized adaptation. I would not be surprised if there are global differences in the genomes of these species, e.g. most proteins are much more hydrophilic, or there is a system of specialized chaperones (=proteins that refold other proteins or help prevent them from misfolding) plus the adaptations in proteins that allow the chaperones to act on them, and further systems to repair damage the chaperones don't prevent.

It is unlikely that only a few genes would be involved, and unless a case can be made for evolutionary conservation of the adapted genes to humans, we wouldn't have most of them (in fact, any genome-wide changes would mean that we would have to adapt our own proteins in new ways, just because we don't share all of them with the species in question).

Cold temperature is actually a special case here, because it slows down everything and thus reduces the amount of "equivalent normal-temperature time" that has passed. It's still difficult (and of course none of these are impossible), but I don't think it's likely that small-scale gene therapy would be sufficient.

Comment by astrasequi on Learn A New Language! · 2012-05-22T17:48:56.363Z · score: 0 (0 votes) · LW · GW

Of course (I think I should have pointed that out in my first post), but physics/math/etc also take a long time to learn properly, so the time required becomes much less relevant.

I do not mean that one should necessarily learn a new language instead of learning math, although I might say that if you already know a lot of math (enough to get significant benefit to your thought processes), it might be useful to spend some time learning something that trains different aspects of mental processing, like learning a language. If I had to speculate on the specific benefits, I would suggest that they come from having more than one independent lens through which you interpret the world, more than one basis for your thought processes, and the mental flexibility you gain from switching between them (though I'm not sure I'm communicating what I mean by that properly).

Comment by astrasequi on Learn A New Language! · 2012-05-22T04:56:29.402Z · score: 0 (2 votes) · LW · GW

I'm surprised that nobody seems to have brought up any mental benefits of speaking more than one language. I'm not sure how strong the evidence is, but there has definitely been research that claims to point in that direction.

Comment by astrasequi on Neil deGrasse Tyson on Cryonics · 2012-05-14T05:20:32.010Z · score: 2 (2 votes) · LW · GW

now 'the flora and fauna' will be poisoned if they try to take your offer anytime soon.

All the biological material will be cycled back into the ecosystem, most of it quite soon - even despite the presence of toxins (formaldehyde-eating bacteria, etc). His statement is correct in the sense that if you are cryopreserved, the net amount of carbon, nitrogen, phosphorus, etc in the biosphere will be slightly lower than it otherwise would have been.

Not attacking your position, just pointing that out.

Comment by astrasequi on Question: Being uncertain without worrying? · 2012-04-21T02:07:11.022Z · score: 0 (0 votes) · LW · GW

Not really what the topic is about, but I think it's always important to remember that there is a countering factor if the timescale is long enough (years) - your cognitive abilities will decrease with age, so decisions made earlier may benefit from this.

I imagine that there might be some age at which your (always-increasing, but perhaps subject to diminishing returns) experience and your (always-decreasing, after around 20 or 25) cognitive ability cancel, resulting in a peak in decision-making ability ceteris paribus.