Open thread, Dec. 14 - Dec. 20, 2015
post by MrMind · 2015-12-14T08:09:14.535Z · LW · GW · Legacy · 90 comments
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
90 comments
Comments sorted by top scores.
comment by Panorama · 2015-12-16T23:23:55.255Z · LW(p) · GW(p)
Replies from: Manfred, ChristianKl
The aim of the game is simple: try to guess how correlated the two variables in a scatter plot are. The closer your guess is to the true correlation, the better.
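A minimal sketch of how such plots can be generated for self-calibration (Python; the function name and the target correlations are illustrative, not taken from the game):

```python
import numpy as np

def correlated_pair(r, n=100, rng=None):
    """Generate n (x, y) points whose population correlation is r."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    y = r * x + np.sqrt(1 - r**2) * noise  # mix signal with independent noise
    return x, y

rng = np.random.default_rng(0)
for target in (0.1, 0.5, 0.9):
    x, y = correlated_pair(target, rng=rng)
    # The sample correlation scatters around the target -- part of what
    # makes the guessing game hard at small n.
    print(f"target {target:.1f}  sample {np.corrcoef(x, y)[0, 1]:+.2f}")
```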
comment by mako yass (MakoYass) · 2015-12-18T08:17:02.741Z · LW(p) · GW(p)
I was in the programming channel of the lesswrong slack this morning (it's a group chat web thing, all are welcome to ask for an invite if you'd like to chat with rationalists in a place that is not the archaic, transient mess that is IRC (though irc.freenode.net/#lesswrong is not so terrible a place to hang out either, if you're into that)), and a member expressed difficulty maintaining their interest in programming as a means to the end of earning to give. I've heard it said more than once that you can't teach passion, but I'd always taken that as the empty sputtering of those who simply do not know what passion is or what inspires it, so I decided, since we two have overlapping aesthetics and aspirations, that I would try to articulate my own passion for programming. Maybe it would transfer.
Here's what I wrote, more or less
So, the problem that most philosophers in academia trip over, get impaled on, and worship for the rest of their careers, is that they're using great lumbering conceptual frameworks that they do not and cannot ever understand, that is, natural language and common-sense reasoning, evolved as they were by a blind, flawed process that has never embarked to write any apologia or documentation for its subjects. In programming, any ill-defined abstract concept is far more obvious, widely acknowledged, and mitigable. The act of programming is essentially the act of taking conceptual chimeras (the requirements) apart and reforming them into well-defined, practically computable processes. Debugging is the process of figuring out how you failed to articulate a concept properly, and identifying the problem in it. For an analytic philosopher, programming is not just an occupation, it's an opportunity to get paid to do good work, with the side effect of examining the nature of concept and thought and developing one's sense for details and definition.
Aside from that, I've always felt like programming is a very progressive process. In theory, once a FOSS dev abstracts a concept properly, everyone else in the world can then build on top of that concept. No other field can progress as quickly and concretely as programming. (In practice, though, that is false. The Tower of Babel is collapsed and rebuilt every decade. Still, I've found it to be a helpful delusion where passion's concerned.)
It was well received. Maybe it was enough? I don't know. But I think more should be written on the relationship between the act of programming and the analysis of concepts. Every time I meet a programmer who clearly has enough talent to... let's say... put together sanely architected patches to the source code of our culture... but who instead recedes into their work and never devotes any time to analytic philosophy, it breaks my heart a little.
Replies from: None
↑ comment by [deleted] · 2015-12-20T01:14:44.630Z · LW(p) · GW(p)
I've heard it said more than once that you can't teach passion, but I'd always taken that as the empty sputtering of those who simply do not know what passion is or what inspires it.
Could you elaborate on this? You sound very certain for someone whom I wouldn't expect to have much background on the subject.
Replies from: MakoYass
↑ comment by mako yass (MakoYass) · 2015-12-20T22:50:21.434Z · LW(p) · GW(p)
=/ The conveyance of passion is not an esoteric subject. Anyone who's spent a significant portion of their life as a student will have seen it happen, on and off. We might be talking about different things, of course. I'm only talking about passion the spark, which is liable to fizzle out if it's not immediately and actively fed, whereas I'd expect more extensive investigations into passion to focus on passion the blaze, a phenomenon with greater measurable impact: a passion well enough established to spread itself over new resources and keep feeding itself. (Although with programming there's less of a difference between the two, since there's an abundance of resources.)
Aside from that, my prior for the probability of a complex of human thought being impossible to transmit from one mind to another is just extremely low. IME when a person who is not a poet, or a writer, or a rationalist or an artist says that a thought can't be communicated or explained, that's coming from a place of ignorance. People who are not rationalists rarely ever properly explain anything, nor do they usually require proper explanations. People who are not poets, who do not read poetry, have no sense of the limits of what can be expressed. When one of these people says that something can't be expressed, they are bullshitting. They do not know. They could not possibly know.
Replies from: joshux
comment by Daniel_Burfoot · 2015-12-14T22:47:08.468Z · LW(p) · GW(p)
Why haven't the good people at GiveWell written more about anti-aging research?
According to GiveWell, the AMF can save a life for $3.4e3. Let's say it's a young life with 5e1 years to live. A year is 3.1e7 seconds, so saving a life gives humanity 1.5e9 seconds, or about 5e5 sec/$.
Suppose you could invest $1e6 in medical research to buy a 50-second increase in global life expectancy. Approximating global population as 1e10, this buys humanity 5e11 seconds, or about the same value of 5e5 sec/$.
Buying a 50-second increase in life expectancy for a megabuck seems very doable. In practice, any particular medical innovation wouldn't give 50 seconds to everyone, but instead would give a larger chunk of time (say, a week) to a smaller number of people suffering from a specific condition. But the math could work out the same.
Of course, it could turn out that the cost of extending humanity's aggregate lifespan with medical research is much more than $5e5/sec. But it could also turn out to be much cheaper than that.
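A minimal sketch of the arithmetic above (Python; every input is the rough figure from this comment, not a GiveWell estimate):

```python
SECONDS_PER_YEAR = 3.1e7

# AMF route: one life saved for ~$3.4e3, with ~5e1 years left to live.
amf_sec_per_dollar = 5e1 * SECONDS_PER_YEAR / 3.4e3   # ~4.6e5 sec/$

# Research route: $1e6 buys +50 seconds of life expectancy for 1e10 people.
research_sec_per_dollar = 50 * 1e10 / 1e6             # 5.0e5 sec/$

print(f"AMF:      {amf_sec_per_dollar:.1e} sec/$")
print(f"Research: {research_sec_per_dollar:.1e} sec/$")
```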
ETA: GiveWell has in fact done a lot of research on this theme, thanks to ChristianKl for pointing this out below.
Replies from: ChristianKl, Fluttershy, mwengler, Soothsilver
↑ comment by ChristianKl · 2015-12-15T01:05:11.330Z · LW(p) · GW(p)
For AMF it's a lot easier to estimate the effect than it is for anti-aging research. GiveWell purposefully started with a focus on interventions for which they can study the effect.
GiveWell writes:
Medical research: As of November 2011, we are just beginning to consider the cause of medical research. Conceptually, we find this cause promising because it is possible that a relatively small amount spent on research and development could result in new disease-fighting technology that could be used to save and improve many lives throughout the world. However, we do not yet have a good sense of whether this cause has a strong track record of turning charitable dollars into lives saved and improved.
You'll find a bit of data gathering at http://www.givewell.org/node/1339
More recently, GiveWell Labs, which was later renamed the Open Philanthropy Project, has put more emphasis in that direction.
Articles that were written are:
http://blog.givewell.org/2013/12/26/scientific-research-funding/
Why explore scientific research? We expect it to be a difficult and long-term project to gain competence in scientific research funding.
http://blog.givewell.org/2014/01/07/exploring-life-sciences-funding/
“What are the best opportunities for funders aiming to contribute to progress in life sciences (i.e., biology and medicine)?” This post lays out what we’ve done to date and how we plan to move forward.
http://blog.givewell.org/2014/01/15/returns-to-life-sciences-funding/
GiveWell Labs managed to get Steve Goodman and John Ioannidis matched up with the Laura and John Arnold Foundation, to the tune of $6 million.
Meta-research doesn't sound as sexy as anti-aging research, but if we want to have good anti-aging research we need a good basis in biology as a whole.
Anti-aging research is a catch-phrase, and it makes sense that it's decently funded, but on its own it won't work. Biology as a whole needs to progress, and chasing after shiny anti-aging targets might not always be the most effective use of money. Do you have a reason to think it makes more sense to speak about anti-aging research than about life-science research?
Buying a 50-second increase in life expectancy for a megabuck seems very doable.
Please do a Fermi estimation of how you arrive at that conclusion.
↑ comment by Fluttershy · 2015-12-15T02:08:56.668Z · LW(p) · GW(p)
Ooh, I know! So, Holden is aware of SENS. However, by default, GiveWell doesn't publish any info on charities it looks at and decides not to recommend, unless they ask GiveWell to. This is to encourage other charities to go through GiveWell's recommendation process--it keeps GiveWell from lowering a charity's reputation by evaluating them.
Anyways, GiveWell did some sort of surface-level look at SENS a while back, and didn't recommend them. I think the only way to get more info about this would be to email Aubrey about his interaction with GiveWell.
↑ comment by mwengler · 2015-12-15T15:16:10.973Z · LW(p) · GW(p)
When doing the calculations, be sure to QA your LYs: spending an extra week lying doped up and in pain in a hospital bed may not be worth all that much. Also, with medical research you often wind up with a patented drug which then costs $1e5 per patient treated, at least for the first decade or two of its use, and at least as used in the USA and other non-single-payer countries. Or it requires $1e5 of medical-professional intervention per patient to implement. My priors are that the low-hanging fruit is not in turning 90-year-olds into 91-year-olds, and won't be for many decades.
↑ comment by Soothsilver · 2015-12-14T23:57:15.913Z · LW(p) · GW(p)
I think their argument was that they don't support Pascal's Mugging and they don't see any proof of medical research within reach that could end aging with a significant probability.
EDIT: ...and I should have read the comment in more detail. You are talking about stuff such as donating to curing diseases. I think they just didn't assign analysts to this yet. I guess it's hard to measure scientific progress.
comment by Lumifer · 2015-12-15T18:33:10.769Z · LW(p) · GW(p)
An interesting blog post which argues that in medical studies the great majority of improvement in non-intervention arms that is attributed to the placebo effect actually comes from regression to the mean.
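A minimal simulation of that claimed mechanism (Python; the pain scale, enrollment threshold, and noise levels are invented solely to illustrate the selection effect):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Each person's chronic pain fluctuates around a stable personal mean.
baseline = rng.normal(5.0, 1.0, n)                      # long-run mean pain (0-10)
pain_at_enrollment = baseline + rng.normal(0, 1.5, n)   # today's noisy reading

# People join a trial when their pain is unusually bad.
enrolled = pain_at_enrollment > 7.0

# Follow-up is just another noisy draw around the same mean -- no treatment.
pain_at_followup = baseline[enrolled] + rng.normal(0, 1.5, enrolled.sum())

print("mean pain at enrollment:", round(pain_at_enrollment[enrolled].mean(), 2))
print("mean pain at follow-up: ", round(pain_at_followup.mean(), 2))
# Follow-up pain is markedly lower despite zero treatment effect: pure
# regression to the mean from selecting people on a bad day.
```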
Replies from: IlyaShpitser, Vaniver, hyporational, James_Miller, Douglas_Knight
↑ comment by IlyaShpitser · 2015-12-16T18:35:15.287Z · LW(p) · GW(p)
To really test this, we should see if placebo is much smaller in studies where this can't happen (certain chronic diseases for example).
Replies from: Lumifer
↑ comment by Lumifer · 2015-12-16T19:06:30.564Z · LW(p) · GW(p)
The issue is distinguishing placebo (defined as a psychosomatic effect) from "natural healing", and I suspect it will not be easy -- in diseases where psychosomatic placebo "can't happen", can natural healing happen?
Replies from: Manfred
↑ comment by Manfred · 2015-12-16T19:37:06.082Z · LW(p) · GW(p)
Pretty sure Ilya suggested the reverse - diseases where natural healing doesn't happen, but the placebo effect is possible.
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2015-12-16T19:45:48.760Z · LW(p) · GW(p)
The question is whether those coherently exist.
If the placebo works for the disease, humans in their natural environment might do something they believe will cure the disease, and thus you have natural healing.
↑ comment by Lumifer · 2015-12-16T20:15:51.777Z · LW(p) · GW(p)
Same objection: do such exist? Can you give any examples?
The problem is that the difference between (psychosomatic) placebo and natural healing is just the involvement of the mind. If no natural healing is possible, what kind of magic is the mind doing?
It's easier to exclude placebo -- e.g. if the patient is in a long-term coma, no placebo effects seem to be possible.
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2015-12-16T21:00:36.527Z · LW(p) · GW(p)
Physical injury, chronic disease.
I meant placebo as baseline effect (from all sources, psychosomatic or statistical), and the falsifiable prediction is it should drastically decrease in situations where regression to the mean should not happen.
Not clear why psychosomatic effects happen, may work in coma. Very clear why regression to the mean happens, well understood issue in sampling from a distribution. So: easier to exclude well-understood thing.
Actually, you can view this as a causal issue, the blog post is really about a type of selection bias, or "confounding by health status."
edit: Lumifer, this is curious. I mentioned chronic disease in my original response. Do you ... parse what people write before you respond?
Replies from: Vaniver
↑ comment by Vaniver · 2015-12-16T21:52:17.729Z · LW(p) · GW(p)
I meant placebo as baseline effect (from all sources, psychosomatic or statistical), and the falsifiable prediction is it should drastically decrease in situations where regression to the mean should not happen.
I think the core point of that article (and one I agree with) is that if we want to attribute the 'placebo effect' to medical care, we need to measure not the difference between the patient before and after placebo treatment, but the difference between the after for no treatment and the after for placebo treatment. And so it seems very useful (for determining the social benefit of medicine / homeopathy / etc.) to separate out psychosomatic effects (which are worth paying for) from statistical effects (which aren't worth paying for).
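A sketch of that comparison, extending the simulation idea above (Python; the recovery and placebo effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000
START_PAIN = 8.0

def followup_pain(psychosomatic_effect, n):
    """Follow-up pain for one arm: everyone recovers naturally by the same
    amount; the placebo arm gets an extra (assumed) psychosomatic benefit."""
    natural_recovery = 2.0   # invented effect size
    return (START_PAIN - natural_recovery - psychosomatic_effect
            + rng.normal(0, 1.0, n))

no_treatment = followup_pain(0.0, n)
placebo = followup_pain(0.5, n)      # 0.5 = assumed psychosomatic benefit

# The naive before/after number lumps natural recovery in with placebo:
print("before-vs-after 'placebo effect':", round(START_PAIN - placebo.mean(), 2))          # ~2.5
# The right comparison isolates the psychosomatic component:
print("placebo vs no-treatment:         ", round(no_treatment.mean() - placebo.mean(), 2))  # ~0.5
```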
Replies from: IlyaShpitser
↑ comment by IlyaShpitser · 2015-12-16T22:30:37.687Z · LW(p) · GW(p)
Sure, I agree. If the article is right.
↑ comment by Vaniver · 2015-12-16T18:34:49.621Z · LW(p) · GW(p)
I think this part is a bit too strong, which corrupts one of the main points of the whole post:
The other contribution comes from the get-better-anyway effect. This is a statistical artefact and it provides no benefit whatsoever to patients.
It's not called the stay-the-same-anyway effect, it's called the get-better-anyway effect. The patient who reports lower pain a week later actually is in less pain. Health isn't repeated draws from an urn: if you crack a rock one day it won't regress to the mean. It'll stay cracked. That people heal is not a statistical artefact.
That is, I agree much more with the O'Connell quote (emphasis mine):
If this finding is supported by future studies it might suggest that we can’t even claim victory through the non-specific effects of our interventions such as care, attention and placebo. People enrolled in trials for back pain may improve whatever you do. This is probably explained by the fact that patients enrol in a trial when their pain is at its worst which raises the murky spectre of regression to the mean and the beautiful phenomenon of natural recovery.
Regression to the mean plays a part, especially for chronic variable conditions like lower back pain or depression, but even there natural recovery plays a huge part (otherwise the condition would be a degenerative one).
Replies from: Lumifer
↑ comment by Lumifer · 2015-12-16T19:05:49.925Z · LW(p) · GW(p)
It's not called the stay-the-same-anyway effect, it's called the get-better-anyway effect.
I agree, but here I am (uncharacteristically :-/) inclined to the charitable reading and treat "it" in "it provides no benefit whatsoever" as referencing placebo.
I would also think of regression to the mean (in this context) as an observable manifestation of "natural recovery" and not oppose them.
Replies from: Vaniver
↑ comment by Vaniver · 2015-12-16T20:03:05.764Z · LW(p) · GW(p)
I think the structure of the paragraph is pretty clear (differentiating sentence, name A, explain A, name B, explain B, compare A and B), and the rest of the article matches my interpretation.
I would also think of regression to the mean (in this context) as an observable manifestation of "natural recovery" and not oppose them.
Yes, one could say that natural recovery is the mechanism by which regression to the mean works.
The chief thing I'm objecting to is the idea that the regression is in some way illusory or nonexistent. In the discussion of the NSLBP, for example, DC claims "none of the treatments work" when I think the result is the opposite, that "all of the treatments work." Now, DC and I agree on the right course of treatment (do nothing) for the same reason (why spend more to get the same effect as doing nothing?), but we disagree on the presentation. Instead of "treatment" vs "no treatment," both of which are equally ineffective, cast it as "natural recovery plus treatment" vs. "natural recovery alone," both of which are equally effective.
Here you might get into an object level vs. meta level debate. I argue that one should talk up doing nothing instead of talking down treatments that are no better than doing nothing, because it will be hard to convince the man on the street reasoning by post hoc ergo propter hoc that his attempts did not actually lead to recovery, but if convinced to try doing nothing then the same fallacy will, when doing nothing turns out to work, cause him to gain trust in doing nothing. One could respond that the important point is not that he get the object level question right, but that he avoid fallacious reasoning.
Replies from: Lumifer
↑ comment by Lumifer · 2015-12-16T20:14:26.634Z · LW(p) · GW(p)
cast it as "natural recovery plus treatment" vs. "natural recovery alone," both of which are equally effective
That naturally leads to the effect of treatment being zero which is conventionally called "the treatment does not work".
When you have some baseline process and some zero-effect interventions on top of it, I think it's misleading to say that all these interventions work.
I argue that one should talk up doing nothing instead of talking down treatments that are no better than doing nothing
These, of course, are not mutually exclusive. Besides, you need to do something to counteract the proponents of the no-effect treatments -- such people exist (typically they are paid for providing these treatments) and if you just ignore them they will dominate the debate.
↑ comment by hyporational · 2015-12-19T04:19:02.060Z · LW(p) · GW(p)
The placebo group is called such because it receives the placebo treatment, not because medical researchers think all improvement in it is attributable to the placebo effect. Results are reported as improvement in the treatment arm vs. the placebo arm, and never have I seen these differences explicitly reported as treatment effect vs. placebo effect, and I've read hundreds of medical papers. The real magnitude of the placebo effect is almost never of interest in these papers. Some professionals in the medical community could have such a misconception because of the usual lack of scientific training, but I'd like to think they are a small minority.
If the placebo effect is of real importance, I think a more significant problem would be the lack of use of active placebos that mimic side effects, since most drugs have them, and this is a potential source of broken blinding in RCTs.
Replies from: Lumifer
↑ comment by Lumifer · 2015-12-20T21:55:54.508Z · LW(p) · GW(p)
The placebo group is called such because it receives the placebo treatment, not because medical researchers think all improvement in it is attributable to the placebo effect.
Sure. But the question under discussion here is what actually is the placebo effect and how much of it can you attribute to psychosomatic factors and how much to just regression to the mean (aka natural healing).
You are correct in that most intervention studies don't care about the magnitude of the placebo effect, they just take the placebo arm of the trial as a baseline. But that doesn't mean that we couldn't or shouldn't ask questions about the placebo effect itself.
Replies from: hyporational
↑ comment by hyporational · 2015-12-22T11:15:03.512Z · LW(p) · GW(p)
the question under discussion here is what actually is the placebo effect and how much of it can you attribute to psychosomatic factors and how much to just regression to the mean (aka natural healing).
In that case your opener is slightly polemical :)
But that doesn't mean that we couldn't or shouldn't ask questions about the placebo effect itself.
Agreed. The problem with nonintervention arms for studying the placebo effect is that there aren't clear incentives for adding them and they cost statistical power.
↑ comment by James_Miller · 2015-12-18T17:33:08.134Z · LW(p) · GW(p)
My n=1 experiment is evidence against this. When my son was much younger and complained that some part of him was hurting (because, say, he bumped against a wall), I would put lotion on the part and say it was powerful medicine. It usually made him feel better. And I wasn't even lying, because the medicine I had in mind was the placebo effect.
Replies from: Jiro, Lumifer
↑ comment by Jiro · 2015-12-20T08:37:53.063Z · LW(p) · GW(p)
You were lying, because you were making a statement that you knew would be understood as an untruth and with the intention of it being understood as that untruth. The fact that it may be true using a definition that isn't used by the target doesn't change that.
Replies from: James_Miller, Tem42
↑ comment by James_Miller · 2015-12-21T02:27:05.606Z · LW(p) · GW(p)
Disagree. I believed that my statement would be interpreted as "this will reduce your pain." Because of my belief in the placebo effect I really thought that the lotion would reduce my son's pain.
↑ comment by Lumifer · 2015-12-18T17:59:30.621Z · LW(p) · GW(p)
You were not measuring actual improvement -- you were measuring the amount of whining/complaining.
Replies from: James_Miller
↑ comment by James_Miller · 2015-12-18T19:57:46.889Z · LW(p) · GW(p)
Which is strongly correlated with pain. A reduction in pain is an actual improvement.
Replies from: Lumifer
↑ comment by Douglas_Knight · 2015-12-29T03:02:28.594Z · LW(p) · GW(p)
Right, which is why the effect in the placebo arm is not called the placebo effect.
comment by RaelwayScot · 2015-12-14T12:42:09.971Z · LW(p) · GW(p)
Here they found dopamine to encode some superposed error signals about actual and counterfactual reward:
http://www.pnas.org/content/early/2015/11/18/1513619112.abstract
Could that be related to priors and likelihoods?
Significance
There is an abundance of circumstantial evidence (primarily work in nonhuman animal models) suggesting that dopamine transients serve as experience-dependent learning signals. This report establishes, to our knowledge, the first direct demonstration that subsecond fluctuations in dopamine concentration in the human striatum combine two distinct prediction error signals: (i) an experience-dependent reward prediction error term and (ii) a counterfactual prediction error term. These data are surprising because there is no prior evidence that fluctuations in dopamine should superpose actual and counterfactual information in humans. The observed compositional encoding of “actual” and “possible” is consistent with how one should “feel” and may be one example of how the human brain translates computations over experience to embodied states of subjective feeling.
Abstract
Replies from: IlyaShpitser
In the mammalian brain, dopamine is a critical neuromodulator whose actions underlie learning, decision-making, and behavioral control. Degeneration of dopamine neurons causes Parkinson’s disease, whereas dysregulation of dopamine signaling is believed to contribute to psychiatric conditions such as schizophrenia, addiction, and depression. Experiments in animal models suggest the hypothesis that dopamine release in human striatum encodes reward prediction errors (RPEs) (the difference between actual and expected outcomes) during ongoing decision-making. Blood oxygen level-dependent (BOLD) imaging experiments in humans support the idea that RPEs are tracked in the striatum; however, BOLD measurements cannot be used to infer the action of any one specific neurotransmitter. We monitored dopamine levels with subsecond temporal resolution in humans (n = 17) with Parkinson’s disease while they executed a sequential decision-making task. Participants placed bets and experienced monetary gains or losses. Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons.
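A toy illustration of the two error terms the paper combines (Python; the amounts and the simple additive combination are assumptions for illustration, not taken from the paper):

```python
# Reward prediction error (RPE): actual outcome minus expected outcome.
# Counterfactual prediction error (CPE): actual outcome minus what the
# unchosen alternative would have paid.

expected = 10.0          # what the participant expected to win
actual = 15.0            # what they actually won
best_alternative = 40.0  # what the unchosen bet would have paid

rpe = actual - expected          # +5: better than expected
cpe = actual - best_alternative  # -25: far worse than it could have been

# The paper reports dopamine tracking a superposition of the two terms;
# a plain sum is one hypothetical way to combine them.
print(rpe, cpe, rpe + cpe)  # 5.0 -25.0 -20.0
# Winning $15 can still "feel" bad if you could have won $40.
```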
↑ comment by IlyaShpitser · 2015-12-14T17:24:25.035Z · LW(p) · GW(p)
Interesting, thanks!
comment by Fluttershy · 2015-12-15T02:19:41.044Z · LW(p) · GW(p)
I wonder if starting a GiveWell-like organization focused on evaluating the cost-effectiveness of anti-aging research would be a more effective way to fund the most effective anti-aging research than earning-to-give. Attracting a Moskovitz-level funder would allow us to more than completely fund SENS (provisional on SENS still seeming like the best use of funds after more research was done).
Replies from: Vaniver, ChristianKl
↑ comment by Vaniver · 2015-12-15T03:01:55.439Z · LW(p) · GW(p)
The product of meta-orgs is taste. If boardgamegeek thinks that Twilight Struggle is a good game, then you, not having played it, should expect that it's likely a 'good' game. If Givewell thinks that AMF is a good charity, then you, not having looked at it yourself, should expect that it's likely a 'good' charity.
With games that many people play, a website can average together those ratings and then sort them to generate a solid taste measure. With charities that have done things in the past and have models of what they can do in the future, an organization can evaluate the things done and the models for how things would change and estimate impacts and then sort by those impacts.
But with scientific projects, this seems much more difficult, because you're extrapolating past the fringes of current knowledge. It's not "which board games that already exist are best?" but "which board game would be best, if you made it?" This is a skill that people have to various degrees--someone is designing these things, after all--but I think it's very difficult to communicate, and more importantly, for listeners to have a good sense of why they should or should not trust another person's taste.
Another way of looking at this is, with medical research, all of the cost-effectiveness is driven by whether or not the technology works. If the research is to validate or invalidate a theory, the usefulness of that theory (and thus the evidence) is determined by the technologies enabled by that theory (or the attention spared to work on other technologies that do work). But this is the thing that, by definition, we don't know yet, and is the thing that, say, SENS leadership spends its time thinking about. Do we approve this grant, or that grant?
(This comment may oversell the degree to which tech growth is about creating new knowledge / tech rather than spreading old knowledge / tech, but I think the point still gets at what you're talking about here.)
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-12-15T10:25:19.524Z · LW(p) · GW(p)
Another way of looking at this is, with medical research, all of the cost-effectiveness is driven by whether or not the technology works.
That depends on how you define "technology". Knowledge about which lifestyle choices result in healthier living has an effect, but I wouldn't call it "technology" in the narrow sense. I think there's a good chance that there's a bias of people focusing too much on trying to use technology to solve problems.
Replies from: Vaniver
↑ comment by Vaniver · 2015-12-15T14:17:42.553Z · LW(p) · GW(p)
Agreed but I think I'm more willing to call lifestyle choices, and in particular the means by which medical experts can guide the lifestyle choices of their patients, 'cultural technology' or something similar. One can know that some exercises will fix the patient's back pain, but not know how to get the patient to do those exercises. (Even if the patient is you!)
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-12-15T15:59:43.748Z · LW(p) · GW(p)
Stretching the meaning of the term technology that way is motte-and-bailey. For large parts of the medical community, "technology" refers to something you can in principle patent.
Even if you see the notion more broadly, the mental model of medical experts using cultural technology to get patients to comply isn't the only way you can think about it.
You can also practice the values of what Kant called enlightenment, where individuals engage in self-chosen actions because they can reason. With enlightenment values it becomes important to educate people about how the body works. If you think of patients as subjects who benefit from education, you have a different health system than if you think of them as objects to be forced into engaging in certain actions.
It's easy to make the moral argument that what Kant calls enlightenment is good, but it might also in practice be the paradigm that produces better health outcomes.
If you care about radical progress in medicine, then it's important to be open to different paradigms producing medical progress. Scientific paradigms are in flux, and it's important to be open to the possibility that paradigms different from how we currently do medicine might have advantages. I think ideally we'd have pluralism in medicine, with many different paradigms getting explored.
How can different paradigms lead to a different science? Take an area like the question of whether a single sperm is enough to get a woman pregnant. You will find a lot of mainstream sex advice from sources like WebMD saying that a single sperm is enough. That's likely wrong.
If you believe that the point of sex education is to get people to always use condoms, it can be helpful to teach the wrong fact that a single sperm is enough. A system focused on true education, however, would rather teach the truth. Knowing the truth in this instance isn't a "technology" that does anything specific. I don't trust biology to progress if it doesn't care for the truth and just tries to find out facts that get people to comply with what their doctor tells them.
I seriously engaged with NLP and its "change is what matters, truth of statements is secondary" ideology. NLP is actually much more honest about this, but once you accept the technology frame, you get there. I think that relationship to the truth is flawed.
↑ comment by ChristianKl · 2015-12-15T10:20:52.773Z · LW(p) · GW(p)
One of the key characteristics of research into the unknown is that you don't know the cost-effectiveness beforehand.
SENS (provisional on SENS still seeming like the best use of funds after more research was done).
What kind of research do you think could prove that claim?
The interesting thing about that claim is the assumption that effective anti-aging research is research that's branded as anti-aging. I would guess that one of the most effective investments in furthering anti-aging research was the NIH decision to give out grants to DNA sequencing companies.
Investigating SENS more closely is also an interesting proposition. Doing so will show that it's over-optimistic and driven by assumptions that are likely wrong. However, it scores high in the "clarity of vision" department that Y Combinator uses to decide which startups to fund. SENS doesn't have to be right on its core assumptions to produce useful knowledge.
Startups don't profit from highly critical outside scrutiny into how they invest their money. Critical scrutiny might harm SENS.
comment by [deleted] · 2015-12-15T01:33:26.533Z · LW(p) · GW(p)
Thoughts this week:
Effective Altruism
(1)
All I want for Christmas...is for someone from the effective altruism movement to take the prospect of using sterile-insect techniques and more advanced gene drives against the tsetse fly seriously. This might control African sleeping sickness, a neglected disease, and, more importantly, unlock what is, according to GiveWell, largely suspected to be THE keystone cause of malnutrition in Africa, through an extensive causal pathway. I feel EAs are getting too stuck on causes that were identified early in the movement and are neglecting the virtue of cause neutrality.
(2)
Isn't it time effective altruists matured to using standardised measures of impact on an individual, such as the impact on psychological distress? Then they could approximate where interventions sit on a scale of magnitude of cumulative K10 scores. It's a simple metric: you can teach NGO/aid orgs how to understand it quickly, and measures of psychological well-being are the 'net result' of individual differences in changes to health and SES.
(3)
Any thoughts on the prospective impact of a documentary about effective altruism? Looks like the best we've got are Vaughan's great speeches from Effective Altruism Global and other YouTube clips with little to no views, plus Singer's TED talk.
(4)
Kidney donation saves 14 QALYs. Deceased organ donation saves perhaps 10 people, so that's 140 QALYs. GiveWell gets a QALY for about 80 bucks, so being an organ donor is worth about 80*140 = 11,200 dollars. Upon Googling I found cryonics claimed to have a 90 percent chance of success. That sounds wildly optimistic, so I'm going to halve that, and estimate that unintended consequences will kill me (guess) 500 years into my life. So, assuming that extends from an 80-year average lifespan, I'd get 500-80 = 420 years of additional life. Maths isn't needed to suggest I'll have a donation capacity and propensity for as-effective if not more effective donation opportunities than GiveWell's in those years. So, cryonics is more altruistic for EAs than organ donation, no?
Update: the LessWrong survey says the probability is 7% at a glance. So that's around 1/15.
420/15 = 28 years. In 28 years I still imagine I'd be able to donate that amount, assuming 10%/y income donation into a trust that actualises upon my death.
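Putting the back-of-the-envelope numbers in one place (Python; every input is a rough guess from above, and the income figure is purely hypothetical):

```python
# Forgone value of deceased organ donation (which cryonics precludes):
organ_donation_value = 10 * 14 * 80       # recipients * QALYs * $/QALY = $11,200

# Expected extra life from cryonics at the survey's 7% success estimate:
expected_extra_years = 0.07 * (500 - 80)  # ~29 years beyond a normal lifespan

# Donations made during those expected extra years, at 10% of income:
income = 50_000                           # hypothetical annual income
expected_donations = 0.10 * income * expected_extra_years

print(organ_donation_value, round(expected_extra_years, 1), round(expected_donations))
# Under these guesses, cryonics wins if expected_donations > organ_donation_value.
```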
Productivity
(1)
I want to continue to streamline my workflow. Screw SMS; I'm gonna phase out to email alone, with Google Voice forwarding my SMSes to email.
Relationships
Anyone on LessWrong wanna get together ha ... ha ...? Just assume the worst traits for me, and don't ask about them. Then just evaluate my writing here as my best trait and make a choice on that ;)
Info diet
Last read with an open mind: Zero to One by Peter Thiel
Take-aways from this one include:
the importance of thinking carefully about marketshare
the value of ‘value-capture’ and thinking like a monopolist to private gains
Last listened to with an open mind: Danger and Play podcast by Mike Cernovich
Take-aways from this one include:
- ‘mindset is like a conversation; I wouldn’t be as harsh on myself as on others.’ There are takeaways in every listen, and it surprises me every time. Even now I feel my attitude towards the show hasn’t calibrated upward, so I don’t feel the need to publicly declare my interest in it in the hope of prompting a re-listen at a time when it will help. The same goes for my ‘last watched’:
Last watched with an open mind: Mark Freeman (YouTube)
- A world of insights in every video. They are not new insights when I rewatch videos, but they are sufficiently abstract and complex, and result in a high enough cognitive load, that I forget them between days of watching. Too bad it’s extremely boring to listen to them, and somewhat shame-inducing, since mental illness is a taboo topic.
Medical malpractice
Australia has the highest rate of medical error in the world, according to the World Health Organisation. Counterintuitive as it may seem, in Australia there are negligible institutional incentives to fight medical malpractice. Instead, over the past couple of years, extensive lobbying has taken place by the medical profession for changes to the law on medical negligence in Australia. Medical lobby groups have sought to have governments legislate what is known as the Bolam test - where the negligence of a doctor is determined solely on the basis of other doctors' opinions about the doctor's conduct, regardless of what judges and the courts have to say.
From the last article, some other interesting points made:
In a 1999 study reported in the Medical Journal of Australia, it was found that most health care complainants were not satisfied with either the process or the outcome. Typically they wanted stronger measures taken. Only a few wanted compensation; more wanted acknowledgement of harm done; and most wanted the doctor punished.
Following medical negligence in Australia, not every patient sues whenever something goes wrong. Most patients just want the mistake to be acknowledged, and for the doctor to apologise.
Getting a lawyer to deal with medical negligence in Australia is becoming more difficult and a common myth is that lawyers take on ANY case regardless of its merits, in order to make $$$.
The commercial reality is that lawyers only take cases on if they believe there is a good chance of winning i.e if they are meritorious claims. After all, most cases are done on a No Win No Fee basis, so if the lawyer loses, then they won't get paid; there is just no financial incentive in running a frivolous claim.
The government has almost completely done away with legal aid for medical negligence victims. If it wasn't for lawyers taking on the financial risks of running a medical negligence case, most Australian citizens would not be able to afford to pursue their rights.
Looks like a pretty tangly situation with no clear fix.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-12-15T10:35:15.706Z · LW(p) · GW(p)
Isn't it time effective altruists matured to using standardised measures of impact on an individual such as the impact on psychological distress.
EA mostly is about using statistics that are already out there.
The K10 score has questions that are strongly culturally dependent. "1. During the last 30 days, about how often did you feel tired out for no good reason?" depends heavily on what people consider to be "good reasons", which differs a lot from culture to culture. It might very well be interpreted by some people as: did you do anything that produced karma that you have to pay off by being tired?
comment by moridinamael · 2015-12-19T19:41:31.549Z · LW(p) · GW(p)
As a pampered modern person, the worst part of my life is washing dishes. (Or, rinsing dishes and loading the dish washer.) How long before I can buy a robot to automate this for me?
Replies from: MrMind, username2
↑ comment by MrMind · 2015-12-21T08:34:37.583Z · LW(p) · GW(p)
It's weird, of all the people I know who hate the whole process of getting clean dishes, you are the only one who hates loading the dish washer instead of unloading it.
Anyway, I think that loading the dish washer is sufficiently complicated (manipulating fragile objects, packing weird shapes into boxes efficiently, moving around a human environment, etc.) that only a full-fledged robotic butler could do it. I'd say more than 10 years away, with 0.9 confidence.
In the meantime, you should really not rinse the dishes before putting them in, otherwise the device will have no benefit: just remove the bigger food residues and chuck the dishes in as they are.
↑ comment by username2 · 2015-12-20T14:31:35.592Z · LW(p) · GW(p)
Where I live you don't often see dishwashers, so I always assumed they were as convenient as it could get.
Washing dishes is less bad if you don't have to wash LOADS of dishes, so you could wash them (your plates specifically) right after you're finished with your food (and negotiate with your family that everyone ought to do the same). They're also easier to wash when the food hasn't dried onto the plates. If you have kids too small to even reach the sink, that's harder.
I don't really know how dishwashers oughta work; is there some reason one can not load their plates right into it (after some initial rinsing, I presume)?
As for pots, and other stuff for which immediate cleaning is not an option... well, those are still gonna suck, especially since they are the things hardest to clean, and neither would it be clear who would have to clean them (if, as is customary, your missus is the one cooking, it is only proper that you help relieve her of some of the load). If you were always to cook just enough, you could at least clean those while they are easier to clean, but we specifically tend to make several days' worth of food in one go.
comment by ChristianKl · 2015-12-18T12:02:50.459Z · LW(p) · GW(p)
Papers with short titles get cited more often. Should we believe that the correlation is due to causal factors? Should aspiring researchers keep their titles as short as possible?
Replies from: username2, Richard_Kennaway
↑ comment by Richard_Kennaway · 2015-12-19T16:47:36.914Z · LW(p) · GW(p)
Given that all of the correlations reported in the paper are smaller in magnitude than 0.07, and when lumped by journal, smaller than 0.2, I don't think that these observations, statistically "significant" though they are, can be taken as a basis for advice on choosing a title.
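A quick way to see why such correlations are negligible in practice (Python; using the magnitudes quoted above):

```python
# A correlation's practical import is better judged by r^2: the share of
# variance in citation counts "explained" by title length.
for r in (0.07, 0.2):
    print(f"|r| = {r}: explains {r**2:.1%} of variance")
# |r| = 0.07 explains 0.5% of variance; |r| = 0.2 explains 4.0%.
# With tens of thousands of papers these pass significance tests, yet they
# are nearly useless as advice for any individual title.
```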
comment by Panorama · 2015-12-16T23:41:06.551Z · LW(p) · GW(p)
The science myths that will not die
False beliefs and wishful thinking about the human experience are common. They are hurting people — and holding back science.
comment by Panorama · 2015-12-16T23:40:00.737Z · LW(p) · GW(p)
The Strangest, Most Spectacular Bridge Collapse (And How We Got It Wrong)
Replies from: Manfred
Bridge building has been bedeviling humans for a long time, probably since the 1st century. That may explain why, even when they can't carry lots of people or things, bridges are particularly good at carrying lots of meaning: breaking, burning, going too far, going nowhere; the bridges between cultures, across generations, the ones we’ll cross when we come to them. To this day, however, the meanings of Gertie's collapse and that unforgettable footage—"among the most dramatic and widely known images in science and engineering," wrote one engineer—remain murky.
For physics teachers, the footage of Gertie has proved irresistible as a lesson in wave motion—and, specifically, a textbook example of the power of forced resonance. The image of the undulating bridge left its mark on scores of students (including me) as a demonstration of what one canonical version of the film calls “resonance vibrations." Since then, scores of books and articles, from Encyclopedia Britannica to a Harvard course website, have reported that the Tacoma Narrows was destroyed by resonance.
But it turns out it wasn't. And yet, while science has known that for years, lots of people (including me) apparently didn't get the memo.
comment by Panorama · 2015-12-16T23:13:35.089Z · LW(p) · GW(p)
Notes on the Oxford IUT workshop by Brian Conrad
Replies from: username2
Since he was asked by a variety of people for his thoughts about the workshop, Brian wrote the following summary. He hopes that a non-specialist may also learn something from these notes concerning the present situation. Forthcoming articles in Nature and Quanta on the workshop will be addressed to the general public. This writeup has the following structure:
Background
What has delayed wider understanding of the ideas?
What is Inter-universal Teichmuller Theory (IUTT = IUT)?
What happened at the conference?
Audience frustration
Concluding thoughts
Technical appendix
comment by Gunnar_Zarncke · 2015-12-14T21:44:45.269Z · LW(p) · GW(p)
This is a kind of repost of something I shared on the LW slack.
Someone mentioned that "the ability to be accurately arrogant is good". This was my reply:
One aspect of arrogance is that it is how some competent people with high self-esteem are perceived. I certainly was often perceived as arrogant. At least I got called that quite often when I was younger, and judging from some recent discussions which heavily reflected on it, I probably made that impression for most of my life. I didn't and couldn't understand why. I certainly didn't want to give other people a feeling of inferiority. But I also did nothing to diminish my competence or my self-worth. Sadly, doing nothing apparently is enough to give many other people a feeling of inferiority. And apparently a natural response is to compensate in one of many ways:
- lashing out and trying to diminish the other's self-worth by trying to make them smaller
- defensiveness and/or boasting: making self larger
- avoiding the competent person to avoid the feelings associated (being around high self-esteem competent persons is often un-fun)
I was unaware of these effects, and being an introvert meant that it caused me little pain that I was avoided. People who knew me well knew that I wasn't arrogant per se and was otherwise nice to be around, but outside my circle I had to extensively rely on my competence to get things done. I played nice - but that caused little active reciprocation - before I knew that arrogance - or signalling 'I'm smarter than you' - is a bad move. Only recently did I acquire the language and experience to really notice and understand the impression I made - and I was devastated. I don't want people to feel bad next to me. And I'm working on fixing that.
Note that other people who apparently fully understand the effects do sometimes choose differently. For example they might accept the impression they make as theirs and totally accept that they are shunned.
What do you think? Do others have this pattern?
...apparently they do: This post is about how dealing with this can fail.
See also this other post about another aspect of arrogance.
Replies from: cousin_it, username2, ChristianKl, Lumifer
↑ comment by cousin_it · 2015-12-15T13:35:07.419Z · LW(p) · GW(p)
Yes, when you imply that you're smarter than someone, you make them feel bad. And yes, many smart people don't realize that. But such behavior can also be attractive to onlookers, especially on the internet. I think Eliezer's arrogance played a big role in his popularity. Personally, I try to avoid being arrogant, but sometimes I can't help it :-)
↑ comment by username2 · 2015-12-18T12:03:58.321Z · LW(p) · GW(p)
You might have been more arrogant when you were young because you might actually have been smarter than most people around you. As people grow up, they self-select into careers that require intelligence; then most of them are no longer smarter than most of their peers, and signaling 'I'm smarter than you' becomes unfounded and starts to look silly.
Replies from: Lumifer, Gunnar_Zarncke
↑ comment by Lumifer · 2015-12-18T15:55:01.803Z · LW(p) · GW(p)
The classic example of this is when a smart kid from a middling high school finds herself at a good university. She was so used to being the smartest one around and not having to work hard to get good grades, and then... BAM! The level of effort she's used to is now clearly insufficient and there are smarter people all around her. The adjustment can be difficult.
↑ comment by Gunnar_Zarncke · 2015-12-18T17:03:37.164Z · LW(p) · GW(p)
To avoid arrogance signalling let's instead poll for it:
I think I was smarter than my class-mates in school [pollid:1078]
I think I was smarter than my co-students during university [pollid:1079]
I think I was smarter than my colleagues on the job [pollid:1080]
I have appeared arrogant in school [pollid:1081]
I have looked silly in school [pollid:1082]
I have appeared arrogant during university [pollid:1083]
I have looked silly during university [pollid:1084]
I have appeared arrogant on the job [pollid:1085]
I have looked silly on the job [pollid:1086]
For reference: My IQ [pollid:1087]
Use IQ 138 if you don't know or don't want to say. Assume present tense where applicable. Use the middle option to see results or if it doesn't apply.
↑ comment by ChristianKl · 2015-12-14T22:31:35.693Z · LW(p) · GW(p)
There are probably instances where I do come across as arrogant, but I don't think it's an automatic effect of being competent and having high self-esteem.
Valentine from CFAR would be a counter-example. He's competent and self-confident but he has the social skills that prevent it from coming across as arrogant.
↑ comment by Lumifer · 2015-12-14T22:00:48.843Z · LW(p) · GW(p)
I am, of course, an arrogant smartass :-)
I deal with this problem by being aware of it and by having the (apparently rare) ability to shut up. I also find it easy to go meta, so when I notice that the status layer of the conversation becomes tumescent and starts to dominate the subject layer, I adjust accordingly.
This doesn't work all of the time, but well enough that I find it acceptable.
comment by WhyAsk · 2015-12-20T17:15:56.508Z · LW(p) · GW(p)
Here's a letter to an editor.
"The Dec. 6 Wonkblog excerpt “Millions and millions of guns” [Outlook] included a graph that showed that U.S. residents own 357 million firearms, up from about 240 million (estimated from the graph) in 1995, for an increase of about 48 percent. The article categorically stated that “[m]ore guns means more gun deaths.” How many more gun deaths were there because of this drastic increase in guns? Using data from the FBI Uniform Crime Reports, total gun murders went from 13,673 in 1995 to 8,454 in 2013 — a decrease in gun deaths of about 38 percent resulting from all those millions more guns. I’m not going to argue causation vs. correlation vs. coincidence, but I can say that “more guns, more gun deaths” is wrong, as proved by the numbers."
Getting into lurking variables is one way of handling this, but I'm wondering why the author didn't just "go all the way" and declare that more guns = fewer deaths, rather than just more guns <> more deaths.
Maybe making false statements or lying while sounding credible is not so easy. Maybe the statement can't be too counterintuitive to too many people.
E.g., I complained to a chain store about customer service via their e-mail link, and the cust. service rep. said he couldn't help me because he works the night shift and the store in question is open in the daytime.
Also see https://www.psychologytoday.com/blog/extreme-fear/201005/top-ten-secrets-effective-liars
comment by G0W51 · 2015-12-20T02:43:33.390Z · LW(p) · GW(p)
How much should you use LW, and how? Should you consistently read the articles on Main? What about discussion? What about the comments? Or should a more case-by-case system be used?
Replies from: None
↑ comment by [deleted] · 2015-12-20T04:38:53.501Z · LW(p) · GW(p)
Should is one of those sticky words that needs context. What are your goals for using LW?
Replies from: G0W51
↑ comment by G0W51 · 2015-12-22T00:06:07.877Z · LW(p) · GW(p)
Improving my rationality. Are you looking for something more specific?
Replies from: None
↑ comment by [deleted] · 2015-12-22T04:07:29.056Z · LW(p) · GW(p)
Yes.
Epistemic rationality or instrumental rationality? If the former, what specific aspects of it are you looking to improve? If the latter, what specific goals are you looking to achieve?
Replies from: G0W51
↑ comment by G0W51 · 2015-12-23T21:48:32.097Z · LW(p) · GW(p)
I would like to improve my instrumental rationality and improve my epistemic rationality as a means to do so. Currently, my main goal is to obtain useful knowledge (mainly in college) in order to obtain resources (mainly money). I'm not entirely sure what I want to do after that, but whatever it is, resources will probably be useful for it.
comment by Bryan-san · 2015-12-19T16:26:39.172Z · LW(p) · GW(p)
What are the strongest arguments that you've seen against rationality?
Replies from: fubarobfusco, Tem42, None, None, None, Dagon
↑ comment by fubarobfusco · 2015-12-20T21:08:13.070Z · LW(p) · GW(p)
Well, it depends on what you mean by "rationality". Here's something I posted in 2014, slightly revised:
If not rationality, then what?
LW presents epistemic and instrumental rationality as practical advice for humans, based closely on the mathematical model of Bayesian probability. This advice can be summed up in two maxims:
- Obtain a better model of the world by updating on the evidence of things unpredicted by your current model.
- Succeed at your given goals by using your (constantly updating) model to predict which actions will maximize success.
Or, alternately: Having correct beliefs is useful for humans achieving goals in the world, because correct beliefs enable correct predictions, and correct predictions enable goal-accomplishing actions. And the way to have correct beliefs is to update your beliefs when their predictions fail.
We can call these the rules of Bayes' world, the world in which updating and prediction are effective at accomplishing human goals. But Bayes' world is not the only imaginable world. What if we deny each of these premises and see what we get? Other than Bayes' world, which other worlds might we be living in?
To be clear, I'm not talking about alternatives to Bayesian probability as a mathematical or engineering tool. I'm talking about imaginable worlds in which Bayesian probability is not a good model for human knowledge and action.
Suppose that making correct predictions does not enable goal-accomplishing actions. We might call this Cassandra's world, the world of tragedy — in which those people who know best what the future will bring, are most incapable of doing anything about it.
In the world of heroic myth, it is not oracles (good predictors) but rather heroes and villains (strong-willed people) who create change in the world. Heroes and villains are people who possess great virtue or vice — strong-willed tendencies to face difficult challenges, or to do what would repulse others. Oracles possess the truth to arbitrary precision, but they accomplish nothing by it. Heroes and villains come to their predicted triumphs or fates not by believing and making use of prediction, but by ignoring or defying it.
Suppose that the path to success is not to update your model of the world, so much as to update your model of your self and goals. The facts of the external world are relatively close to our priors; not much updating is needed there — but our goals are not known to us initially. In fact, we may be thoroughly deceived about what our goals are, or what satisfying them would look like.
We might consider this to be Buddha's world, the world of contemplation — in which understanding the nature of the self is substantially more important to success than understanding the external world. In this world, when we choose actions that are unsatisfactory, it isn't so much because we are acting on faulty beliefs about the external world, but because we are pursuing goals that are illusory or empty of satisfaction.
There are other models as well, that could be extrapolated from denying other premises (explicit or implicit) of Bayes' world. Each of these models should relate prediction, action, and goals in different ways: We might imagine Lovecraft's world (knowledge causes suffering), Qoheleth's world (maybe similar to Buddha's), Job's world, or Nietzsche's world.
Each of these models of the world — Bayes' world, Cassandra's world, Buddha's world, and the others — does predict different outcomes. If we start out thinking that we are in Bayes' world, what evidence might suggest that we are actually in one of the others?
Replies from: Bryan-san
↑ comment by Bryan-san · 2015-12-21T20:08:43.262Z · LW(p) · GW(p)
This is a perspective I hadn't seen mentioned before and helps me understand why a friend of mine gives low value to the goal-oriented rationality material I've mentioned to him.
Thank you very much for this post!
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2015-12-21T20:49:28.522Z · LW(p) · GW(p)
It's worth noting that, from what I can tell at least (having not actually taken their courses), quite a bit of CFAR "rationality" training seems to deal with issues arising not directly from Bayesian math, but from characteristics of human minds and society.
↑ comment by Tem42 · 2015-12-20T04:39:17.620Z · LW(p) · GW(p)
Rationality takes extra time and effort, and most people can get by without it. It is easier to go with the flow -- easier on your brain, easier on your social life, and easier on your pocketbook. And worse, even if you decide you like rationality, you can't just tune into the rationality hour on TV and do what they say -- you actually have to come up with your own rationality! It's way harder than politics, religion, or even exercise.
↑ comment by [deleted] · 2015-12-20T11:46:24.227Z · LW(p) · GW(p)
"It's cold-hearted."
This isn't actually a strong argument, but many people find it persuasive.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-12-21T12:06:38.666Z · LW(p) · GW(p)
It applies to certain kinds of rationality but I don't think it applies to rationality!CFAR or the rationality I see at LW events in Germany.
↑ comment by [deleted] · 2016-01-06T14:55:54.006Z · LW(p) · GW(p)
It is hard, sometimes, to follow epistemic rationality when it seems in conflict with instrumental rationality. Like, when a friend and colleague cries me a river about her ongoing problems, I try to comfort her but also to forget the details, lest I betray her confidence the next minute speaking to our other coworkers. Surely epistemic rationality requires committing information to memory as losslessly as possible? And yet I strive to remember the voice and not the words.
(A partial case of what people might mean by 'rationality is cold', I guess.)
Replies from: polymathwannabe
↑ comment by polymathwannabe · 2016-01-06T15:01:42.011Z · LW(p) · GW(p)
You need to forget a fact lest you accidentally mention it?
Replies from: None
comment by SodaPopinski · 2015-12-16T20:09:20.454Z · LW(p) · GW(p)
Does anyone know of a good program for eye training? I would like to try to become a little less near-sighted by straining to make out things at the edge of my range of good vision. I know near-sightedness means my eyeball is elongated, but I am hoping my brain can fix a bit of the distortion in software. Currently I am doing random printed-out eye charts, and I have gotten a bit better over time, but printing out the charts is tedious.
Replies from: ChristianKl, Lumifer
↑ comment by ChristianKl · 2015-12-16T21:48:19.252Z · LW(p) · GW(p)
An acquaintance runs http://eye-track.me/ for measuring vision
↑ comment by Lumifer · 2015-12-16T20:30:01.978Z · LW(p) · GW(p)
How nearsighted are you (in diopters)?
Replies from: SodaPopinski
↑ comment by SodaPopinski · 2015-12-16T20:49:40.709Z · LW(p) · GW(p)
About 20/50; I don't know if that can be unambiguously converted to diopters. I measure my performance by sitting at a constant 20 feet away, and when I am over 80% correct I shrink the font on the chart a little bit. I can currently read a slightly smaller font than what corresponds to 20/50 on an eye chart.
Replies from: Lumifer
↑ comment by Lumifer · 2015-12-16T21:07:52.108Z · LW(p) · GW(p)
So that's fairly minor myopia.
Eye training programs train eye muscles; it's not an issue of fine-tuning "brain software". You can train your eye muscles to compensate, somewhat, but the downside is that if you're, e.g., just tired or stressed, your vision degrades back to baseline.
Replies from: ChristianKl
↑ comment by ChristianKl · 2015-12-16T21:54:14.428Z · LW(p) · GW(p)
Eye training programs train eye muscles,
Not only muscles directly at the eye but also at the back of the head.