Interesting. Something's a bit odd, though. If the events are rare, then it's hard to know what the correlations are with any precision. If the events are common, then, yes, we should be able to see the anti-correlation, but this would be a really bad sign -- there'd be no reason to think that the disastrous event where both co-occur isn't right around the corner.
ETA: I exaggerate a bit. There'd be no reason if the independence model was true. If, in reality, there was some circumstance specially protecting us somehow the situation wouldn't have to be dire.
That's a pretty cool histogram in figure 2.
Thanks! Lest it confuse anyone else, please note that that review is all about the effects of tDCS on the cerebellum, not a review of tDCS on the cerebrum or other brain structures. Cerebellar tDCS itself seems to have many effects, though, including on cerebellar motor cortical inhibition, gait adaptation, motor behaviour, and cognition (learning, language, memory, attention).
Here's a general review of the effect of tDCS on language
""" Despite their heterogeneities, the studies we reviewed collect- ively show that tDCS can improve language performance in healthy subjects and in patients with aphasia ( fi gure 4). Although relatively transient, the improvement can be remark- able: Monti and colleagues 52 found an improvement of approxi- mately 30% and Holland and Crinion 63 report a gain of approximately 25% in speech performance in aphasic patients. Intriguingly, no report described negative results in aphasic patients. """
EDIT:
Interestingly, there's limited evidence that it can be effective for patients suffering from autism too, e.g. a case study finding a 40% reduction in abnormal behavior for a severe case, and improved language learning for minimally verbal children with autism.
This seems really interesting. I'd like to learn more about it. So far I'm frustrated with the quality of information I've found. Here's a PMC search and a review behind a firewall.
Yootling is one good approach to the problem.
[LINK] Givewell discusses the progress of their thinking about the merits of funding efforts to ameliorate global existential risks.
create community norms whereby the amount of social praise you get is proportional to the strength of your case for the impact of your action.
Agreed. We need more thinking/work on this. "Thumbs up", for example, don't seem to cut it because some things are so easy to like, whether they have real impact or not, that likes are not at all proportional to merit.
Thanks!
Whoa. Fascinating! Thanks! I really like the idea of this approach. I'm, ironically, not sure I'm decisive enough to decide that decisiveness is a virtue, but this is worth thinking about. Where should I go to read more about the general idea that if I can decide that something is a virtue and practice acting in accord with that virtue that I can change myself?
Thinking about it just for a minute, I realize that I need a heuristic for when it's smart to be decisive and when it's smart to be more circumspect. I don't want to become a rash person. If I can convince myself that the heuristic is reliable enough, then hopefully I can convince myself to put it into practice like you say. I don't know if this means I'm falling into the rationalization trap that you mentioned or not, though. I don't think so; it would be a mistake to be decisive for decisiveness' sake.
I can spend some time thinking more about role-models in this regard and maybe ask them when they decide to decide versus decide to contemplate, themselves. In particular, I think my role-models would not spend time on a decision if they knew that making either decision, now, was preferable to not making a decision until later.
Heuristic 1a: If making either decision now is preferable to making the decision later, make the decision promptly (flip coins if necessary).
In the particular case that prompted my original post, my current heuristics said it was a situation worth thinking about -- the options had significant consequences both good and bad. On the other hand, agonizing over the decision wouldn't get me anywhere and I knew what the consequences would be in a general sense -- I just didn't want to accept that I was responsible for the problems that I could expect to follow either decision, I wanted something more perfect. That's another situation my role-models would not fall prey to. Somehow they have the stomach to accept this and get on with things when there's no alternative....
Goal: I will be a person with the self-respect to stomach responsibility for the bad consequences of good decisions.
Heuristic 1b: When you pretty-much know what the consequences will be of all the options and they're all unavoidably problematic to around the same degree (multiply the importance of the decision by the error in the degree to define "around"), force yourself to pick one right away so you can put the decision-making behind you.
Am I on the right track? I'm not totally sure how important it is to put the decision-making behind you, though.
fixed. Thanks.
I notice that I have a hard time getting myself to make decisions when there are tradeoffs to be made. I think this is because it's really emotionally painful for me to face actually choosing to accept one or another of the flaws. When I face making such a decision, often, the "next thing I know" I'm procrastinating or working on other things -- specifically, I'm avoiding thinking about making the decision. Sometimes I do this when, objectively, I'd probably be better off rolling a die and getting on with one of the choices, but I can't get myself to do that either. If it's relevant, I'm bad at planning generally. Any suggestions?
[Link] Why do people persist in believing things that just aren't true?
Just in case anyone wants pointers to existing mathematical work on "unpredictable" sequences: Algorithmically random sequences (wikipedia)
An example of using Bayes to "generate hypotheses" that's successful is the mining/oil industry that makes spatial models and computes posterior expected reward for different drilling plans. For general-science type hypotheses you'd ideally want to put a prior on a potentially very complicated space (e.g. the space of all programs that compute the set of interesting combinations of reagents, in your example) and that typically isn't attempted with modern algorithms. This isn't to say there isn't room to make improvements on the state of the art with more mundane approaches.
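The posterior-expected-reward calculation described above can be sketched in a few lines for a discrete set of hypotheses. All numbers below are made up for illustration (two geological states, one noisy survey reading, two plans); a real application would use spatial models with far richer state spaces:

```python
import numpy as np

# Hypothetical numbers for illustration: prior over two geological states,
# the likelihood of a positive survey reading under each state, and
# state-dependent payoffs for two drilling plans.
prior = np.array([0.7, 0.3])            # P(no oil), P(oil)
likelihood = np.array([0.2, 0.8])       # P(positive survey | state)

# Bayes' rule: posterior proportional to prior times likelihood.
posterior = prior * likelihood
posterior /= posterior.sum()

rewards = np.array([[-1.0, -1.0],       # plan 0, "don't drill": sunk survey cost
                    [-5.0, 20.0]])      # plan 1, "drill": big loss or big win

# Posterior expected reward of each plan; pick the maximizer.
expected = rewards @ posterior
best_plan = int(np.argmax(expected))
```

With these made-up numbers the positive survey shifts the posterior toward "oil" enough that drilling has the higher expected reward.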
the "you're a simulation" argument could explain anything and hence explains nothing. He managed to predict scoffing, but that wasn't a consequence of his hypothesis, that was just to be expected.
Links: Young blood reverses age-related impairments in cognitive function and synaptic plasticity in mice (press release)(paper)
I think the radial arm water maze experiment's results are particularly interesting; it measures learning and memory (see fig 2c, which is visible even with the paywall). There's a day one and day two of training, and the old mice (18 months) improve somewhat during the first day and then more or less start over on the second day in terms of the errors they are making. This is also true if the old mice are treated with 8 injections of old blood over the course of 3 weeks (the new curves lie pretty much on top of the old curves in supplemental figure 7d). Young mice (3 months) perform better than the old mice (supplemental figure 5d): they learn faster on the first day and retain it when the second day starts (supp 7d).
However, if you give 8 injections of 100 micro liters of blood from 3 month old mice to 18 month old mice, the treated mice perform dramatically better than the old-blood treated old mice (2c) and much more like young mice (this comparison is less certain; I'm comparing one line from 2c to one line from supp. 7d, but that's how it looks by eye).
One factor in the new blood that plays a role is GDF11. From another paper: "we show that GDF11 alone can improve the cerebral vasculature and enhance neurogenesis"
The New York Times gives an overview covering other known effects of young blood, such as rejuvenating the musculature, heart, and vasculature of old mice: Young Blood May Hold Key to Reversing Aging; see, e.g., Restoring Systemic GDF11 Levels Reverses Age-Related Dysfunction in Mouse Skeletal Muscle.
I think the use of exclamation points should be tastefully rare, or it does give the wrong impression.
Thanks Badger. This is great!
a market where your scoring was based on how much you updated the previous bet towards the truth.
This is interesting. Can someone point me to documentation of the scoring? Thanks. (unless it's a CFAR secret or something)
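I don't know what CFAR's exact scoring is, but one standard formalization of "score by how much you updated the previous bet toward the truth" is a logarithmic market scoring rule (Hanson-style): your payout is the change in log score at the realized outcome. A minimal sketch, assuming a binary event and probabilities as the bets:

```python
import math

def log_score_payout(p_prev, p_new, outcome):
    """Payout for moving the market estimate from p_prev to p_new on a
    binary event: the change in log score evaluated at the outcome.
    Positive if you moved the estimate toward what actually happened,
    negative if you moved it away."""
    p_prev_o = p_prev if outcome else 1.0 - p_prev
    p_new_o = p_new if outcome else 1.0 - p_new
    return math.log(p_new_o) - math.log(p_prev_o)
```

For example, moving the estimate from 0.5 to 0.9 pays off if the event happens and costs you if it doesn't, which is what makes the rule incentive-compatible: your expected payout is maximized by reporting your true probability.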
but a reaction to an environment in the broadest sense inherently unsuitable to humans.
So, can you say more about what aspect of your environment is bugging you? Captivity? Do you want to try living somewhere more "outdoors"?
An interesting take is to have a game where programming is an integral part of solving the puzzles.
yes, that's what I meant; thank you.
Sorry I guess it wasn't clear. I was contrasting two naive utility functions: a flat one which adds up the utilons of all people versus one that only counts the utilons of stock brokers. I'm not asserting that one or the other is "right". Both utilities would have some additional term giving utility for preserving resources, but I'm not being concrete about how that's factored in. [I'm also not addressing in any depth the complications that a full utilitarian calculation would need like estimated discounted future utilons, etc.] Did I clear it up or make it worse?
I don't know about documentation, but you can start looking here.
The "canceled out" part depends on whether your interested in the utility of stockholders and the reduced resource consumption of the manufacturing process or the utility of the general population which might have to consume less of the product than they'd otherwise be able (because of higher prices) or more generally have less capital left to buy other things they need/want. Monopolies with regulated price structures sometimes work, I guess, though it's complicated.
One possibility is computer games, e.g. I've certainly lost a good chunk of hours to the game Diablo. Modern things like Farmville seem especially pernicious. [This is not to be construed as all gaming is bad, etc.]
I suggest reading a translation.
Why are we thinking about this again?
It seems to me these are obvious targets for regulation. I'd guess the OP is worried that we've overlooked something. The game theory of it might make it difficult to implement in practice: e.g. if one country bans casinos that just makes casinos more profitable for the nearby ones. ... but that's what treaties are for.
Your question makes me think of what economists call negative externalities. Wikipedia has a list of them
I have at times observed different color temperatures in my left and right eyes, and observed that these can be changed by wearing red/blue glasses; by swapping which lens covered which eye, I could correct both back to a more balanced condition.
I use a subset of the extensions you mentioned. I also use this bookmarklet to hide nested comments in long threaded lesswrong pages like the open thread; then I open only the interesting threads selectively to limit distractions.
I think it was clear and good.
A new study in mice (popular article) establishes that elevated levels of fatty tissue cause cognitive deficits in mice with potential significance for humans suffering from obesity or diabetes. They hypothesize that the mechanism of action involves the inflammatory cytokine interleukin 1 beta. Interventions that restored cognitive function included exercise, liposuction, and intra-hippocampal delivery of IL1 receptor antagonist (IL1ra).
You may find better ideas under the phrase "stochastic optimization," but it's a pretty big field. My naive suggestion (not knowing the particulars of your problem) would be to do a stochastic version of Newton's algorithm: (1) sample some points (x, y) in the region around your current guess (with enough spread around it to get a slope and curvature estimate); (2) fit a locally weighted quadratic regression through the data; (3) subtract some constant times the identity matrix from the estimated Hessian to regularize it, choosing the constant (just) big enough to enforce that the move won't exceed some maximum step size; (4) set your current guess to the maximizer of the regularized quadratic; (5) repeat, re-using old data if convenient.
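The suggestion above can be sketched concretely. The objective `noisy_f`, the sample count, spread, and step cap are all made-up illustration choices, and for simplicity this version refits from fresh samples each iteration rather than re-using old data:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x):
    # Toy objective (assumed for illustration): a smooth bowl plus noise,
    # maximized at (1, -2).
    return -(x[0] - 1.0) ** 2 - (x[1] + 2.0) ** 2 + 0.1 * rng.normal()

def quad_features(d):
    # Features of a full quadratic in 2D, in displacement d from the guess.
    return np.array([1.0, d[0], d[1], d[0] ** 2, d[0] * d[1], d[1] ** 2])

def stochastic_newton_step(f, x, spread=0.5, n=40, max_step=0.5):
    # (1) sample points around the current guess
    D = rng.normal(scale=spread, size=(n, 2))
    y = np.array([f(x + d) for d in D])
    # (2) locally weighted least-squares fit of a quadratic
    w = np.exp(-0.5 * np.sum(D ** 2, axis=1) / spread ** 2)
    Phi = np.array([quad_features(d) for d in D])
    beta = np.linalg.lstsq(Phi * w[:, None], y * w, rcond=None)[0]
    g = beta[1:3]                              # gradient estimate at x
    H = np.array([[2 * beta[3], beta[4]],
                  [beta[4], 2 * beta[5]]])     # Hessian estimate at x
    # (3) subtract c*I, growing c until the regularized Hessian is
    # negative definite and the step is within the cap
    c = 0.0
    while True:
        Hr = H - (c + 1e-6) * np.eye(2)
        step = -np.linalg.solve(Hr, g)         # (4) maximizer of the quadratic
        if np.max(np.linalg.eigvalsh(Hr)) < 0 and np.linalg.norm(step) <= max_step:
            return x + step
        c = 2 * c + 1.0

x = np.zeros(2)
for _ in range(30):                            # (5) repeat
    x = stochastic_newton_step(noisy_f, x)
```

The regularization term does double duty here: it guarantees the fitted quadratic actually has a maximizer even when the noisy Hessian estimate isn't negative definite, and it acts as a trust region by shrinking the step.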
As a counterargument to my previous post, if anyone wants an exposition of the likelihood principle, here is a reasonably neutral presentation by Birnbaum 1962. For coherence and Bayesianism see Lindley 1990.
Edited to add: As Lindley points out (section 2.6), the consideration of the adequacy of a small model can be tested in a Bayesian way through consideration of a larger model, which includes the smaller. Fair enough. But is the process of starting with a small model, thinking, and then considering, possibly, a succession of larger models, some of which reject the smaller one and some of which do not, actually a process that is true to the likelihood principle? I don't think so.
To be a Bayesian in the purest sense is very demanding. One need not only articulate a basic model for the structure of the data and the distribution of the errors around that data (as in a regression model), but all your further uncertainty about each of those parts. If you have some sliver of doubt that maybe the errors have a slight serial correlation, that has to be expressed as a part of your prior before you look at any data. If you think that maybe the model for the structure might not be a line, but might be better expressed as an ordinary differential equation with a somewhat exotic expression for dy/dx then that had better be built in with appropriate prior mass too. And you'd better not do this just for the 3 or 4 leading possible modifications, but for every one that you assign prior mass to, and don't forget uncertainty about that uncertainty, up the hierarchy. Only then can the posterior computation, which is now rather computationally demanding, compute your true posterior.
Since this is so difficult, practitioners often fall short somewhere. Maybe they compute the posterior from the simple form of their prior, then build in one complication and compute a posterior for that and compare and, if these two look similar enough, conclude that building in more complications is unnecessary. Or maybe... gasp... they look at residuals. Such behavior is often going to be a violation of the (full) likelihood principle b/c the principle demands that the probability densities all be laid out explicitly and that we only obtain information from ratios of those.
So pragmatic Bayesians will still look at the residuals Box 1980.
It's easy to be sympathetic with these two scenarios -- I get frustrated with myself, often enough. Would it be helpful to discuss an example of what your thoughts are before a social interaction or in one of the feedback loops? I'm not really sure how I'd be able to help, though... Maybe your thoughts are thoughts like anyone would have: "shoot! I shouldn't have said it that way, now they'll think..." but with more extreme emotions. If so, my (naive) suggestion would be something like meditation toward the goal of being able to observe that you are having a certain thought/reaction but not identify with it.
naive question (if you don't mind): What sort of things trigger your self-deprecating feelings, or are they spontaneous? E.g. can you avoid them or change circumstances a bit to mitigate them?
Humans vote as if they are making declarations of support in a public arena.
Interesting. Can you point me to an example of something surprising that's predicted by this interpretation? I'm a little confused, though, because many people are very public about how they voted anyway (it seems unlikely they're lying), so it is effectively public, no?
government benefits to low-income workers are a subsidy to their employers.
This isn't true, literally. Why do you think it's true figuratively? If you have in mind the counterfactual situation in which benefits to low-income workers were removed, well, I think the economic consequences of that are complicated -- much more complicated than a simple subsidy.
If the government awarded benefits only to the unemployed, many low-income workers would find it preferable to quit their jobs if their employers didn't increase their wages. Since employers need employees, employers would find it preferable to increase their employees' wages enough that they don't need government benefits.
None of this makes it a subsidy.
[the might is right position I grew up under states:] the strong are morally justified - in a sense, morally compelled - to dominate and torment the weak, because they can. And the weak deserve every minute of it, because fuck them.
... it's hard for me to imagine what you've been through. I'm sorry.
When you say that you operate under this belief system, I don't quite believe you. You don't seem to identify with it. Maybe you've updated out of it in some regards but not others? Maybe you apply it to the way you would let others treat you / how you treat yourself, but not to the way you treat others?
Also, I'm going to guess that you're still punishing yourself for your mistakes of the day. I hope you can let them go. You're obviously working through something painful. Have you given yourself credit for taking the bold step of making this post to try to find a way out?
As for your original question, the only approach I know of for failure, generally, is to try again the next day, possibly trying something different/smaller, possibly with help. Failure to act according to your "system 2"-intention happens to everyone, so I'd say the most important things are (1) not being too hard on yourself, (2) setting things up for a new trial with a high success probability, and (3) recognizing small successes. E.g. set things up so you can avoid most of what you're averse to, without completely avoiding all of it, and/or find ways to be less averse to it.
I hope this post isn't too off-base. I wish you well.
Am I still supposed to just say "I took it" and get more Karma without commenting anything more of value? Well, I took it.
Yup. Your points on the earlier comments were just the "ordinary" kind.
Thanks for writing the followup.
Looks like the makings of a good main post, to me. (Haven't read it all yet)
Five experiments demonstrate that people pay forward behavior in the sorts of fleeting, anonymous situations that increasingly typify people’s day-to-day interactions. These data reveal that—in contrast to the focus of media, laypeople, and prior research—true generosity is paid forward less than both greed and equality. Equality leads to equality and greed leads to greed, but true generosity results only in a return to equality—an asymmetry driven by the greater power of negative affect.
About 30. Fun. Just finished. (16:21, 83 deaths) Edit: uhoh there's a 31. hmmm.
If you have a point to make, I think it can be made more effectively than "Read this article".
But Eugine made a point and his point was:
I don't think treating human behavior as a simple random variable is a good model.
He then backed up his point to give context to suggest what a better model might be, i.e. one that models a human as a temporal process with habits.
As an example of a flash game with similar story branches (albeit a pretty different plot), there's endeavor.
... researchers isolated about 100 neurons from three people posthumously. The scientists took a high-level view of the entire genome -- looking for large deletions and duplications of DNA called copy number variations or CNVs -- and found that as many as 41 percent of neurons had at least one unique, massive CNV that arose spontaneously, meaning it wasn't passed down from a parent. The CNVs are spread throughout the genome, the team found.
Edit: see the paper for more precise statements.