Comments
Cross-validation is actually hugely useful for predictive models. For a simple correlation like this, it's less of a big deal. But if you are fitting a locally weighted linear regression line, for instance, chopping the data up is absolutely standard operating procedure.
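A minimal sketch of what "chopping the data up" looks like: k-fold cross-validation used to pick the bandwidth of a kernel-weighted (Nadaraya-Watson) smoother. The sine data, the bandwidth candidates, and k=5 are all made-up choices for illustration, not anything from the discussion above.

```python
import math
import random

def nw_predict(x0, xs, ys, h):
    # Kernel-weighted average of ys, with Gaussian weights around x0.
    ws = [math.exp(-((x0 - x) / h) ** 2 / 2) for x in xs]
    total = sum(ws)
    if total == 0:
        return sum(ys) / len(ys)  # fall back to the global mean
    return sum(w * y for w, y in zip(ws, ys)) / total

def kfold_cv_error(xs, ys, h, k=5):
    # Chop the data into k interleaved folds; average held-out squared error.
    n = len(xs)
    err, count = 0.0, 0
    for i in range(k):
        test_idx = list(range(i, n, k))
        test_set = set(test_idx)
        train_xs = [xs[j] for j in range(n) if j not in test_set]
        train_ys = [ys[j] for j in range(n) if j not in test_set]
        for j in test_idx:
            err += (nw_predict(xs[j], train_xs, train_ys, h) - ys[j]) ** 2
            count += 1
    return err / count

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [math.sin(x) + random.gauss(0, 0.2) for x in xs]

candidates = [0.05, 0.2, 0.5, 2.0]
errors = {h: kfold_cv_error(xs, ys, h) for h in candidates}
best_h = min(errors, key=errors.get)
```

Without the held-out folds, the smallest bandwidth would always look best on the training data; cross-validation is what exposes the overfitting.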
Maybe I'm way off base here, but it seems like average utilitarianism itself leads to a disturbing possibility: namely, that one super-happy person is considered a superior outcome to 1,000,000,000,000 pretty darn happy people. Please explain how, if at all, I'm misinterpreting average utilitarianism.
Two notes: First, the term "genius" is difficult to define. Someone may be a "genius" at understanding the sociology of sub-Saharan African tribes, but this skill will obviously command a much lower market value compared to someone who is a "genius" as a chief executive officer of a large company. A more precise definition of genius will narrow the range of costs per year.
Second, and related to the first, MIRI is (to the best of my knowledge) currently focusing on mathematics and formal logic research rather than programming. This makes recruiting a team of "geniuses" much cheaper. While skilled mathematicians can command quite strong salaries, highly skilled programmers can demand significantly more. The most common competing job for MIRI's researchers would seem to be that of a mathematics professor (which has a median salary of ~$88,000). Based on this, MIRI could likely hire high-quality mathematicians while offering them relatively competitive salaries.
Give machine A one nickel and have it produce a random sequence of 499 characters. Have machine B write a random sequence of 500 characters. Code machine A to pay machine B one nickel for its "book" whenever it has a nickel. Code machine B to give a nickel to machine A for its book whenever it has a nickel. Wait perhaps a few days, and you will have two bestselling authors reminiscent of Zach Weiner's Macroeconomica http://www.smbc-comics.com/?id=2855
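The scenario above can be simulated directly. This is a toy sketch: the "books" are random strings as described, and the single nickel just circulates back and forth, inflating each machine's sales count.

```python
import random
import string

def random_book(n):
    # A "book": n random lowercase characters and spaces.
    return "".join(random.choice(string.ascii_lowercase + " ") for _ in range(n))

random.seed(0)
book_a = random_book(499)  # machine A's 499-character book
book_b = random_book(500)  # machine B's 500-character book

a = {"nickels": 1, "sales": 0}
b = {"nickels": 0, "sales": 0}

for _ in range(1000):
    if a["nickels"] >= 1:      # A buys B's book whenever it has a nickel
        a["nickels"] -= 1
        b["nickels"] += 1
        b["sales"] += 1
    if b["nickels"] >= 1:      # B buys A's book whenever it has a nickel
        b["nickels"] -= 1
        a["nickels"] += 1
        a["sales"] += 1

# One nickel, two "bestselling authors" with 1000 sales each.
```

Total money in the system never changes; only the sales counters grow, which is the joke.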
Sorry, a more applicable study is behind a paywall: http://www.jstor.org/discover/10.2307/351391?uid=3739640&uid=2&uid=4&uid=3739256&sid=21103313626383 Summary: data from six surveys suggest a negative correlation between having children and several measures of life satisfaction. Standard caveats that correlation doesn't imply causation, etc.
A study suggests that happiness is negatively affected by having children: http://www.npr.org/2013/02/19/172373125/does-having-children-make-you-happier Note: there seem to be some issues with the methodology used in the study, but it also seems to be fairly well respected in academia.
Nitpick: the link in the first sentence reads "Definability of Truth in Probabilistic Locic" rather than "Logic".
Could you elaborate? I'm relatively familiar with and practice mindfulness meditation, but I've never heard of loving-kindness meditation.
Correct, it is enjoyable, but I wish to make it more so. Hence my use of "more".
I find myself happier when I act more kindly to others. In addition, lowering suffering/increasing happiness are pretty close to terminal values for me.
Thanks! And out of curiosity, does the first book have much data backing it? The author's credentials seem respectable so the book would be useful even if it relied on mostly anecdotal evidence, but if it has research backing it up then I would classify it as something I need (rather than ought) to read.
Any good advice on how to become kinder? This can really be classified as two related goals: 1) How can I get more enjoyment out of alleviating others' suffering and giving others happiness? 2) How can I reliably do 1 without negative emotions getting in my way (e.g., staying calm and making small nudges to persuade people rather than getting angry and trying to change people's worldviews rapidly)?
It's a quirk of the community, not an actual mistake on your part. LessWrong defines probability one way; the statistics community defines it another. I would recommend lobbying the larger community toward a use of the words consistent with the statistical definitions, but shrug...
Then let me restate what I should have stated originally: Christians who evangelize for Christianity are effective at persuading others to join the cause. I am concerned with how bugging people about a cause (i.e., evangelizing for it) will affect the number of people in that cause. The numbers shown suggest that if we consider evangelizing Christians to be a group, then they are growing, which supports my hypothesis.
Oh, I'm well aware that this technique could be used to spread irrational and harmful memes. But if you're trying to persuade someone to rationality using techniques of argument which presume rationality, it's unlikely that you'll succeed. So you may have to get your rationalist hands dirty.
Your call on which is the better outcome: successfully convincing someone to be more rational (but having their agency violated through irrational persuasion), or leaving that person in the dark. It's a nontrivial moral dilemma, which should only be considered once rational persuasion has failed.
This would be the explanation: http://lesswrong.com/lw/oj/probability_is_in_the_mind/ It really should be discussed more explicitly elsewhere, though.
In light of the downvotes, I just wanted to explain that probability is frequently used to refer to a degree of belief by LessWrong folks. You're absolutely right that statistical literature will always use "probability" to denote the true frequency of an outcome in the world, but the community finds it a convenient shorthand to allow "probability" to mean a degree of belief.
The proxy I am specifically looking at for evangelical Christianity is people who claim to have spread the "good news" about Jesus to someone. In other words, asking people whether they themselves have evangelized (the data on this is the fairly clear 47% to 52% upward trend). To me, it makes a lot of sense to call someone an Evangelical Christian if they have in fact evangelized for Christianity. And if we disagree on that definition, then there is really nothing more I can say.
The margin of sampling error is ±3%, while the difference between the 1980 percentage and the 2005 percentage is 5%. I do think that a trend with a p-value less than .05 is statistically significant.
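For concreteness, here is the significance claim as a two-proportion z-test. The sample size is an assumption: a ±3% margin of error at 95% confidence corresponds to roughly 1,000 respondents per survey, which is typical for Gallup-style polls; the actual sample sizes may differ.

```python
import math

# Observed proportions from the two surveys
p1, p2 = 0.47, 0.52
n1 = n2 = 1000  # ASSUMED sample sizes (~1000 implies a +-3% margin of error)

# Pooled proportion and standard error for the difference
p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))

z = (p2 - p1) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
# z is about 2.24, p_value about 0.025, i.e. below the .05 threshold
```

Under these assumed sample sizes the 5-point change does clear the conventional .05 bar, though the conclusion is sensitive to the real n.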
False, according to both the source you cited and http://www.gallup.com/poll/16519/us-evangelicals-how-many-walk-walk.aspx
Apologies, I should have been clearer in using donations to the AMF as an analogy to persuading people to be more rational and not a direct way to persuade people to be more rational. I don't claim that these people are more rational simply because they donate to the AMF.
If we are really trying to persuade people, however, guilt-tripping should be considered as an option. Logical arguments will change the behavior of only a very small segment of society, while even self-professed rationalists can be persuaded by good emotional appeals.
While there are many people who are annoyed by Christian Evangelicals, I feel that it is difficult to argue against their effectiveness. They exist because they are willing to talk to people again and again about their beliefs until those people convert.
Do you have any reason to believe that Christian Evangelicals are ineffective at persuading people? Keep in mind that a 5% conversion rate is doing a pretty damn good job when it comes to changing people's minds.
The following advice is anecdotal and is a very clear example of "other-optimizing". So don't take it with a grain of salt; take it with at least a tablespoon.
I've found that engaging people about their rationality habits is frequently something that needs to be done in a manner which is significantly more confrontational than what is considered polite conversation. Being told that how you think is flawed at a fundamental level is very difficult to deal with, and people will be inclined to not deal with it. So you need to talk to people about the real world consequences of their biases and very specifically describe how acting in a less biased manner will improve their life and the lives of those around them.
Anecdotally I've found this to be true in convincing people to donate money to the AMF. My friends will be happy to agree that they should do so, but unless prodded repeatedly and pointedly they will not actually take the next step of donating. I accept that my friends are not a good sample to generalize from (my social circle tends to include those who are already slightly more rational than the average bear to begin with). So if you want to convince someone to be more rational, bug them about it. Once a week for two months. Specificity is key here, talk about real life examples where their biases are causing problems. The more concrete the better since it allows them to have a clear picture of what improvement will look like.
The issue of "watering down" one's GPA by taking more classes is already being significantly addressed by colleges and high schools.
Most top colleges examine unweighted GPAs rather than weighted ones. Unweighted GPAs cannot be watered down by non honors classes, and have better predictive validity for college grades than weighted GPAs. One might be inclined to think that this provides incentives for taking easy classes, but the top schools are simply not going to take you seriously if you adopt this strategy (speaking from personal experience at a top liberal arts college and having seen the data on the average number of AP classes taken).
On the high school end, many high schools (including my own former school) have switched away from a weighted-average system for class rank. Instead, they use a system where one's GPA for class-rank purposes = 36 × (unweighted GPA) + 0.5 × (number of honors classes taken) + 1 × (number of AP classes taken). The additive system prevents the possibility of having one's GPA watered down. Some high schools go further by adding additional points for taking extra classes beyond the number required for graduation, further encouraging the taking of additional classes regardless of their honors/AP status.
After doing a little research on the Pomodoro technique, I couldn't really find any studies on their effectiveness. The anecdotal evidence is enticing (and preliminary trials of my own have been positive), but has anyone seen good research done on it or similar productivity methods?
I agree with the arguments against diversification (mainly due to its effect on lowering the incentive for becoming more efficient), but here's a concrete instance of how diversification could make cheating nonviable.
Example: cheating to fake the signals costs $5,000 (in other words, $5,000 to make it look like you're the best charity). There are $10,000 of efficient altruism funds that will be directed to the most efficient charity. By faking the signals, you net $5,000.
Now, if diversification is used, let's say at most 1/4 of the efficient altruism funds will be directed to any given charity (say, evenly splitting the funds among the top 4 charities). Faking the signals now nets -$2,500. Thus, diversification would lower the incentive to cheat by reducing the expected payoff.
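The arithmetic in the two scenarios above, written out (the dollar figures are the ones from the example):

```python
def cheat_payoff(total_funds, cheat_cost, max_share=1.0):
    # Net payoff from faking "best charity" signals, when the fund
    # caps any single charity's share of the directed money.
    return total_funds * max_share - cheat_cost

winner_take_all = cheat_payoff(10_000, 5_000)             # nets 5000
with_diversification = cheat_payoff(10_000, 5_000, 0.25)  # nets -2500
```

Cheating stays unprofitable as long as the capped share of the funds is below the cost of faking the signals, i.e. max_share < cheat_cost / total_funds.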