Open Thread, November 8 - 14, 2013

post by witzvo · 2013-11-08T20:13:00.805Z · LW · GW · Legacy · 141 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

141 comments

Comments sorted by top scores.

comment by niceguyanon · 2013-11-12T19:02:03.112Z · LW(p) · GW(p)

How can we spread rationality and good decision making to people who aren't inclined to it?

I recently chatted up a friendly doorman with whom I normally exchange brief pleasantries. He told me that he was from a particularly rough part of town and that he works three jobs to support his family. He also told me not to worry, because he has a new business that makes a lot of money, although he had to borrow gas money to get to work that day. He said that he was part of a "travel club". I immediately felt bad, because I had a gut feeling he was talking about some multi-level marketing scheme. I asked him if it was, and he confirmed it was, but disagreed that it was a "scheme". He told me that he is trying to recruit his family, and that the business model encourages recruiting among family and friends. After 45 minutes of him selling it to me, I left him with a warning to be cautious, because these things can be fly-by-night operations, most people need to lose for a very few to win, and he can win only if he is among the very best promoters/sellers. I said that on purpose to gauge whether he is a true believer or a wolf in sheep's clothing, but his response was a sort of genuine disbelief that his business was zero-sum.

I walked away feeling sad. This guy is really trying to better life for himself and his family, and what does life give him? A shitty MLM scheme that will likely harm him, and he doesn't know any better. I have a special hatred for MLM schemes, right up there with religion. The deviousness lies in how hard it is to see MLM schemes for what they are at first glance, since most of these companies obfuscate them with legitimate business practices.

I should talk to him again and maybe get him to change his mind. Back to my original question: the people I want to help can't afford, or don't want, to attend things like a CFAR workshop, so how do you help them? And introducing people to LW is tricky. LW isn't really accessible; it doesn't find you, you find it.

Replies from: hyporational, Dan_Weinand
comment by hyporational · 2013-11-13T03:36:07.069Z · LW(p) · GW(p)

The problem with these kinds of schemes is that they make people think they've found a clever way to make money, and they happily signal it. It's not just the money (which they're usually not getting) that they're getting from it. Convincing them they're stupid instead of clever will be extremely difficult. You can say they're not stupid but irrational, yet most people won't know the difference, so good luck explaining that to them.

The weirdest brand of rampant irrationality is working your ass off to buy a lot of expensive stuff you really don't need and then wondering why you're poor. I haven't had success in convincing these people they don't need their stuff to be happy.

Even some of my med school friends, whom you'd expect to be intelligent enough to notice the problem, buy diamond-coated gold watches and BMWs, live in expensive houses, and then complain about how much they have to work. Perhaps the complaining is a facade, though, and they actually know what they're doing. Some people need to signal that they're richer than they actually are.

Replies from: lmm
comment by lmm · 2013-11-13T12:38:01.923Z · LW(p) · GW(p)

Or to signal that they enjoy work less than they do.

Replies from: hyporational
comment by hyporational · 2013-11-13T13:56:57.379Z · LW(p) · GW(p)

It's weird how this didn't even cross my mind. I think being a workaholic in these circles is more often admired than not.

comment by Dan_Weinand · 2013-11-13T02:35:00.768Z · LW(p) · GW(p)

The following advice is anecdotal and is a very clear example of "other-optimizing". So don't take it with a grain of salt; take it with at least a tablespoon.

I've found that engaging people about their rationality habits is frequently something that needs to be done in a manner which is significantly more confrontational than what is considered polite conversation. Being told that how you think is flawed at a fundamental level is very difficult to deal with, and people will be inclined to not deal with it. So you need to talk to people about the real world consequences of their biases and very specifically describe how acting in a less biased manner will improve their life and the lives of those around them.

Anecdotally I've found this to be true in convincing people to donate money to the AMF. My friends will be happy to agree that they should do so, but unless prodded repeatedly and pointedly they will not actually take the next step of donating. I accept that my friends are not a good sample to generalize from (my social circle tends to include those who are already slightly more rational than the average bear to begin with). So if you want to convince someone to be more rational, bug them about it. Once a week for two months. Specificity is key here, talk about real life examples where their biases are causing problems. The more concrete the better since it allows them to have a clear picture of what improvement will look like.

Replies from: Lumifer, hyporational
comment by Lumifer · 2013-11-13T04:05:54.397Z · LW(p) · GW(p)

I've found that engaging people about their rationality habits is frequently something that needs to be done in a manner which is significantly more confrontational than what is considered polite conversation. Being told that how you think is flawed at a fundamental level is very difficult to deal with, and people will be inclined to not deal with it. ... So if you want to convince someone to be more rational, bug them about it.

Let me make just a small change...

I've found that engaging people about their belief in Jesus is frequently something that needs to be done in a manner which is significantly more confrontational than what is considered polite conversation. Being told that how you live is flawed at a fundamental level is very difficult to deal with, and people will be inclined to not deal with it. ... So if you want to convince someone to love Jesus, bug them about it.

Do you have any reason to believe that people will react to the first better than to the second?

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-13T21:08:10.737Z · LW(p) · GW(p)

While there are many people who are annoyed by Christian Evangelicals, I feel that it is difficult to argue against their effectiveness. They exist because they are willing to talk to people again and again about their beliefs until those people convert.

Do you have any reason to believe that Christian Evangelicals are ineffective at persuading people? Keep in mind that a 5% conversion rate is doing a pretty damn good job when it comes to changing people's minds.

Replies from: Lumifer, DanielLC
comment by Lumifer · 2013-11-13T21:16:48.865Z · LW(p) · GW(p)

Do you have any reason to believe that Christian Evangelicals are ineffective at persuading people?

Yes. Their mind share in the US is not increasing.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-13T21:34:24.150Z · LW(p) · GW(p)

False, according to both the source you cited and http://www.gallup.com/poll/16519/us-evangelicals-how-many-walk-walk.aspx

Replies from: Lumifer
comment by Lumifer · 2013-11-13T21:44:20.588Z · LW(p) · GW(p)

False, really? So looking at the data in these two links you think you see a statistically significant trend?

Don't forget that your (second) link is concerned with proxies for being an Evangelical...

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-13T21:54:04.037Z · LW(p) · GW(p)

The margin of sampling error is ±3%, while the difference between the 1980 percentage and the 2005 percentage is 5%. I do think that a trend with a p-value less than .05 is statistically significant.
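
A quick way to check that arithmetic is a two-proportion z-test. The sketch below assumes each Gallup poll sampled about 1,000 adults (a hypothetical figure consistent with a ±3% margin of error; the thread never states the sample sizes) and applies it to the 47%-to-52% change discussed downthread:

```python
# Hedged sketch: two-proportion z-test, assuming n ~ 1,000 per poll
# (an assumption; the sample sizes are not stated in the thread).
from math import erf, sqrt

def two_proportion_z_test(p1, n1, p2, n2):
    """Z-test for the difference between two independent sample proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p2 - p1) / se
    p_two_tailed = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_tailed

z, p = two_proportion_z_test(0.47, 1000, 0.52, 1000)
print(f"z = {z:.2f}, two-tailed p = {p:.3f}")  # z = 2.24, p = 0.025
```

Under that sample-size assumption the change does land just past the 0.05 threshold, though only barely.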

Replies from: Lumifer
comment by Lumifer · 2013-11-13T22:09:17.549Z · LW(p) · GW(p)

I am not sure which data you are looking at.

My link shows the percentage of people who self-identify as Evangelicals. The data starts in 1991 and ends in 2005. The first values (1991-1993) are 41%, 42%, 46%, 44%, 43%, and the last values (2004-2005) are 42%, 39%, 42%, 47%, 40%.

I see no trend.

Your link shows the percentage of people who answer three proxy questions. The data starts in 1976 and ends in 2005. Over that time period one question goes up (47% to 52%), one goes down (38% to 32%) and the third goes up as well (35% to 48%). Do note that the survey says "When looking at the percentage of Americans who say yes to all three of these questions, slightly more than one in five (22%) American adults could be considered evangelical" and that's about *half* of the number of people who self-identify as such.

Given all this, I see no evidence that the mind share of the Evangelicals in the US is increasing.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-14T02:59:56.191Z · LW(p) · GW(p)

The proxy I am specifically looking at for evangelical Christianity is people who claim to have spread the "good news" about Jesus to someone. In other words, asking people whether they themselves have evangelized (the data on this is the fairly clear 47% to 52% upward trend). To me, it makes a lot of sense to call someone an Evangelical Christian if they have in fact evangelized for Christianity. And if we disagree on that definition, then there is really nothing more I can say.

Replies from: Lumifer
comment by Lumifer · 2013-11-14T04:55:16.783Z · LW(p) · GW(p)

To me, it makes a lot of sense to call someone an Evangelical Christian if they have in fact evangelized for Christianity.

The Pope would be surprised to hear that, I think.

All Christians of all denominations are supposed to spread the Good Word. Christianity is an actively proselytizing religion and has always been one. The Roman Catholic Church, in particular, has been quite active on that front. As have been Mormons, Adventists, Jehovah's Witnesses, etc. etc.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-14T08:12:37.964Z · LW(p) · GW(p)

Then let me respecify what I should have stated originally: Christians who evangelize for Christianity are effective at persuading others to join the cause. I am concerned with how bugging people about a cause (a.k.a. evangelizing for it) will affect the number of people in that cause. The numbers shown suggest that if we consider evangelizing Christians to be a group, then they are growing, which supports my hypothesis.

comment by DanielLC · 2013-11-14T06:21:44.499Z · LW(p) · GW(p)

If it works regardless of what it is you're telling people to do, that makes it dark arts.

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-14T08:08:50.365Z · LW(p) · GW(p)

Oh, I'm well aware that this technique could be used to spread irrational and harmful memes. But if you're trying to persuade someone to rationality using techniques of argument which presume rationality, it's unlikely that you'll succeed. So you may have to get your rationalist hands dirty.

Your call on what's the better outcome: successfully convincing someone to be more rational (but having their agency violated through irrational persuasion) or leaving that person in the dark. It's a nontrivial moral dilemma which should only be considered once rational persuasion has failed.

comment by hyporational · 2013-11-13T03:15:17.859Z · LW(p) · GW(p)

It's not clear to me that donating to AMF is a reliable sign of their increased rationality. How do you know you're not simply guilt tripping them?

Replies from: Dan_Weinand
comment by Dan_Weinand · 2013-11-13T21:14:35.074Z · LW(p) · GW(p)

Apologies, I should have been clearer in using donations to the AMF as an analogy to persuading people to be more rational and not a direct way to persuade people to be more rational. I don't claim that these people are more rational simply because they donate to the AMF.

If we are really trying to persuade people, however, guilt tripping should be considered as an option. Logical arguments will only change the behavior of a very small segment of society while even self-professed rationalists can be persuaded with good emotional appeals.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-11-16T01:32:28.702Z · LW(p) · GW(p)

Apologies, I should have been clearer in using donations to the AMF as an analogy to persuading people to be more rational and not a direct way to persuade people to be more rational.

No, you were using it as anecdotal evidence that your method works.

I don't claim that these people are more rational simply because they donate to the AMF.

Well, your argument does rely on that premise.

comment by ChrisHallquist · 2013-11-11T00:06:42.439Z · LW(p) · GW(p)

Would there be interest in me writing a post, or a series of posts, summarizing Richard Feldman's Epistemology textbook? Feldman's textbook is widely used in philosophy classes, and contains some surprisingly reasonable views (given what you may have heard about mainstream philosophy).

I'm partly considering it because it might be a useful way to counteract some common myths about what all philosophers supposedly know about evidence, the problem of induction, and so on. But I seem to have given away my copy, and a replacement would be $40 for a volume that's under 200 pages. So I want to gauge interest first.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-11-11T08:24:30.507Z · LW(p) · GW(p)

I would read it. I'm interested in there being more careful checking of LW-ideas against relevant mainstreams.

Replies from: selylindi
comment by selylindi · 2013-11-11T16:12:43.129Z · LW(p) · GW(p)

Another valuable service, if you (ChrisHallquist) decide to write the proposed article, is to provide a glossary translating between LW idiom and conventional terminology.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2013-11-14T00:59:45.262Z · LW(p) · GW(p)

Honestly, that might be difficult; the mapping would be far from perfect.

That said, I might be able to do something. Any terminology in particular you care about? Would it be better to focus on LW terms --> conventional terminology, or vice versa, or both?

comment by maia · 2013-11-09T18:29:30.786Z · LW(p) · GW(p)

Question about EA and CFAR. I think I've heard some people express sentiments that CFAR might be a good place for EAs to donate, due to the whole "raising the sanity waterline" thing.

On its face, this seems silly to me. From the outside view, CFAR just looks like a small self-help organization, though probably better than most such organizations, and it seems unlikely that it'll affect any significant portion of the population.

I think CFAR is great; I went to minicamp, and I think it probably improved my life, although I suspect I'm not as enthusiastic about it as most people who went. But if I were to give CFAR any money, it would be because it helps me and people I know, not because I think it's actually likely to have a large impact on the world.

Are there people around here who believe CFAR is actually likely to have a large impact on the world? Could you explain your reasoning why?

Replies from: Benito, ChristianKl, drethelin, ChrisHallquist, Douglas_Knight
comment by Ben Pace (Benito) · 2013-11-12T06:57:13.139Z · LW(p) · GW(p)

CFAR is working to discover systematic training methods for increasing rationality in humans.

If they discover said methods, and make them publicly available, that could massively increase the sanity waterline on a global scale.

This will require much work, but I think that it's really important work.

comment by ChristianKl · 2013-11-09T19:35:11.539Z · LW(p) · GW(p)

On its face, this seems silly to me. From the outside view, CFAR just looks like a small self-help organization, though probably better than most such organizations, and it seems unlikely that it'll affect any significant portion of the population.

The difference between CFAR and most self-help organisations is that CFAR is committed to publishing research about its interventions.

Published research into effective change work is important.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2013-11-09T20:15:41.053Z · LW(p) · GW(p)

CFAR is committed to publishing research about its interventions.

Source?

Replies from: somervta, ChristianKl
comment by ChristianKl · 2013-11-09T20:40:23.304Z · LW(p) · GW(p)

Eliezer said so on Facebook.

comment by drethelin · 2013-11-11T19:42:18.379Z · LW(p) · GW(p)

I think the self-help aspects are intended to be a first step, preliminary to ideally getting CFAR concepts into schools and colleges. CFAR is still very new and small and hasn't done a lot in that direction apart from reaching out to a few smart kids, but I believe that is the goal.

comment by ChrisHallquist · 2013-11-10T22:50:04.422Z · LW(p) · GW(p)

Luke has suggested that part of what CFAR does is the movement-building work that MIRI used to do. I'm not quite sure how to interpret this suggestion, but maybe the idea is that CFAR is set up in such a way that spreading "worrying about x-risk" memes ends up being an important side-effect of what they do. This is something I will probably start a thread on next time I have $ I want to donate to charity.

comment by Douglas_Knight · 2013-11-12T15:20:04.807Z · LW(p) · GW(p)

What does the size of the organization matter?

Roughly speaking, the value of sending a person to CFAR is the same regardless of whether a hundred people go or a million go. If you are paying for a scholarship today, the benefit is largely about the effect on that person, regardless of future students. What is the alternative charity? If you spend to save a life, that's just one person, too.

Here are two reasons why scale could matter. One is room for funding. If you think CFAR will never get big, then it will never consume that much money. So it wouldn't have room for a lot of funding. But the important question is whether it has room for your funding. Eventual size doesn't tell us much about that.

Another reason is gains from scale. The value of sending the millionth person may be the same as the value of the hundredth, but the cost may be much smaller. Curriculum development is amortized across students. If the next bit of funding is going to pay for curriculum development, two people may agree about the value of the curriculum for the average student but disagree about the total value, because they disagree about how many students it will reach.

comment by NancyLebovitz · 2013-11-11T15:08:22.725Z · LW(p) · GW(p)

Longitudinal study of men and happiness

“At a time when many people around the world are living into their tenth decade, the longest longitudinal study of human development ever undertaken offers some welcome news for the new old age: our lives continue to evolve in our later years, and often become more fulfilling than before. Begun in 1938, the Grant Study of Adult Development charted the physical and emotional health of over 200 men, starting with their undergraduate days. The now-classic ‘Adaptation to Life’ reported on the men’s lives up to age 55 and helped us understand adult maturation. Now George Vaillant follows the men into their nineties, documenting for the first time what it is like to flourish far beyond conventional retirement. Reporting on all aspects of male life, including relationships, politics and religion, coping strategies, and alcohol use (its abuse being by far the greatest disruptor of health and happiness for the study’s subjects), ‘Triumphs of Experience’ shares a number of surprising findings. For example, the people who do well in old age did not necessarily do so well in midlife, and vice versa. While the study confirms that recovery from a lousy childhood is possible, memories of a happy childhood are a lifelong source of strength. Marriages bring much more contentment after age 70, and physical aging after 80 is determined less by heredity than by habits formed prior to age 50. The credit for growing old with grace and vitality, it seems, goes more to ourselves than to our stellar genetic makeup.”

Replies from: komponisto
comment by komponisto · 2013-11-13T16:12:07.756Z · LW(p) · GW(p)

Begun in 1938, ... starting with their undergraduate days.

Sample bias warning: people who went to college in the 1930s constitute a highly atypical subset of humanity.

Replies from: itaibn0
comment by itaibn0 · 2013-11-13T23:44:56.533Z · LW(p) · GW(p)

I don't see why this would be more biased than people who went to college in the 1990s (other than the fact that the latter make up a larger proportion of the current population).

Edit: I misunderstood your comment. I thought you made a point about the 1930s in general, rather than going to college in the 1930s. I now agree.

Replies from: gwern
comment by gwern · 2013-11-14T00:31:32.084Z · LW(p) · GW(p)

(other than the fact that the latter make up a larger proportion of the current population).

That does change things... Post-1930 saw an incredible expansion of college going, democratizing to a large fraction of the population. The enrolled population is going to change since it was very far from a random sample in the first place.

comment by fubarobfusco · 2013-11-11T23:55:45.891Z · LW(p) · GW(p)

Last night I found myself thinking, "Well, suppose there's no Singularity coming any time soon. The FAI project will still have gotten a bunch of nerds working together on a project aimed at the benefit of all humanity — including formalizing a lot of ethics — who might otherwise have been working on weapons, wireheading, or something else awful. That's gotta be a good thing, right?"

Then I realized this sounds like rationalization.

Which got me to thinking about what my concerns are about this stuff.

My biggest AI risk worries right now are more immediate than paperclip optimizers. They're wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn't even to make its owners happy — just to make them rich — and it certainly doesn't care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.

Even assuming that such systems believe that crashing the economy would be bad for their owners, I expect that for the vast majority of living and potential humans, world dominance by such systems would constitute a Bad Ending.

It does not seem to me that it would require self-modifying emergent AI to bring about such a Bad Ending; and no exotic technologies such as computronium — just the continuation of current trends.

Replies from: Lumifer, None, RomeoStevens, TheOtherDave
comment by Lumifer · 2013-11-12T04:43:29.301Z · LW(p) · GW(p)

probably extrapolations of current HFT systems

Current HFT systems have little to do with AI. They are basically statistical models of a very narrow slice of reality (specifically the dynamics of the market microstructure) that can forecast these dynamics to some extent.

comment by [deleted] · 2013-11-14T03:12:31.115Z · LW(p) · GW(p)

My biggest AI risk worries right now are more immediate than paperclip optimizers. They're wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn't even to make its owners happy — just to make them rich — and it certainly doesn't care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.

I contend that those that exist are already a problem.

Replies from: lmm
comment by lmm · 2013-11-14T14:27:13.431Z · LW(p) · GW(p)

How? Because they took some money off other speculators? Because some of them went bankrupt?

Replies from: Moss_Piglet
comment by Moss_Piglet · 2013-11-14T16:05:24.492Z · LW(p) · GW(p)

Most likely because there have been some alarming failures of automated traders, such as the 2010 "Flash Crash" or the April Flash Crash caused by a Twitter hoax. From a layman's perspective, it seems like all the regular problems of speculation with the added benefit of trades taking place faster than any human regulator could react. So far there hasn't been any serious damage, but it's not clear to me whether that's a point in the traders' favor or just blind luck.

Of course, this isn't a Friendliness issue so much as a competence one, and I'm fairly sure there isn't much of an existential risk involved in these programs undergoing an intelligence explosion. So it might not be what the other posters here were thinking of.

Replies from: lmm
comment by lmm · 2013-11-14T18:27:42.802Z · LW(p) · GW(p)

Speculators are good for a market - they smooth out price fluctuations and give fundamentals traders better prices. And when they screw up the effect is usually to give money to other people, as with the flash crash. So I don't see the problem.

comment by RomeoStevens · 2013-11-13T08:05:10.648Z · LW(p) · GW(p)

You'll have fun reading Accelerando. The solar system gentrifies as, essentially, HFTs on steroids drive up rents on the prime real estate closest to the sun and thus most energy-dense.

comment by TheOtherDave · 2013-11-12T03:55:07.007Z · LW(p) · GW(p)

Accepting for the sake of comity that the endpoint of those trends is indeed an Ending, are there historical events that you would similarly class as an Ending, or would this Ending be in a class by itself?

Replies from: lmm, fubarobfusco
comment by lmm · 2013-11-13T12:43:27.180Z · LW(p) · GW(p)

One could argue that China's inward-turning, burn-the-boats collapse around 1500 was a result of similar wealth concentration? Though I don't know the history in any detail.

comment by fubarobfusco · 2013-11-12T05:10:49.132Z · LW(p) · GW(p)

I'd compare it to some of the hypothetical sociopolitical risk scenarios in Bostrom's "Existential Risks". Bostrom specifically mentions a "misguided world government" (driven by "a fundamentalist religious or ecological movement") and a "repressive totalitarian global regime" (driven by "mistaken religious or ethical convictions"), but doesn't mention scenarios driven by business or financial forces.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-12T05:34:12.974Z · LW(p) · GW(p)

I'm sorry... this appears to be my evening for just not being able to communicate questions clearly. What I meant by "historical events" is events that actually have occurred in our real history, as distinct from counterfactuals.

Replies from: fubarobfusco
comment by fubarobfusco · 2013-11-12T05:58:54.845Z · LW(p) · GW(p)

Oh. Well, no.

comment by Lumifer · 2013-11-11T18:47:00.732Z · LW(p) · GW(p)

An interesting paper: http://www.econ.ucsb.edu/papers/wp01-12.pdf

tl;dr -- Abstract (emphasis mine)

"We document a lower bound for the control premium: agents’ willingness to pay to control their own payoff. Participants choose between an asset that will pay only if they later answer a particular quiz question correctly and one that pays only if their partner answers a different question correctly. However, they first estimate the likelihood that each asset will pay off. Participants are 20% more likely to choose to control their payoff than a group of payoff-maximizers with accurate beliefs. While some of this deviation is explained by overconfidence, 34% of it can only be explained by the control premium. The average participant expresses a control premium equivalent to 8% to 15% of the expected asset-earnings. Our resu lts show that even agents with accurate beliefs may incur costs to avoid delegating and suggest that to correctly infer beliefs from choices, one should account for the control premium."

comment by Kaj_Sotala · 2013-11-09T09:44:12.856Z · LW(p) · GW(p)

How to make it easier to receive constructive criticism?

Typically finding out about the flaws in something that we did feels bad because we realize that our work was worse than we thought, so receiving the criticism feels like ending up in a worse state than we were in before. One way to avoid this feeling would be to reflect on the fact that the work was already flawed before we found out about it, so the criticism was a net improvement, allowing us to fix the flaws and create a better work.

But thinking about this once we've already received the criticism rarely helps that much, at least in my experience. It's better to consciously remind yourself that your work is always going to have room for improvement, and that it is certain to have plenty of flaws you're ignorant of, before receiving the criticism. That way, your starting mental state will be "damn, this has all of these flaws that I'm ignorant about", and ending up in the post-criticism state, where some of the flaws have been pointed out, will feel like a net improvement.

Another approach would be to take the criticism as evidence of the fact that you're working in a field where success is actually worth being proud about. Consider: if anyone could produce a perfect work in your field, would it be noteworthy that you had achieved the same thing that anyone else could also achieve? Not really. And if you could easily produce a work that was perfect and had no particular flaws worth criticizing, that would also be evidence of your field not being particularly deep, and of your success not being very impressive. So if you get lots of constructive criticism, that's evidence that your field is at least somewhat deep, and that success in it is non-trivial. Which means that you should be happy, since you have plenty of room to grow and develop your talents - and you've just been given some of the tools you need in order to do so.

Replies from: hyporational, Ben_LandauTaylor, ChristianKl
comment by hyporational · 2013-11-09T12:28:50.305Z · LW(p) · GW(p)

I think most of the difficulty with receiving criticism comes from not knowing with certainty that the intention behind it is constructive. If I'm sure I actually made a serious and relevant mistake, it's much easier to receive criticism.

comment by Ben_LandauTaylor · 2013-11-09T20:11:25.845Z · LW(p) · GW(p)

Would it be fair to rephrase your question as "How can we make receiving constructive criticism feel good?"

If so, then I endorse the first technique you mentioned. (My mantra for this is "bad news is good news," which reminds me that now I can do something about the problem.) I intend to try the second technique.

I have a third tactic, which is to use my brain's virtue ethics module. I've convinced myself that good people appreciate receiving constructive criticism, so when it happens, I have an opportunity to demonstrate what a good and serious person I am. (This probably wouldn't work if I didn't surround myself with people who also think this is virtuous and who do, in fact, award me social points for being open to critique.)

Admonymous has some good advice on giving and receiving criticism. Also, use Admonymous. Mine is here.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-11-10T10:26:27.006Z · LW(p) · GW(p)

Would it be fair to rephrase your question as "How can we make receiving constructive criticism feel good?"

Yes.

Replies from: hyporational
comment by hyporational · 2013-11-11T15:31:46.612Z · LW(p) · GW(p)

I have a strong intuition that making it feel good, or even just less bad, might take away some of its usefulness and make it less memorable. Actually, if it felt good instead of just less bad, wouldn't that incentivize you to make more mistakes?

There are individual differences in sensitivity to criticism, so your advice should be mainly aimed at people who are oversensitive in this regard.

Replies from: Kaj_Sotala, TheOtherDave
comment by Kaj_Sotala · 2013-11-11T16:32:24.498Z · LW(p) · GW(p)

If I feel bad about a piece of criticism, I automatically become defensive and incapable of learning from it (until I can distance myself from the bad feeling and thus become less defensive).

I doubt making mistakes on purpose would realistically be a problem, at least for me. Even if it did feel good, having done a great work and knowing that I'd done my best would still be even better.

Replies from: hyporational
comment by hyporational · 2013-11-11T17:14:56.189Z · LW(p) · GW(p)

If I feel bad about a piece of criticism, I automatically become defensive and incapable of learning from it (until I can distance myself from the bad feeling and thus become less defensive).

I have this problem too, but the timespan is pretty short. I think receiving criticism in person has an even bigger problem: the critic senses I get hurt and tones it down too much. When directly asking for criticism I'm tempted to declare "I will look butthurt at first but keep going and later I'll be thankful for learning so much more." The best teachers I've had gave criticism regardless of my feelings.

I doubt making mistakes on purpose would realistically be a problem

There's an important difference between making intentional mistakes, and becoming careless. By incentivization of mistakes I meant the latter.

Replies from: Kaj_Sotala, TheOtherDave
comment by Kaj_Sotala · 2013-11-12T04:12:09.600Z · LW(p) · GW(p)

There's an important difference between making intentional mistakes, and becoming careless. By incentivization of mistakes I meant the latter.

Ah, that does sound more plausible. If I'm in an environment where I can trust others to catch my mistakes, and I don't feel bad about those mistakes being pointed out, then I could definitely see myself getting more sloppy and relying on others to catch the mistakes instead of looking for them myself. In fact, I'm pretty sure that I have done that on a few occasions...

On the other hand, this might also make for a useful cure for perfectionism. It's not obvious that trying to catch every mistake yourself would be the optimal division of labor, assuming that you really are in an environment where you can trust others to correct some of the mistakes. Of course, it could be a problem if you develop lazy habits and carry them over to an environment without that external assistance.

Replies from: hyporational
comment by hyporational · 2013-11-12T05:40:23.736Z · LW(p) · GW(p)

It's not obvious that trying to catch every mistake yourself would be the optimal division of labor, assuming that you really are in an environment where you can trust others to correct some of the mistakes.

I agree that we could probably rely more on others to catch our mistakes in certain contexts where equal expertise can be assumed. The problem is, if you're writing an article or a book for example, you're usually the expert compared to your readership, so you can't really expect others to reliably correct your mistakes, and some of your mistakes get cluelessly adopted.

comment by TheOtherDave · 2013-11-11T19:15:10.389Z · LW(p) · GW(p)

When directly asking for criticism I'm tempted to declare "I will look butthurt at first but keep going and later I'll be thankful for learning so much more."

My usual version of this is "I don't like receiving criticism, and I don't promise to take it well, though I promise to make my best efforts to do so and I usually succeed. That said, still less do I like having earned criticism withheld from me, so my preference is to receive criticism where I've earned it. If you remind me of this, I will do my best to be grateful."

comment by TheOtherDave · 2013-11-11T16:11:35.771Z · LW(p) · GW(p)

Actually, if it felt good instead of just less bad, wouldn't that incentivize you to make more mistakes?

Well, one way to subvert this would be to also arrange to get praise for my successes, and make the praise-for-success noticeably more rewarding than the criticism-for-failure. But if for some reason that's not possible, then sure.

your advice should be mainly aimed at people who are oversensitive in this regard.

Are you deliberately implying a normative statement about how sensitive a person ought to be to criticism here, or is it accidental?

Replies from: hyporational
comment by hyporational · 2013-11-11T17:03:08.976Z · LW(p) · GW(p)

Well, one way to subvert this would be to also arrange to get praise for my successes, and make the praise-for-success noticeably more rewarding than the criticism-for-failure.

True. Note that failing is massively easier than succeeding. You don't really have to plan for it. Perhaps the problem doesn't arise if you feel worse for making the mistake than you feel good about receiving criticism for it. However, I strongly suspect we mostly feel bad about our mistakes precisely because of the social context. I'm pretty sure I wouldn't want to feel good about my mistakes.

Are you deliberately implying a normative statement about how sensitive a person ought to be to criticism here, or is it accidental?

The normativity of such a statement depends on the values of the person in question. If those values are a known factor, I do believe there is an optimal range of sensitivity one should try to gauge.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-11T19:08:45.250Z · LW(p) · GW(p)

The normativity of such a statement depends on the values of the person in question. If those values are a known factor, I do believe there is an optimal range of sensitivity one should try to gauge.

Ah. So "people who are oversensitive," here, means people who are more sensitive to criticism than is optimal according to their own values? Fair enough... thanks for clarifying that.

Replies from: hyporational
comment by hyporational · 2013-11-11T19:27:48.149Z · LW(p) · GW(p)

Exactly. Admittedly there are a lot of people I would like to have different sensitivities to criticism than they have or even want to have, like psychopaths for example. Even that of course doesn't imply any universal normativity.

comment by ChristianKl · 2013-11-09T21:28:13.210Z · LW(p) · GW(p)

I don't think that the fact that you receive criticism means anything. A smart person can find criticism for anything.

The goal isn't to create work that isn't criticised, but work that achieves a purpose. Maybe work that sells. Maybe work that influences people. Not work that isn't criticised.

If you get criticism, ask yourself whether that criticism is relevant to the goals that you want to achieve.

comment by gwern · 2013-11-08T20:55:20.574Z · LW(p) · GW(p)

Some IRC discussion reminded me that LWers might enjoy a SF short story I wrote some time ago: "Men of Iron".

comment by Kawoomba · 2013-11-13T11:45:47.880Z · LW(p) · GW(p)

Orthodox statistics beware: Bayesian radicals spotted:

A group of international Bayesians was arrested today in the Rotterdam harbor. According to Dutch customs, they were attempting to smuggle over 1.5 million priors into the country, hidden between electronic equipment. The arrest represents the largest capture of priors in history.

“This is our biggest catch yet. Uniform priors, Gaussian priors, Dirichlet priors, even informative priors, it’s all here,” says customs officer Benjamin Roosken, responsible for the arrest. (…)

Sources suggest that the shipment of priors was going to be introduced into the Dutch scientific community by “white-washing” them. “They are getting very good at it. They found ghost-journals with fake articles, refer to the papers where the priors are allegedly based on empirical data, and before you know it, they’re out in the open. Of course, when you look up the reference, everything is long gone,” says Roosken.

comment by moridinamael · 2013-11-13T01:46:51.973Z · LW(p) · GW(p)

This seems like a big deal:

http://www.pnas.org/content/early/2013/10/28/1313476110.full.pdf

Basically, dude illustrates an equivalence between p-values and Bayes factors and concludes that 17-25% of studies with a p-value acceptance threshold of 0.05 will be wrong. This implies that the lack of reproducibility in science isn't necessarily due to egregious misconduct, etc., but rather to insufficiently strict statistical standards.

So is this new/interesting, or do I just naively think so because it's not my field?
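
One way to see where the 17-25% figure comes from: under the paper's stated assumption of 50/50 prior odds on the null, a marginally significant result carrying a Bayes factor of roughly 3 to 5 in favor of the alternative (a range back-solved from the quoted percentages, not taken from the paper's tables) leaves the null with posterior probability between 1/4 and 1/6. A minimal sketch:

```python
# Sketch of the arithmetic, assuming 1:1 prior odds on the null (as the paper
# does) and a Bayes factor of 3-5 for the alternative at p ~ 0.05 (back-solved
# from the quoted 17-25%, not taken from the paper itself).
def p_null_given_significant(bayes_factor_alt, prior_odds_null=1.0):
    """Posterior probability of the null after a marginally significant result."""
    posterior_odds_null = prior_odds_null / bayes_factor_alt
    return posterior_odds_null / (1 + posterior_odds_null)

for bf in (3.0, 5.0):
    print(f"BF = {bf:.0f}: P(null | significant) = {p_null_given_significant(bf):.0%}")
# BF = 3: 25%; BF = 5: 17%
```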

Replies from: gwern, Lumifer
comment by gwern · 2013-11-13T02:47:00.043Z · LW(p) · GW(p)

Not a big deal. The estimate you're impressed by can be derived from power and prior odds, as in Ioannidis's famous paper, and it's similar to Leek's estimates from p-value distributions. And the recommendations baffle me - tighten alpha?! P-value hacking is part of how we got here in the first place!

Replies from: hyporational
comment by hyporational · 2013-11-13T03:47:44.510Z · LW(p) · GW(p)

Is there a lower hanging fruit you have in mind?

Replies from: gwern
comment by gwern · 2013-11-13T15:56:26.121Z · LW(p) · GW(p)

I don't know any easy solutions to the low replication rate of many areas right now. It seems to be fundamentally a systematic problem of incentives. Even the easiest and most basic remedies, like clinical trial registries, are not being enforced, so it's hopeless to expect reforms like making all studies well-powered. I do think that tightening alpha is unlikely to fix the problems and is likely to backfire by making things worse, rewarding cheaters and punishing honest researchers: the smaller the p-value required, the more you reward people who can run hundreds of analyses to get a p-value under the threshold and the more you punish honest researchers who did one analysis and stuck with it.

comment by Lumifer · 2013-11-13T02:33:24.425Z · LW(p) · GW(p)

and concludes that 17-25% of studies with a p-value acceptance threshold of 0.05 will be wrong

That's not what the dude concludes.

To quote the article itself (emphasis mine), "Although it is difficult to assess the proportion of all tested null hypotheses that are actually true, if one assumes that this proportion is approximately one-half, then these results suggest that between 17% and 25% of marginally significant scientific findings are false."

comment by Locaha · 2013-11-11T19:23:37.132Z · LW(p) · GW(p)

I'm planning to do a series of posts of myself systematically reading the Sequences and commenting on them. Has anyone done this before?

Replies from: somervta, Nisan
comment by Nisan · 2013-11-12T17:44:15.907Z · LW(p) · GW(p)

Lukeprog did something like this on his blog.

comment by ChrisHallquist · 2013-11-09T22:56:29.024Z · LW(p) · GW(p)

Trying to find a link I saw about CFAR publishing some preliminary research on rationality techniques, including a finding that a technique they expected to work didn't actually work. Does anyone know what I'm talking about? My Google-fu is failing me, to the point that I'm wondering if I'm imagining it.

Replies from: JayDee
comment by fubarobfusco · 2013-11-09T18:31:02.345Z · LW(p) · GW(p)

There is an atheist argument, "Religious people are only religious because they want to control other people or are controlled by them. Religion is a system of authoritarian control."

There is a religious argument, "Atheists are only atheists because they want to rebel against God. Atheism is an act of rebellion."

Are these extensionally equivalent?

Are there other common arguments from opposed viewpoints that pair up like this?

Replies from: Manfred, FiftyTwo, Emile
comment by Manfred · 2013-11-09T19:22:09.137Z · LW(p) · GW(p)

The "control" argument predicts more specific things than the "rebellion" argument, and so is a more useful hypothesis. But then again, it's not the whole story at all (desire for community, actual belief, glaring cognitive biases), and once you start inserting caveats the testability goes way down. So I'd say neither argument is worth making.

Replies from: ChristianKl
comment by ChristianKl · 2013-11-09T21:13:27.175Z · LW(p) · GW(p)

The "control" argument predicts more specific things than the "rebellion" argument, and so is a more useful hypothesis.

Actually a rebellion argument also predicts something. It would predict that atheists also rebel against other social norms.

Replies from: Manfred
comment by Manfred · 2013-11-09T21:25:06.103Z · LW(p) · GW(p)

Because the competing hypothesis ("atheists are willing to state a true thing even when most of society disagrees") also predicts some degree of general rebelliousness, I think the prediction is more about pointless and self-destructive behaviors.

And if atheists are just allowed to be tricked by the devil, then I don't know how that pans out into other behaviors.

comment by FiftyTwo · 2013-11-11T19:58:55.706Z · LW(p) · GW(p)

I don't think it's accurate to describe them as an opposite pair; rather, they both share the same premise (people consider control important/motivating) and derive different conclusions from it.

You could generate an arbitrarily large number of other predictions from that premise, e.g. Greens support green policies because they want to control people; Blues support blue policies because they don't like the idea of being controlled.

comment by Emile · 2013-11-09T23:31:45.544Z · LW(p) · GW(p)

I think a lot of disagreement about religion isn't really about the metaphysical claims, but rather about whether pastors/priests should have more or less influence on people compared to teachers, scientists, writers ... so the people who tend to agree with the pastors' values worry about those values getting lost, and so fret about atheism and rebellion. Seen like that, the disagreement is about authoritarian control vs. rebellion, and the "does god exist" thing is just tribal flag-waving.

comment by ArisKatsaris · 2013-11-09T02:22:36.732Z · LW(p) · GW(p)

A scenario which occurred to me and I found strange at first glance: Consider a fair coin, and two people -- Alice who is 99.9% sure the coin is fair and who can update on evidence like a fine Bayesian, and Bob who says he's perfectly sure the coin is biased to show heads and does not update on the evidence at all.

Nonetheless, the perfectly correct Alice (who effectively needs to choose randomly and might as well always say 'heads') and the perfectly incorrect Bob (who always says 'heads' because he's always certain that'll be the correct answer) have the same chance (50%) of correctly predicting the next coin toss. Even when the experiment is repeated multiple times, its progress further confirming to Alice that she is right to believe the coin fair, Alice's predictive ability isn't improved over non-updating Bob's on a toss-by-toss basis.

I found that initially perplexing -- If we consider accuracy alone, Alice's more accurate beliefs can only be perceived if she's allowed to make predictions over large patterns (e.g. she'd expect a roughly equal number of heads to tails). If she's not given that ability, and if a third party is only told the number of times each of the participants were correct in their guesses, they couldn't tell who is who.

One more thing that distinguishes them: If Alice and Bob were allowed to bet on their guesses, Alice would accept only favorable odds, and Bob would soon go bankrupt...
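
A small simulation makes the contrast concrete (an illustrative sketch, not part of the original scenario beyond what's described above): toss by toss the two are indistinguishable, and the difference only surfaces once Bob acts on his certainty by accepting unfavorable odds.

```python
# Alice and Bob both call heads on every toss; only betting separates them.
import random

random.seed(0)
tosses = [random.choice("HT") for _ in range(10_000)]

# Toss-by-toss accuracy: identical, ~50% for both.
alice_hits = sum(t == "H" for t in tosses)  # Alice picks H arbitrarily
bob_hits = sum(t == "H" for t in tosses)    # Bob picks H out of certainty
print(alice_hits / len(tosses), bob_hits / len(tosses))

# Bob, certain of heads, happily accepts 2:1 against: win 1, lose 2.
# Alice, believing the coin fair, refuses this bet (EV = -0.5 per toss).
bob_bankroll = 1000
for t in tosses:
    if bob_bankroll <= 0:
        break  # ruin
    bob_bankroll += 1 if t == "H" else -2
print(bob_bankroll)  # drifts to ruin; Alice's bankroll is untouched
```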

Replies from: Kaj_Sotala, RolfAndreassen, Manfred, passive_fist
comment by Kaj_Sotala · 2013-11-09T07:26:11.394Z · LW(p) · GW(p)

Doesn't seem very strange to me. For any (realistic) situation, there are any number of irrelevant false beliefs that you could have while still managing to predict the result correctly. Or even relevant false beliefs that nonetheless produced the right prediction: e.g. a tribe that believed in spirits might believe that sexual intercourse attracted a disembodied spirit into a woman's body and caused it to grow a new body for itself, which would be false but still lead to the correct prediction of (intercourse -> pregnancy).

comment by RolfAndreassen · 2013-11-09T03:39:31.648Z · LW(p) · GW(p)

The case of a fair coin seems particularly bad for Alice, being as it were maximally entropic.

comment by Manfred · 2013-11-09T19:35:03.956Z · LW(p) · GW(p)

The difference between them becomes apparent once they start betting on other things, like the number of tails in a series of 10 coinflips. The question is: what is special about betting on heads vs. tails of a fair coin that doesn't allow Alice to do any better than Bob?

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-11-09T22:23:17.366Z · LW(p) · GW(p)

A fair coin is maximally entropic. There is no skill that will let you do anything with sheer chaos.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-09T23:18:04.110Z · LW(p) · GW(p)

I think it is better to say that the bet on offer is fair. It is not a property of just the coin, but also of the bet. We do not notice that there is a choice of bet because it is even odds (which corresponds to max ent), but for any weighted coin there is a corresponding fair bet.

Fair bets do have lots of special properties, but we would have the same situation if a correct choice of tails paid 1 and a correct choice of heads paid 2: Alice and Bob would both always bet H. (except in the 1/1000 chance that we start with 10 Ts and Alice updates wrongly; but the asymptotics are the same)
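
A minimal sketch of that correspondence (illustrative, with hypothetical payoff numbers): a bet is balanced exactly when probability times payoff matches on both sides, at which point accurate beliefs about the coin give no edge in choosing a side.

```python
# Hypothetical payoffs: a correct call of H pays payoff_h, a correct call of
# T pays payoff_t, an incorrect call pays nothing. The bet is balanced when
# p * payoff_h == (1 - p) * payoff_t, so belief accuracy buys no edge.
from math import isclose

def best_call(p_heads, payoff_h, payoff_t):
    ev_h, ev_t = p_heads * payoff_h, (1 - p_heads) * payoff_t
    if isclose(ev_h, ev_t):
        return "indifferent"
    return "H" if ev_h > ev_t else "T"

print(best_call(0.5, 1, 1))  # fair coin, even payoffs: indifferent
print(best_call(0.5, 2, 1))  # fair coin, uneven payoffs: always H
print(best_call(2/3, 1, 2))  # weighted coin, its corresponding fair bet: indifferent
```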

comment by passive_fist · 2013-11-09T06:52:56.484Z · LW(p) · GW(p)

I think you're assuming that Alice has to pick H or T randomly and then ask the third party if it's correct. But she doesn't have to do that. She can just ask the third party whether it's H, each time. Over time it will be confirmed that the coin is fair.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-11-09T15:30:23.742Z · LW(p) · GW(p)

Yes, but my point was that her knowledge that the coin is fair doesn't help her improve her guesswork on the next toss over Bob, and someone judging her on the basis of her toss-by-toss successes wouldn't be able to ascertain that she has more accurate beliefs than Bob...

comment by Tenoke · 2013-11-09T10:38:49.116Z · LW(p) · GW(p)

Even less important than it was last week, but if anyone wants to come out and tell me why they went on a mass-downvoting spree on my comments again, please feel free to do so.

Replies from: hyporational
comment by hyporational · 2013-11-09T12:12:48.654Z · LW(p) · GW(p)

What's your probability estimate on that happening?

Replies from: Tenoke
comment by Tenoke · 2013-11-09T12:24:52.839Z · LW(p) · GW(p)

A whole 5-10%.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-11-09T16:07:20.893Z · LW(p) · GW(p)

How did you get this number? Is it lower or higher than Laplace's rule of succession would suggest? Have you ever seen such a comment work?

I have once seen such a comment produce an admission, but I don't think it was very productive. In fact, I think the two people disagreed on what happened.


added: maybe you are distinguishing between your explicitly asking the person to come forward and everyone else's complaining to the general public, with your method a priori better, but untested. From my observation of this working once out of fifty times, Laplace's rule of succession ((1+1)/(50+2)) tells me 4%, compatible with your higher 5-10%.

Replies from: Tenoke
comment by Tenoke · 2013-11-11T19:57:23.518Z · LW(p) · GW(p)

I also PMed some people, which I thought would increase the chance of them coming out by maybe 1-2%.

comment by niceguyanon · 2013-11-08T21:38:35.944Z · LW(p) · GW(p)

Not sure what to label or call this thinking error I had recently, but it seems as if it might come up more often, although I cannot come up with another example for now.

I know someone who signed up for a marathon, and I was to be part of the coordination of getting her to the official transportation area. After signing up online she was able to choose from a set of departure times for the bus taking runners to the start line. Her wave was set to start at 10:15 a.m. She wanted to select the latest possible departure, to avoid idle time standing around trying to keep warm, and to her surprise the latest departure was 7:30 a.m. The ride to the start line is short, so that would mean at least 2 hours of standing around in the cold. Why? It was rather annoying for both of us. It just seemed terribly inefficient. Of course, I am sure the organizers have their reasons. So I started to wonder about what those reasons might be. Take a minute to think up some of your own.

I came up with some satisfactory reasons why 7:30 is the last bus out:

  • Traffic logistic reasons – shutting down a major city is a big job, crowd control and tight timelines are good things.

  • Later start – Daylight savings gave everyone an "extra" hour and this was already considered a late marathon anyway, 7:30 a.m. doesn't sound that bad.

  • Security

For a few days I was happy with my reasoning, but this problem, although trivial, nagged at me. On the morning of marathon day something went wrong, and the 7:30 a.m. departure had probably already left. I cursed the marathon, and we both wondered, yet again, why the hell it was so early to begin with. A two-hour wait for her would be an almost three-hour wait for those in waves later than hers. This absolutely screws over the people in the last wave, even if they caught the last bus. Why didn't they have a later option, so that the poor person starting at 10:45 isn't a Popsicle before they start? That way, if she was late and there was still room, she could catch a ride outside of her scheduled transportation and still make the race with plenty of time to spare. We were frustrated, but what else could we do but try anyway? There was probably a non-advertised "real" last departure to catch all the people who missed the last departure, but even so, I just didn't get it. The happy ending is that she was able to make her race. And once we got there, I figured out why 7:30 a.m. was the last departure.

7:30 isn't the last departure; it never was. The system that processed the runners knew their wave start times and gave them appropriate, albeit early, departure times. People in later waves received later choices. We both had assumed that just because we were offered 7:30 a.m. that had to mean that was the last time for everyone. It was not. And suddenly everything made sense. They wanted every wave to be at least 2 hours early, and restricted the choices so that not everyone would cram into the last slot.

I was angry with myself for not catching something so simple. I was asking the wrong damn question. I had made an assumption that fundamentally set me on the wrong path from the beginning, and I was never going to solve it no matter how reasonable my thinking was, because my original premise was false. It is especially infuriating that I had made a mental effort to think outside the box when thinking about this problem, yet I missed this completely. Crap, it seems like I needed to think outside of outside of the box. What else have I missed? There are probably many times in life where I am trying to understand people's motives and completely fail. I guess we can file this under the "jumping to conclusions" thinking error?

Replies from: None, Gurkenglas
comment by [deleted] · 2013-11-11T06:11:42.010Z · LW(p) · GW(p)

We both had assumed that just because we were offered 7:30 a.m. that had to mean that was the last time for everyone.

Well, you applied a heuristic (assuming that everyone else had the same experience you did), and the heuristic turned out to be wrong in this case. One of the following must be true:

  • This heuristic isn't a good heuristic to follow in general.
  • This heuristic is good to follow in general, but there's something that should have indicated it would be wrong here.
  • This heuristic is good to follow in general, and there isn't something that should have indicated it would be wrong here.

I'd start by figuring out which one it is.

comment by Gurkenglas · 2013-11-08T22:48:42.178Z · LW(p) · GW(p)

Why wouldn't they want to cram everyone into the last slot? It's the same number of people transported per unit of time, only the starting and stopping times of the runs change, and everyone has to wait 2 hours in the cold or be relatively rewarded for being late.

Replies from: niceguyanon
comment by niceguyanon · 2013-11-09T05:17:09.736Z · LW(p) · GW(p)

Why wouldn't they want to cram everyone into the last slot? It's the same number of people transported per unit of time, only the starting and stopping times of the runs change,...

Presumably because there is a maximum number of people that can be transported per given slot. 10k+ people per wave could not possibly fit into three departure times. And as the slots filled up, your choices would be further restricted to available but undesirable earlier departures.

comment by ThrustVectoring · 2013-11-11T04:58:51.764Z · LW(p) · GW(p)

I'm starting practice drills for stenographic typing. The software (Plover) and the theory/typing drills (I'm using http://qwertysteno.com/Home/) are available for free, and the hardware is cheap (and I already have it).

What I'm really curious about, though, is the value I can get out of roughly doubling my typing speed from 80 WPM to 160. There's the time saved, but that's offset by the time spent learning steno. Really, the big benefit is time-shifting the work of typing out English words from "in the middle of having a thought" to the stenotyping drills.

And I have no goddamn idea how to estimate the value of that, since I've only ever been able to get my thoughts down at roughly 80 WPM. I'm hoping it'll make lower-value composition worth doing. An alternative valuation is to compare the salaries of professional stenographers and administrative assistants.
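
The raw time-saved part, at least, is easy to sketch (ignoring the time-shifting benefit, which is the hard bit to value). Every input below is a guess:

    # Back-of-envelope payoff estimate; all the inputs are assumptions.
    current_wpm = 80
    target_wpm = 160
    words_per_day = 5_000   # assumed daily typing volume
    learning_hours = 150    # assumed practice needed to reach 160 WPM

    minutes_saved = words_per_day / current_wpm - words_per_day / target_wpm
    days_to_break_even = learning_hours * 60 / minutes_saved
    print(f"~{minutes_saved:.0f} min/day saved; break-even in ~{days_to_break_even:.0f} days")

On those guesses it pays for itself within a year, but the answer swings wildly with the inputs.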

I think speed-reading practice can be subjected to a similar analysis - it moves work from time-critical contexts to non-critical ones, at the expense of the next best thing you could be doing instead (obvious candidate: reading for fun).

Replies from: None, hyporational, bbleeker, FiftyTwo
comment by [deleted] · 2013-11-11T06:18:00.414Z · LW(p) · GW(p)

Personally, I estimate the value of learning to type faster at approximately zero, because I can type faster (about 70 WPM) than I can decide what I want to type. How much time do you spend wishing you were able to type faster, because your fingers aren't keeping up with your brain?

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-11-11T14:26:21.324Z · LW(p) · GW(p)

It's less a question of average composition speed (deciding what to write), and more a question of how much I'm keeping in memory. With a slower typing speed, I have to keep more in memory about how I want to finish the thoughts I'm having, and the process involves more difficulty and frustration.

In other words, composition isn't a marathon, but a series of sprints. Each sprint is a race to get the thoughts you have out of short-term memory and into storage. You'd probably find your composition speed increases with your typing speed, as you can focus on the next thing to write rather than on remembering what you've already decided to write.

And I just thought of another way to estimate it - think of the difference in your willingness and ability to write things on a phone or tablet (40 wpm) versus a keyboard (80 wpm) and extrapolate.

comment by hyporational · 2013-11-12T04:45:57.027Z · LW(p) · GW(p)

What I'm really curious about, though, is the value I can get out of roughly doubling my typing speed from 80 WPM to 160.

You can already talk at that speed or faster. Why not invest in a speech recognition program? Even if you think speech recognition isn't up to par yet, it will be in a few years. You could at least test whether you get any benefit from the increased speed using speech recognition, before you invest time in learning stenotyping.

I'm a doctor, so I dictate a lot. The main advantage is quickly recording information I already have. I don't think there's much speed gain when you're recording and coming up with stuff at the same time.

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-11-12T05:36:07.925Z · LW(p) · GW(p)

I'm an extremely visual thinker, and have a strong preference for communicating by typing. Very, very visual - to the point where I notice myself having difficulty expressing myself verbally. I do much better without the pressure to keep a verbal continuity going, when I can allow myself to backtrack and edit as I go without mucking up how the communication turns out in the end.

Replies from: hyporational
comment by hyporational · 2013-11-12T05:58:01.148Z · LW(p) · GW(p)

I'm extremely visual too. Learning to dictate effectively was a weird experience, and at first it was significantly slower than typing (80 WPM). A five-minute dictation could take half an hour the first few times. It took a few dozen dictations before I got the gist of it. I bet it was still easier to learn than stenotyping.

These days I roughly visualize the text in my head while I'm dictating. Corrections can't be made as easily on the fly, because the text is produced afterwards by a human rather than in real time by a computer. If you're using a dictation program, you can quickly edit the text on the fly and combine typing and dictation, so the problems you're imagining might be more surmountable than you think.

Of course, there are other downsides to dictation, like the lack of privacy and the strain on your voice, but being able to move around freely is a nice upside. Would you like to be able to express yourself better verbally? You could see this as a chance to learn.

comment by Sabiola (bbleeker) · 2013-11-13T11:22:23.503Z · LW(p) · GW(p)

I use a text expander, a little program called PhraseExpress (basic version is free for non-commercial use). It lets you type a few characters and expands them into a long word or phrase, or corrects typos (like Word's autocorrect, except everywhere - and it can import Word's autocorrect list). It's also very handy for typing special characters. Depending on what you're typing, it could save you a lot of time.
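
The core mechanism is just an abbreviation-to-phrase lookup. A toy sketch of the idea (the abbreviations below are made up, and PhraseExpress itself is configured through its GUI, not code):

    # Toy text-expander lookup; the abbreviations are hypothetical examples.
    EXPANSIONS = {
        "btw": "by the way",
        "addr": "123 Example Street, Springfield",
    }

    def expand(token: str) -> str:
        """Return the expansion for an abbreviation, or the token unchanged."""
        return EXPANSIONS.get(token, token)

    print(expand("btw"))    # 'by the way'
    print(expand("hello"))  # 'hello' (no expansion defined)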

comment by FiftyTwo · 2013-11-11T20:01:27.364Z · LW(p) · GW(p)

Aside: What did you do to reach 80 wpm?

Replies from: ThrustVectoring, lmm
comment by ThrustVectoring · 2013-11-11T20:07:20.073Z · LW(p) · GW(p)

I started with (mostly) correct typing habits, was encouraged to touch-type in elementary school, and used a computer often to do things (video games, forums, etc.). I didn't have to put much deliberate work into learning to type faster - it was more a byproduct of being on the computer all the time.

comment by lmm · 2013-11-13T12:50:44.175Z · LW(p) · GW(p)

I got to 80 WPM in a weekend by switching to Dvorak in software but not hardware, forcing myself to touch-type correctly.

Replies from: hyporational
comment by hyporational · 2013-11-17T05:26:13.205Z · LW(p) · GW(p)

What was your speed with qwerty? One weekend to learn a new layout sounds insanely fast. How did you train?

Replies from: lmm
comment by lmm · 2013-11-17T22:09:33.829Z · LW(p) · GW(p)

I was 60wpm on qwerty; I'd taken a couple of classes several years before, but I hadn't done any practice drills or anything since, just normal typing. I didn't do any specific training; I just typed a lot (it could easily have been nanowrimo or similar, I don't remember), alt-tabbing back and forth with an onscreen layout diagram when I needed to. I agree that it sounds insanely fast, but that's how I remember it going.

comment by ChristianKl · 2013-11-14T14:05:51.587Z · LW(p) · GW(p)

I heard of AutoHotkey a long time ago, but I've only just started using it, and it's extremely useful.

One example is opening Wikipedia with the clipboard contents as the search string. It takes just three lines to assign that task to 'Windows Key + W'. I can't grasp why nobody recommended that we computer science students get proficient with it; it's so useful for automating common tasks.

It"s much easier to get results for learning programming through automating task with autohotkey than it's through learning it with simple python programs that serve no use and are just for pratice.

 ; Win+W: open a Wikipedia search for the current clipboard contents
 #w::
        ; %clipboard% is AutoHotkey's built-in clipboard variable
        Run, https://en.wikipedia.org/w/index.php?search=%clipboard%
        return

I wish someone had told me to get proficient with it years ago.

Replies from: niceguyanon
comment by niceguyanon · 2013-11-14T20:46:55.180Z · LW(p) · GW(p)

It's great that there are already so many useful scripts written by other people that you could use.

comment by Metus · 2013-11-10T22:07:00.777Z · LW(p) · GW(p)

I have a constant feeling that I had a great idea or an important thought just now, or just a few minutes ago. I know I have recurring thoughts - not of the bad kind, mind you - that I deem quite useful, but I am never sure whether this feeling of forgetting refers to those recurring ideas or to something new. Does this kind of thing ring a bell?

Replies from: None, bramflakes
comment by [deleted] · 2013-11-11T06:23:21.322Z · LW(p) · GW(p)

I occasionally get the feeling that I had an important thought just now/a few minutes ago. More often than not, I can remember it by thinking about it for a few seconds.

Replies from: Metus
comment by Metus · 2013-11-11T06:26:03.805Z · LW(p) · GW(p)

Well, that's the usual fix - retrace my steps or think about something else for a few seconds - but in the cases I described, I can't for the life of me recover the thought.

comment by bramflakes · 2013-11-11T16:29:03.373Z · LW(p) · GW(p)

I used to get it while drowsy and slipping in and out of sleep. I attributed it to maybe falling into REM for a few seconds and then getting pulled out.

comment by niceguyanon · 2013-11-14T20:16:25.504Z · LW(p) · GW(p)

So the latest patches from Microsoft this Tuesday crash my web browsers. I'm sure something like this happens to someone every time; consider this a reminder to make sure you have adequate space for system restore points. For some reason, I didn't.

comment by Roxolan · 2013-11-12T19:06:48.117Z · LW(p) · GW(p)

I've announced a meetup but got the day and year wrong (it should be December 14, 2013). Can someone tell me how to fix it, please? I can't figure it out.

[insert obvious joke about meetup topic]

Replies from: efim
comment by efim · 2013-11-12T19:51:29.726Z · LW(p) · GW(p)

On the page of your announcement there should be an "Edit meetup" link. It will let you edit anything you need.

Replies from: Roxolan
comment by Roxolan · 2013-11-12T20:00:23.808Z · LW(p) · GW(p)

Thank you. Problem solved.

comment by Omid · 2013-11-11T15:38:18.146Z · LW(p) · GW(p)

PSA: Sign up for Medfusion (or your region's equivalent) if your doctor offers it.

Yesterday I asked my doctor's nurse a question electronically. I had a symptom and was unsure whether it required a visit to the practice. The nurse responded the next day, saying the symptom was benign and would go away. This saved me a copayment and a trip outside.

Replies from: selylindi
comment by selylindi · 2013-11-11T16:16:58.917Z · LW(p) · GW(p)

Which Medfusion? Google finds several organizations by that name, and all seem like implausible referents to me.

comment by Joshua_Blaine · 2013-11-11T02:29:50.285Z · LW(p) · GW(p)

Is there a LW consensus on the merits of Bitcoin? Namely, is it the optimal place to invest money, especially with regard to mining equipment?

I think the value is liable to increase fairly dramatically over time, and that buying/mining Bitcoins will prove incredibly profitable, but I'd like the input of this community before I decide whether or not to put money toward this venture.

Replies from: gwern, Adele_L, ChristianKl, Vaniver, Izeinwinter
comment by gwern · 2013-11-11T04:16:14.390Z · LW(p) · GW(p)

My general impression is that right now mining is a horrible thing to get involved in: the necessary investment and expertise keep increasing, and there's a big pipeline of already-paid-for ASICs which cannot justify their purchase cost, but for which the least lossy strategy is to run them and recoup as much of the loss as possible (which pushes up the difficulty massively and makes additional capital investment an awful idea). If one wants exposure, buying bitcoins seems like the best approach right now.

comment by Adele_L · 2013-11-11T03:46:28.912Z · LW(p) · GW(p)

Here are some previous discussions of Bitcoin on LW. There doesn't seem to be a clear consensus.

Personally, I find this argument a compelling reason for optimism. I put a toy amount of money into bitcoin several months ago; I'm quite pleased with that experiment and am considering putting more in.

Incidentally, does anyone know if there is a good prediction market site for bitcoins? I know of a few, but I've heard bad things about them.

Replies from: gwern
comment by gwern · 2013-11-11T04:17:02.885Z · LW(p) · GW(p)

Incidentally, does anyone know if there is a good prediction market site for bitcoins? I know of a few, but I've heard bad things about them.

As far as I know, there is not. Betsofbitcoin is completely screwed up, and Predictious is low-volume with a limited number of contracts.

Replies from: Adele_L
comment by Adele_L · 2013-11-11T04:27:55.182Z · LW(p) · GW(p)

That's disappointing. If the main problem with Predictious is low volume, it might be worth using anyway, but the limited contracts really put a damper on its utility.

Replies from: gwern
comment by gwern · 2013-11-11T17:31:57.098Z · LW(p) · GW(p)

If you're interested in simply making some bitcoin, Predictious might be a good idea because the low volume implies mispriced contracts. (Similarly, I think if one carefully studies Betsofbitcoin in detail, it may be possible to make steady profits off it: the rules are so bizarre that there must be inefficiencies.) Another advantage of Predictious is that it's operated by Pixode, which seems to be a reasonably legitimate company (more than one can say for most things in the Bitcoin space).

comment by ChristianKl · 2013-11-11T17:05:19.173Z · LW(p) · GW(p)

Transaction fees make bitcoin impractical as an efficient online currency. Ripple provides a much better way of doing micropayments.

If one wanted to build a router that provides paid access to anyone who comes along, Ripple would be the better technology. The same goes for renting a VPN on demand and similar tasks.

It's much easier to imagine some third-world country shifting from prepaid mobile cards for long-distance money transfers to Ripple than shifting to bitcoins.

Ripple allows an entity in the country to play bank and issue currency. That means a village in Africa where everyone has a smartphone could simply decide that the village government issues currency and demands that taxes be paid in that currency. The village could issue enough currency that its whole economy runs on it.

On the other hand, a village in Africa can't simply switch to bitcoin, because it would have to buy bitcoins at considerable expense, and it doesn't have the money to do so.

Ripple also has the advantage that payments clear much faster than bitcoin payments.

Ripple lets you make payments in dollars or euros if you want to, without the exchange-rate fluctuation risk you have with bitcoin.

Maybe another protocol will eventually improve on Ripple, but I think Ripple is superior to bitcoin for most purposes, so bitcoin won't stand a chance in the long run.

Ripple also has the advantage of a business model that can pay for developers, so I would expect it to get more development hours than bitcoin.

comment by Vaniver · 2013-11-11T14:26:04.772Z · LW(p) · GW(p)

I have not seen a significant LW consensus.

My view is that it's unreasonable to expect that one can time the market effectively, so one should invest in Bitcoin based on the long-term prospects, which are either $0 per coin or hundreds of thousands to a million per coin; in that case, the probability of hitting the upper end is the primary factor of interest.

Unfortunately, my estimate of the probability that it'll take off is roughly linear in the price, which means I don't consider price shifts very informative, and so it's always a hard decision to be in or out.
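
A toy version of that reasoning (all the numbers here are made up for illustration): if the probability of success is proportional to the price, the expected value per dollar invested stays flat as the price moves, which is why the price itself doesn't settle the in-or-out question:

    # Toy model of 'probability of taking off is roughly linear in price'.
    # The upside target and proportionality constant are both invented.
    UPSIDE = 500_000  # assumed $/coin if Bitcoin takes off

    def expected_value(price: float) -> float:
        p = price / 10_000_000  # assumed linear link between price and P(success)
        return p * UPSIDE       # the failure branch contributes ~$0

    for price in (200, 400, 800):
        ev = expected_value(price)
        print(f"price ${price}: EV ${ev:.0f}/coin, EV per dollar = {ev / price:.2f}")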

comment by Izeinwinter · 2013-11-11T14:59:10.639Z · LW(p) · GW(p)

Bitcoin is not an investment at all. Currency is a medium of exchange, not capital equipment, and a currency which consumes real resources to create (in this case, computation) represents a complete misunderstanding of what money is for; it is also inherently worse for the economy than a fiat currency, because all the resources diverted to producing coins are taken from the real economy, making everyone poorer than they have to be. It is possible to make money separating fools from their wealth in this market, but doing so is a negative-sum game overall - you are actively making the world worse when trading in bitcoins - and since no underlying real value is being produced here at all, winning at this market is 100% a game of predicting the psychology of the crowd. Which means losing your shirt is very easy, no matter how clever you are.

So basically, invest in something else. Nearly anything else. Playing poker would have greater net social utility because at least that has some entertainment value to the players.

comment by [deleted] · 2013-11-15T19:58:31.087Z · LW(p) · GW(p)

Hello. I've made a model for the ultimate program, i.e. an "operating system, programming language, programs, applications, and artificial intelligence". The project is open-source. I looked especially for logical people, because there should be a better chance of understanding the model and its importance, which would likely be very low with normal people. It's written in ordinary language, and anyone can read it. It also contains the two principles I found, by which anything can be made in the optimal way. Anyone who wants to read it and maybe give an opinion or help can ask by mail: markshif at Gmail.

Replies from: Lumifer
comment by Lumifer · 2013-11-15T20:33:07.047Z · LW(p) · GW(p)

You don't sound credible.

Post a link, and some people will probably be curious enough to look and opine.

Replies from: None
comment by [deleted] · 2013-11-15T22:12:25.365Z · LW(p) · GW(p)

I already see what kind of rational people there are on this site. :-) You'd think prejudice wouldn't be encouraged in a place like this. To be honest, I didn't expect more. I'm working on making my site, so there's no link. That would be one kind of help I'd like to get. If someone cared about something like this, I'm sure it wouldn't be too much effort for them to write a mail, although it's easier to click on a link. The world is full of credible people. I guess that's the reason it's such a wonderful place.

Replies from: Lumifer
comment by Lumifer · 2013-11-16T00:30:48.691Z · LW(p) · GW(p)

I'm working on making my site, so there's no link.

A link doesn't have to lead to your site. It can lead to Google Docs, Dropbox, Pastebin, etc.

Replies from: None
comment by [deleted] · 2013-11-16T01:02:52.387Z · LW(p) · GW(p)

Shouldn't make a difference to someone serious.

Replies from: Lumifer
comment by Lumifer · 2013-11-16T03:42:33.958Z · LW(p) · GW(p)

Ah. Well, that certainly isn't me.

How are art lessons doing, by the way?