Ethical Diets 2015-01-12T23:38:33.864Z
Meetup : Berkeley LW meetup - CFAR test session 2014-06-03T18:37:26.222Z
Berkeley meetup May 28 outdoors in Ohlone Park 2014-05-28T16:42:44.404Z


Comment by pcm on Feedback on LW 2.0 · 2017-10-04T20:27:37.586Z · LW · GW

There's something about reading the new style that makes me uncomfortable, and prompts me to skim some posts that I would have read more carefully on the old site. I'm not too clear on what causes that effect. I'm guessing that some of it is the excessive amount of white space, causing modest sensory overload.

Some of it could be the fact that less of a post fits on a single screenful: I probably form initial guesses about a post's value based on the first screenful, and putting less substance on that first screenful leads me to guess that the post has less substance. Or maybe I associate large fonts with click-baity sites, and small fonts with more intellectual sites.

The editor used for writing comments is really annoying. E.g. links expand to include unrelated text, or unexpectedly stop being links.

I want a way to enter HTML and/or markdown that I can cut and paste after writing it in an editor with which I'm more comfortable. Without that, I'll probably give up on writing comments that are more thoughtful than Facebook comments.

I presume the new karma system will be an important improvement. I'm unhappy that it's bundled with such large changes to aspects of the UI that were working adequately.

Comment by pcm on The Outside View isn't magic · 2017-09-29T18:29:15.940Z · LW · GW

Most of your post is good, but you're too eager to describe trends as mysterious.

Also, your link to "a previous post" is broken.

Moore's law appears to be a special case of Wright's Law. I.e. it seems well explained by experience curve effects (or possibly economies of scale).
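A minimal sketch of the experience-curve relationship behind Wright's law (the 20% learning rate and the numbers are illustrative assumptions, not fitted to real transistor data):

```python
import math

def wrights_law_cost(initial_cost, cumulative_units, learning_rate=0.2):
    """Unit cost after producing `cumulative_units`, with cost dropping by
    `learning_rate` (e.g. 20%) per doubling of cumulative production."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Each doubling of cumulative output cuts unit cost by 20%:
c1 = wrights_law_cost(100.0, 1)  # 100.0
c2 = wrights_law_cost(100.0, 2)  # 80.0
c4 = wrights_law_cost(100.0, 4)  # 64.0
```

If cumulative production grows exponentially over time, as it has for transistors, cost then falls exponentially over time, which is the Moore's-law pattern.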

Secondly, we have strong reasons to suspect that there won't be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These phenomena have decent local explanations that we already roughly understand.

I don't see these strong reasons.

Age of Em gives some hints (page 14) that the last three transitions may have been caused by changes in how innovation diffused, maybe related to population densities enabling better division of labor.

I think Henrich's The Secret of our Success gives a good theory of human evolution which supports Robin's intuitions there.

For the industrial revolution, there are too many theories, with inadequate evidence to test them. But it does seem possible that the printing press played a role that's pretty similar to Henrich's explanations for early human evolution.

I don't know much about the causes of the agricultural revolution.

Comment by pcm on Open thread, September 11 - September 17, 2017 · 2017-09-15T14:50:05.851Z · LW · GW

I'm sometimes able to distinguish different types of feeling tired, based on what my system 1 wants me to do differently: sleep more, use specific muscles less, exercise more slowly, do less of a specific type of work, etc.

Comment by pcm on One-Magisterium Bayes · 2017-07-03T22:46:04.719Z · LW · GW

Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence.

If I try to apply this to protein folding instead of intelligence, it sounds really strange.

Most people who make useful progress at protein folding appear to use a relatively tool-boxy approach. And they all appear to believe that quantum mechanics provides a very good theory of protein folding. Or at least it would be, given unbounded computing power.

Why is something similar not true for intelligence?

Comment by pcm on Becoming a Better Community · 2017-06-07T17:00:44.671Z · LW · GW

I agree with most of what you said. But in addition to changing the community atmosphere, we can also change how guarded we feel in reaction to a given environment.

CFAR has helped me be more aware of when I'm feeling guarded (againstness), and has helped me understand that those feelings are often unnecessary and fixable.

Authentic relating events (e.g. Aletheia) have helped to train my subconscious to feel more safe about feeling less guarded in contexts such as LW meetups.

There's probably some sense in which I've lowered my standards, but that's mostly been a fairly narrow sense of that term: some key parts of my system 1 have become more willing to bring ideas to my conscious attention. That has enabled me to be less guarded, with essentially no change in the intellectual standards that I use at a system 2 level.

Comment by pcm on Book recommendation requests · 2017-06-02T19:22:29.174Z · LW · GW

It isn't designed to describe the orthodox view. I think the ideas it describes are moderately popular among mainstream experts, but probably some experts dispute them.

Comment by pcm on Book recommendation requests · 2017-06-02T15:32:27.393Z · LW · GW

I enjoyed Shadow Syndromes, which is moderately close to what you asked for.

Comment by pcm on Book recommendation requests · 2017-06-02T15:30:02.927Z · LW · GW

Henrich's The Secret of our Success isn't exactly about storytelling, but it provides a good enough understanding of human evolution that it would feel surprising to me if humans didn't tell stories.

Comment by pcm on Bad intent is a disposition, not a feeling · 2017-05-02T15:45:20.822Z · LW · GW

I'd guess the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most but not all). The difference was more that in an EA context, people worried that he would shift money away from EA-aligned charities, but on LW he only wasted people's time.

Comment by pcm on Stupid Questions May 2017 · 2017-04-26T17:19:41.304Z · LW · GW

Some of what a CFAR workshop does is convince our system 1's that it's socially safe to be honest about having some unflattering motives.

Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to solve at the workshop. That's different from "you don't have to do anything".

Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.

Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.

Comment by pcm on Open thread, Apr. 24 - Apr. 30, 2017 · 2017-04-26T15:07:32.813Z · LW · GW

You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.

Comment by pcm on Effective altruism is self-recommending · 2017-04-25T02:21:47.406Z · LW · GW


We are also more deeply examining the original evidence of effectiveness for VillageReach’s pilot project. Our standards for evidence continue to rise, and our re-examination has raised significant questions that we intend to pursue in the coming months.

I had donated to VillageReach due to GiveWell's endorsement, and I found it moderately easy to notice that they had changed more than just their room-for-funding conclusion.

Comment by pcm on What's up with Arbital? · 2017-03-30T16:37:00.640Z · LW · GW

how much should I use this as an outside view for other activities of MIRI?

I'm unsure whether you should think of it as a MIRI activity, but to the extent you should, then it seems like moderate evidence that MIRI will try many uncertain approaches, and be somewhat sensible about abandoning the ones that reach a dead end.

Comment by pcm on Don't Fear the Reaper: Refuting Bostrom's Superintelligence Argument · 2017-03-01T18:34:57.582Z · LW · GW

I think your conclusion might be roughly correct, but I'm confused by the way your argument seems to switch between claiming that an intelligence explosion will eventually reach limits, and claiming that recalcitrance will be high when AGI is at human levels of intelligence. Bostrom presumably believes there's more low-hanging fruit than you do.

Comment by pcm on Willpower Depletion vs Willpower Distraction · 2017-01-12T18:34:45.481Z · LW · GW

I have a relevant blog post on models of willpower.

Comment by pcm on Stuart Ritche reviews Keith Stanovich's book "The rationality quotient: Toward a test of rational thinking" · 2017-01-12T18:27:38.614Z · LW · GW

I reviewed the book here.

Comment by pcm on Ideas for Next Generation Prediction Technologies · 2016-12-25T19:51:30.732Z · LW · GW

I subsidized some InTrade contracts in 2008. See here, here and here.

Comment by pcm on Open thread, Dec. 05 - Dec. 11, 2016 · 2016-12-07T17:10:17.919Z · LW · GW

See Rosati et al., The Evolutionary Origins of Human Patience: Temporal Preferences in Chimpanzees, Bonobos, and Human Adults, Current Biology (2007). Similar to the marshmallow test.

Comment by pcm on Open thread, Oct. 10 - Oct. 16, 2016 · 2016-10-10T16:44:24.751Z · LW · GW

I suspect attempted telekinesis is relevant.

Comment by pcm on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-27T15:04:58.135Z · LW · GW

See ontological crisis for an idea of why it might be hard to preserve a value function.

Comment by pcm on Open thread, Sep. 19 - Sep. 25, 2016 · 2016-09-22T19:23:33.292Z · LW · GW

My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.

Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.

My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.

Comment by pcm on Open Thread, Aug. 22 - 28, 2016 · 2016-08-24T15:29:20.820Z · LW · GW

I expect that MIRI would mostly disagree with claim 6.

Can you suggest something specific that MIRI should change about their agenda?

When I try to imagine problems for which imperfect value loading suggests different plans from perfectionist value loading, I come up with things like "don't worry about whether we use the right set of beings when creating a CEV". But MIRI gives that kind of problem low enough priority that they're acting as if they agreed with imperfect value loading.

Comment by pcm on Open thread, Jul. 18 - Jul. 24, 2016 · 2016-07-22T18:18:58.597Z · LW · GW

No, mainly because Elon Musk's concern about AI risk added more prestige than Thiel had.

Comment by pcm on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-15T18:45:26.317Z · LW · GW

There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics).

When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right?

Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident.

Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.

Comment by pcm on Open thread, Jun. 13 - Jun. 19, 2016 · 2016-06-17T18:16:48.189Z · LW · GW

See Seasteading. No good book on it yet, but one will be published in March (by Joe Quirk and LWer Patri Friedman).

Comment by pcm on Open Thread May 30 - June 5, 2016 · 2016-05-30T18:15:08.262Z · LW · GW

I suggest reading Henrich's book The Secret of our Success. It describes a path to increased altruism that doesn't depend on any interesting mutation. It involves selection pressures acting on culture.

Comment by pcm on Perception of the Concrete vs Statistical: Corruption · 2016-03-24T02:04:11.919Z · LW · GW

There used to be important differences between stocks and futures (back when futures exchanges used open outcry) that (I think) enabled futures brokers to delay decisions about which customer got which trade price.

Comment by pcm on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-09T19:36:09.714Z · LW · GW

It has nearly the opposite effects for ideas I haven't yet bet on but might feel tempted or obligated to bet on.

The bad effects are weaker if I can get out of the bet easily (as is the case on a high-volume prediction market).

Comment by pcm on Why CFAR's Mission? · 2016-01-26T19:41:58.885Z · LW · GW

Peer pressure matters, and younger people are less able to select rationalist-compatible peers (due to less control over who their peers are).

I suspect younger people have short enough time horizons that they're less able to appreciate some of CFAR's ideas that take time to show benefits. I suspect I have more intuitions along these lines that I haven't figured out how to articulate.

Maybe CFAR needs better follow-ups to their workshops, but I get the impression that with people for whom the workshops are most effective, they learn (without much follow-up) to generalize CFAR's ideas in ways that make additional advice from CFAR unimportant.

Comment by pcm on Why CFAR's Mission? · 2016-01-22T19:56:59.648Z · LW · GW

I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.

Comment by pcm on [Link] Introducing OpenAI · 2015-12-13T19:40:42.713Z · LW · GW

Another factor to consider: If AGI is 30+ years away, we're likely to have another "AI winter". Saving money to donate during that winter has some value.

Comment by pcm on Open thread, Oct. 26 - Nov. 01, 2015 · 2015-10-27T04:18:11.527Z · LW · GW

Comment by pcm on Open thread, Oct. 12 - Oct. 18, 2015 · 2015-10-14T18:35:49.648Z · LW · GW

I've felt that lack of curiosity a fair amount over the past 5-10 years. I suspect the biggest change that reduced my curiosity was becoming financially secure. Or maybe some other changes which made me feel more secure.

I doubt that I ever sought knowledge for the sake of knowledge, even when it felt like I was doing that. It seems more plausible that I had hidden motives such as the desire to impress people with the breadth or sophistication of my knowledge.

LessWrong attitudes toward politics may have reduced some aspects of my curiosity by making it clear that my curiosity in many areas had been motivated by a desire to signal tribal membership. That hasn't enabled me to redirect curiosity toward more productive areas, but I'm probably better off without those aspects of curiosity.

Comment by pcm on Vegetarianism Ideological Turing Test! · 2015-08-16T02:51:45.235Z · LW · GW

For Omnivores:

  • Do you think the level of meat consumption in America is healthy for individuals? Do you think it's healthy for the planet?

The level is healthy for individuals. But that includes way too much meat that has been processed dangerously (bacon, sausage), and not enough minimally processed seafood.

It's not good for the planet. I want to deal with that by uploading my mind. Some large changes of that nature will make current meat production problems irrelevant in a few decades.

  • How do you feel about factory farming? Would you pay twice as much money for meat raised in a less efficient (but "more natural") way?

Most factory farming (other than for bivalves) produces less healthy meat. I often pay twice as much for pasture-raised chicken/beef. With seafood there's little need to pay extra to get properly raised food.

  • Are there any animals you would (without significantly changing your mind) never say it was okay to hunt/farm and eat? If so, what distinguishes these animals from the animals which are currently being hunted/farmed?

I'm confused about what rules I should use for primates, octopus, and dolphin. But since I haven't had a convenient opportunity to eat any of those for years, I've procrastinated about deciding.

  • If all your friends were vegetarians, and you had to go out of your way to find meat in a similar way to how vegans must go out of their way right now, do you think you'd still be an omnivore?

I would definitely go out of my way for seafood. I don't trust nutrition science enough to tell me how to safely go vegan. Seafood is a good source of B12, high zinc/copper ratios, iodine, and omega-3. I probably wouldn't go much out of my way for chicken, beef, etc.

For Vegetarians:

  • If there was a way to grow meat in a lab that was indistinguishable from normal meat, and the lab-meat had never been connected to a brain, do you expect you would eat it? Why/why not?

"never connected to a brain" doesn't seem like quite the right criterion. I expect there's some technology that would satisfy my ethical criteria (Drexlerian nanotech?), in which case I would eat moderate amounts of meat (if it's not too expensive).

  • Indigenous hunter gatherers across the world get around 30 percent of their annual calories from meat. Chimpanzees, our closest non-human relatives, eat meat. There are arguments that humans evolved to eat meat and that it's natural to do so. Would you disagree? Elaborate.

Evolution creates enormous amounts of suffering. "Natural" should in most contexts be interpreted as amoral or immoral.

Maybe we've evolved to be healthier if we eat some animals, but most of the evidence I've seen suggests that bivalves are a much more effective way of getting the relevant nutrition than cruelly farmed vertebrates.

  • Do you think it's any of your business what other people eat? Have you ever tried (more than just suggesting it or leading by example) to get someone to become a vegetarian or vegan?

Yes, it's my business whether you are cruel to innocent beings who can't defend themselves.

A culture of cruelty can have widespread effects beyond current nonhuman animals. We're on the verge of creating many new forms of digital life. I want to set good precedents for how they are treated.

I haven't actively tried to persuade anyone yet. I feel a little guilty about that, but it seems like a lower priority than x-risks. Also, I've only been vegetarian for seven months. I expect that eventually I'll find a context in which I feel comfortable enough to actively argue for vegetarianism.

  • What do you think is the primary health risk of eating meat (if any)?

Too much of that meat has been processed in ways that create or add new chemicals that we're poorly evolved to handle. E.g. smoking (bacon), and nitrates (sausage).

A less drastic health risk that's harder to avoid comes from mycotoxins on poorly stored grain that factory farmed animals eat.

Comment by pcm on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-11T19:01:31.702Z · LW · GW

Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).

It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).

The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.

Comment by pcm on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-07T15:05:27.168Z · LW · GW

One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio in a direction that implies fewer wars.

Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.

Comment by pcm on Open Thread, Jul. 27 - Aug 02, 2015 · 2015-07-27T19:01:13.255Z · LW · GW

Some of the discussion has moved to CFAR, although that involves more focus on how to get better cooperation between System 1 and System 2, and less on avoiding specific biases.

Maybe the most rational people don't find time to take surveys?

Comment by pcm on Cryonics: peace of mind vs. immortality · 2015-06-24T18:57:14.403Z · LW · GW

Signing up didn't bring me peace of mind, except for brief relief at not having the paperwork on my to-do list.

I've heard other cryonicists report feeling something like peace of mind as a result of signing up, but they appear to be a minority.

Comment by pcm on Open Thread, Apr. 06 - Apr. 12, 2015 · 2015-04-12T19:23:10.259Z · LW · GW

In Chinese grocery stores and restaurants, I see about as much veggie fish/shrimp as veggie beef/chicken, and it tastes about as good. But the veggie fish and shrimp taste less like real fish/shrimp than veggie beef/chicken tastes like real beef/chicken. So it may be that similar effort went into each, and many cultures were less satisfied with the results for fish.

Comment by pcm on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-30T15:55:55.377Z · LW · GW

See discussions of utility monsters. Don't assume that many people here support pure utilitarianism.

Comment by pcm on Calories per dollar vs calories per glycemic load: some notes on my diet · 2015-03-14T21:08:40.642Z · LW · GW

Crickets at $38/pound dry weight are close to being competitive with salmon (more than 3 pounds needed to get the equivalent nutrition). Or $23/pound in Thailand (before high shipping fees), suggesting the cost in the U.S. will drop a bit as increased popularity causes more competition and economies of scale.
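The cost comparison above can be sketched in a few lines; the salmon price here is my assumed figure (roughly $13/pound), not a number from the comment:

```python
# Rough cost parity check: dried crickets vs the salmon needed to get
# equivalent nutrition. Salmon price is an assumption for illustration.
cricket_price_us = 38.0     # $/pound dry weight, from the comment
cricket_price_thai = 23.0   # $/pound in Thailand, before shipping
salmon_price = 13.0         # $/pound, assumed
salmon_equiv_pounds = 3.0   # pounds of salmon per pound of dried cricket

salmon_equiv_cost = salmon_price * salmon_equiv_pounds  # 39.0
# At $38/pound, crickets are roughly at parity with ~$39 of salmon;
# at the Thai price they would be well below it.
```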

Comment by pcm on [LINK] Terry Pratchett is dead · 2015-03-14T20:46:12.909Z · LW · GW

It is sometimes possible to die by refusing to eat/drink. Ben Best has some conflicting claims about how feasible that is with Alzheimer's here and here.

Comment by pcm on [LINK] Terry Pratchett is dead · 2015-03-12T19:43:05.741Z · LW · GW

What evidence do we have about whether cryonics will work for those who die of Alzheimer's?

Comment by pcm on Open thread, Feb. 23 - Mar. 1, 2015 · 2015-02-24T16:20:54.302Z · LW · GW

In many wars, those who fight get a much higher reputation than those who were expected to fight but refused. This has often translated into a reproductive advantage for those who fought. It's not obviously irrational to want that reproductive advantage or something associated with it.

Comment by pcm on Group Rationality Diary, February 15-28 · 2015-02-23T17:38:37.167Z · LW · GW

I started alternate day calorie restriction last month. I expect it to be one of the best lifestyle changes for increasing my life expectancy.

I've become comfortable enough with it that it no longer requires significant willpower to continue. I think I have slightly more mental energy than before I started (but for the first 17 days, I had drastically lower mental energy).

I have a longer post about this on my blog.

Comment by pcm on Open thread, Feb. 16 - Feb. 22, 2015 · 2015-02-22T15:37:28.978Z · LW · GW

Ralph Merkle's cryonics page is a good place to start. His 1994 paper on The Molecular Repair of the Brain seems to be the most technical explanation of why it looks feasible.

Since whole brain emulation is expected to use many of the same techniques, that roadmap (long pdf) is worth looking at.

Comment by pcm on Superintelligence 21: Value learning · 2015-02-05T19:41:18.788Z · LW · GW

I'm unclear on how the probability distribution over utility functions would be implemented. A complete specification of how to evaluate evidence seems hard to do right. Also, why should we expect we can produce a pool of utility functions that includes an adequate one?
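A toy sketch of what a distribution over candidate utility functions might look like, updated by Bayes' rule as evidence arrives. The candidate pool, likelihoods, and evidence are all made up for illustration; the hard parts the comment points at (specifying how to evaluate evidence, and getting an adequate utility function into the pool at all) are exactly what this toy version assumes away:

```python
# Prior over hypothesized utility functions (hypothetical names).
candidates = {
    "u_paperclips": 0.3,
    "u_human_values": 0.4,
    "u_smiles": 0.3,
}

# P(observed evidence | this utility function is the intended one).
# Specifying this likelihood model is itself a major open problem.
likelihood = {
    "u_paperclips": 0.05,
    "u_human_values": 0.6,
    "u_smiles": 0.2,
}

unnormalized = {u: p * likelihood[u] for u, p in candidates.items()}
total = sum(unnormalized.values())
posterior = {u: w / total for u, w in unnormalized.items()}
# The posterior shifts toward "u_human_values", but only because the pool
# happened to contain it and the likelihood model happened to favor it.
```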

Comment by pcm on Ethical Diets · 2015-01-13T19:08:12.433Z · LW · GW

If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.

If we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we're uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.

Comment by pcm on Ethical Diets · 2015-01-13T18:45:41.456Z · LW · GW

This post is definitely an attempt to answer the question "What should I eat?", not "What's the best thing I can do about multipolar takeoff?". I didn't mean to imply that my concerns over multipolar takeoff are the only reason for my change in diet. I focused on that because others have given it too little attention.

I would certainly like to do more to increase respect for property rights, but the obvious approaches involve partisan politics that already attract lots of effort on both sides.

Comment by pcm on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-16T19:21:32.683Z · LW · GW

I suggest Geoffrey Miller's book The Mating Mind. Or search for sexual selection.