Posts

Book review: Human Compatible 2020-01-19T03:32:04.989Z · score: 39 (11 votes)
Another AI Winter? 2019-12-25T00:58:48.715Z · score: 47 (17 votes)
Book Review: The AI Does Not Hate You 2019-10-28T17:45:26.050Z · score: 28 (9 votes)
Drexler on AI Risk 2019-02-01T05:11:01.008Z · score: 32 (16 votes)
Bundle your Experiments 2019-01-18T23:22:08.660Z · score: 19 (8 votes)
Time Biases 2019-01-12T21:35:54.276Z · score: 31 (8 votes)
Book review: Artificial Intelligence Safety and Security 2018-12-08T03:47:17.098Z · score: 30 (9 votes)
Where is my Flying Car? 2018-10-15T18:39:38.010Z · score: 51 (15 votes)
Book review: Pearl's Book of Why 2018-07-07T17:30:30.994Z · score: 70 (26 votes)

Comments

Comment by petermccluskey on The case for lifelogging as life extension · 2020-02-02T03:01:06.010Z · score: 3 (5 votes) · LW · GW

Agreed. I do a weak version of this via handwritten morning pages, which I intend to digitize someday and store in multiple places, probably including Alcor.

Aside: I'm unaware of a good reason to conclude that Alzheimer's destroys the connectome. It seems quite possible for neurons to shrink while remaining connected. Bredesen's work provides some weak evidence that this is what's happening.

Comment by petermccluskey on Homeostasis and “Root Causes” in Aging · 2020-01-10T19:14:12.779Z · score: 1 (1 votes) · LW · GW

I wasn't trying to describe the root causes of aging. I was trying to distinguish between diseases that are avoidable via lifestyle changes, and age-related diseases that are sufficiently determined by our genes that we'll need major new technology to avoid them. The latter include things that impair our immune system and repair mechanisms.

My best guess is that the root causes of aging involve some clock-like processes that have been actively selected for different metabolism at different ages. See Josh Mitteldorf's writings if you want more on that topic.

Comment by petermccluskey on Homeostasis and “Root Causes” in Aging · 2020-01-08T20:05:46.915Z · score: 3 (2 votes) · LW · GW

My main source is Food and Western Disease: Health and Nutrition from an Evolutionary Perspective, by Staffan Lindeberg.

Comment by petermccluskey on Homeostasis and “Root Causes” in Aging · 2020-01-08T05:13:20.583Z · score: 2 (2 votes) · LW · GW

Alzheimer's and several other age-related diseases seem to be related to lifestyle, since they're rare enough in hunter-gatherer tribes that they can't be detected.

The kinds of age-related deaths that are common to all environments are mainly due to frailty, susceptibility to infectious diseases, and cancer.

Comment by petermccluskey on Dec 2019 gwern.net newsletter · 2020-01-06T02:07:44.167Z · score: 6 (2 votes) · LW · GW

Gwern, thank you for your excellent coverage of hydrocephalus and the implications for intelligence.

Comment by petermccluskey on The Thyroid Madness : Core Argument, Evidence, Probabilities and Predictions · 2020-01-01T23:52:06.145Z · score: 3 (2 votes) · LW · GW

I've had some large fluctuations in my thyroid levels, which have prompted me to develop a better understanding of thyroid problems than I had when this post was published.

I agree with a fair amount of this post, and I'm uncertain about a few of the claims it makes.

I agree that hypothyroidism is undertreated, and likely overlaps with CFS.

I think our prior should be that CFS has multiple causes, and that we shouldn't expect to find a single solution to all cases of CFS. It's easy to find other conditions that have multiple causes. E.g. depression can be caused by hypothyroidism, hating your boss, Alzheimer's, etc. I expect that the less successful we've been at treating a syndrome, the more likely it is that it has multiple causes that we're poor at diagnosing.

So this seems plausible:

(2.1) CFS/FMS/Hypothyroidism are extremely similar diseases which are nevertheless differently caused.

I wouldn't even say that hypothyroidism is a single disease - it can clearly be caused by several unrelated underlying problems (too little iodine, too much iodine, autoimmune problems, etc).

I'm unclear on the extent to which conventional medicine disagrees with your position in claims 2.2, 3, and 4, or whether most of the disagreement is over the cost/benefit ratio of treating people who have normal TSH.

  3. TSH seems to be a pretty good measure of whether T4 levels need fixing. The main problem here is the confusion over what threshold to use to decide whether T4 levels are too low. I'm guessing there are two factors contributing to that confusion:
  • doctors are too eager to classify test results so that 95% of patients are considered normal, and to conclude that anything that's normal shouldn't be treated.
  • patients often have trouble detecting the problems associated with hypothyroidism. The symptoms are easy to confuse with aging, depression, etc., and treatment is typically designed to take effect slowly enough that improvements are subtle.

I felt a significant improvement in mood/energy/muscle comfort when I increased my T4 dosage to levels that dropped my TSH from 2.34 to 1.72. But I'm unsure whether I would have noticed, and connected the change to the thyroid levels, if I hadn't previously experienced some large, rapid changes in my thyroid levels.

OTOH, I definitely wasn't aware of the earlier changes associated with the initial rise in my TSH levels from 2.58 to 4.69, which I assume happened gradually over many months.

  4. I'm unclear whether many people deny the existence of problems with T3 levels. There's plenty of disagreement over related issues - maybe because suffering is hard to measure, maybe because of risks associated with T3 treatment, maybe because it's better to find and fix the underlying causes of the problem; I don't have a good idea of what's going on here.

The AACB report of 2012 concluded that the normal range was so narrow that huge numbers of people with no symptoms would be outside it, and this range is not widely accepted for obvious reasons

He estimated its prevalence at 40% in the American population

I'm suspicious of these so-called "obvious reasons". I find it quite possible that nearly 40% of the U.S. population has mild to moderate problems due to low T4 (high TSH) levels. A TSH of 2.5 may be normal in the sense of being fairly common, but I consider it too high to be healthy. I think I would have described myself as having no thyroid symptoms during the first few years that I had what, in hindsight, were clearly moderate problems from hypothyroidism.

We also have evidence that, before iodized salt, thyroid problems were pervasive enough that people had little reason to think of them as abnormal. So it seems like we shouldn't assume away additional problems of this nature.

Such catastrophic failures of the body's central control system CANNOT be evolutionarily stable unless they are extremely rare or have compensating advantages.

I'm not too clear on how many of these effects qualify as "catastrophic failures". The widespread problems with low T4 and high TSH seem to qualify, and I suspect they're due to some moderately recent environmental (dietary?) changes.

The low T3 cases that I'm familiar with look like deliberate decisions by the body to conserve resources. Sometimes it looks like a reaction to calorie restriction or trauma, either of which implies that the body should prepare for a famine, or minimize the burden that a person places on the tribe when trauma impairs their ability to procure food.

It's unclear to me whether it's better to treat the low T3 levels, or to find and treat the underlying cause, but it seems pretty likely that doing one of those things is better than doing nothing.

Comment by petermccluskey on Might humans not be the most intelligent animals? · 2019-12-29T05:20:44.395Z · score: 3 (3 votes) · LW · GW

My point is that the advantage to bigger brains existed long before humans.

This paper suggests that larger brains enable a more diverse diet.

Comment by petermccluskey on Another AI Winter? · 2019-12-25T03:42:25.477Z · score: 5 (4 votes) · LW · GW

I think Robin implied at least a 50% chance, but I don't recall any clear statement to that effect.

Here is the Metaculus page for the prediction.

Comment by petermccluskey on Should I floss? · 2019-12-24T22:46:49.795Z · score: 5 (3 votes) · LW · GW

After I started using a water pik, my dentist went from recommending that I return every 4 months to recommending every 6 months, without my telling the dentist that I was using the water pik.

I observe a clear difference in my gums, which appears to be due to the water pik, but I don't know whether it affects cavities.

Comment by petermccluskey on Might humans not be the most intelligent animals? · 2019-12-24T18:22:28.718Z · score: 10 (6 votes) · LW · GW

The increase in human brain size seems to be due more to an increased ability to get enough calories to fuel it than to the benefits of intelligence. See Suzana Herculano-Houzel's book The Human Advantage for evidence.

Comment by petermccluskey on We run the Center for Applied Rationality, AMA · 2019-12-20T20:54:01.753Z · score: 15 (7 votes) · LW · GW

It's at least as important for CFAR to train people who end up at OpenAI, Deepmind, FHI, etc.

Comment by petermccluskey on Bayesian examination · 2019-12-11T05:28:36.078Z · score: 6 (3 votes) · LW · GW

I'm unclear on the benefits. I can see how it's sometimes faster, but I'm unclear why faster is an important criterion, and it will sometimes be slower: if I quickly generated the first example in the post of 33,33,33,1, then I'd likely slow down a good deal trying to decide which of those 33s to turn into a 30 and which to turn into a 40.

On the other hand, it seems clearly valuable to encourage a habit of having the probabilities sum to 100%. That's not for the scoring rule; it's for developing good intuitions.
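To make the arithmetic concrete, here is a minimal sketch (my own illustration, not from the original post) that normalizes a quick guess like 33/33/33/1 and scores it with a Brier-style quadratic rule; the function names and the adjusted 40/30/29/1 vector are just hypothetical examples.

```python
def normalize(probs):
    """Rescale a quick guess so the probabilities sum to 1."""
    total = sum(probs)
    return [p / total for p in probs]

def brier_score(probs, correct_index):
    """Quadratic (Brier-style) score; lower is better."""
    return sum((p - (1.0 if i == correct_index else 0.0)) ** 2
               for i, p in enumerate(probs))

quick_guess = normalize([33, 33, 33, 1])   # the post's example, as percentages
adjusted    = normalize([40, 30, 29, 1])   # after deciding which 33 to raise

print(brier_score(quick_guess, correct_index=0))  # ~0.667
print(brier_score(adjusted, correct_index=0))     # ~0.534
```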

Comment by petermccluskey on How common is it for one entity to have a 3+ year technological lead on its nearest competitor? · 2019-11-21T22:36:40.757Z · score: 9 (3 votes) · LW · GW

What's the evidence for a technological lead in solar panels? That industry has looked pretty competitive recently.

PayPal's lead in online payments never looked like it had much to do with technology.

Comment by petermccluskey on What are concrete examples of potential "lock-in" in AI research? · 2019-09-11T03:01:10.303Z · score: 1 (1 votes) · LW · GW

  • a decentralized internet versus an internet under the central control of something like AOL.

  • Bitcoin energy usage.

  • electrical systems that provide plugs, voltages, and frequencies which are incompatible between countries.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:21:44.933Z · score: 3 (2 votes) · LW · GW

By bias, I mean the framing effects described in this SlateStarCodex post.

Is there an accusation of violation of existing norms (by a specific person/organization) you see “The AI Timelines Scam” as making?

It's unclear to me whether that post makes such an accusation.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T01:57:57.901Z · score: 4 (9 votes) · LW · GW

How about focusing on the evidence, and on demonstrating good epistemics?

The styles encouraged by peer-review provide examples of how to minimize unnecessary accusations against individuals and accidental appearances of accusations against individuals (but peer-review includes too many other constraints to be the ideal norm).

Compare the paper When Will AI Exceed Human Performance? Evidence from AI Experts to The AI Timelines Scam. The former is more polite, and looks more epistemically trustworthy, when pointing out that experts give biased forecasts about AI timelines (more biased than I would have inferred from The AI Timelines Scam), but may err in the direction of being too subtle.

See also Bryan Caplan's advice.

Raemon's advice here doesn't seem 100% right to me, but it seems pretty close. Accusing a specific person or organization of violating an existing norm seems like something that ought to be kept quite separate from arguments about what policies are good. But there are plenty of ways to point out patterns of bad behavior without accusing someone of violating an existing norm, and I'm unsure what rules should apply to those.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T00:33:04.389Z · score: 15 (5 votes) · LW · GW

I liked most of this post a lot.

But the references to billions of dollars don't feel quite right. The kind of trust that jessicata and Benquo seem to want sometimes happens in small (e.g. 10-person) companies, and almost always gets destroyed by politics well before the business grows to 1000 people. The patterns that I've seen in business seem better explained by the limits on how many people's epistemics I can evaluate well than by the amount of money involved.

LessWrong and the rationalist/EA movements seem to have grown large enough that I'd expect less trust than exists in a good 10-person group, based purely on the size.

Comment by petermccluskey on What are the best resources for examining the evidence for anthropogenic climate change? · 2019-08-06T18:56:29.207Z · score: 2 (2 votes) · LW · GW

I recommend starting with the original greenhouse effect forecasts that were made over a century ago (by someone who expected global warming to be desirable). That model still looks pretty good, except that the CO2 emission forecasts of that era were well off the mark.

Comment by petermccluskey on What supplements do you use? · 2019-08-02T19:29:49.547Z · score: 9 (5 votes) · LW · GW

I wrote a long review with some comments on the extent to which I trust the book.

Comment by petermccluskey on What supplements do you use? · 2019-08-01T20:28:05.114Z · score: 8 (4 votes) · LW · GW

Metformin has a bunch of undesirable side effects. I don't see an easy way to quantify the importance of those side effects, so I try to evaluate how likely it is that the benefits of metformin will apply to me.

The obvious way in which metformin might cure or prevent age-related diseases is by curing insulin resistance. Some educated-sounding people have been saying that insulin resistance contributes significantly to Western disease (aka diseases of civilization, including cardiovascular disease, diabetes, dementia, and some subset of cancer). Doctors have arguably been undertreating insulin resistance, because it doesn't produce immediate disease-like symptoms. Maybe if metformin were still patented, the patent holder would be pushing the FDA to get insulin resistance classified as a disease.

Insulin resistance seems to be non-existent in cultures that most resemble our pre-farming ancestors, so it sure looks like it's avoidable via lifestyle changes (the best guesses are diet (high fiber, whole foods), exercise, and sleep). That suggests it's possible, although maybe hard, to get the benefits attributed to metformin without the side effects.

I pay close attention to my insulin resistance, via blood tests, and have avoided metformin for now because it looks like my lifestyle is good enough that I have few insulin-related risks so far. If my A1C gets above 5.6, I'll think pretty carefully about getting metformin.

Comment by petermccluskey on What supplements do you use? · 2019-08-01T20:25:21.271Z · score: 5 (2 votes) · LW · GW

Daily supplements:

  • 10mg pregnenolone: sometimes a bit more, aiming for blood levels around 60-80 ng/dL. Seems to increase my mental energy.
  • 25mg DHEA: I'm uncertain about the dose, and I'm experimenting a bit to see if I can detect effects.
  • ~3000 IU vitamin D: aiming for blood levels just over 50 ng/mL.
  • 100mcg vitamin K2 mk7
  • 1mg folate: I'm homozygous for MTHFR C677T; it seems to improve my mood.
  • 1mg B12: it's keeping my blood levels around 800 pg/mL, compared to < 500 before supplementing.
  • 1g omega-3 (currently in the form of Bulletproof Omega Krill Complex).
  • Stem Cell 100, once per day
  • 500mg N-Acetyl-L-Cysteine: possibly helps lower my homocysteine, but my results are pretty noisy.
  • ashwagandha: I had very interesting thyroid effects when I was getting too much iodine from kelp, but now that I've fixed my iodine levels, I don't detect any effects.
  • ResveraCel (Thorne), contains Nicotinamide Riboside, Quercetin Phytosome, Trans-Resveratrol, Trimethylglycine.
  • curcumin (Life Extension Bio-curcumin): I suspect I have inflammation due to a temporary problem, and will likely go back to taking this less regularly in a few months.
  • caffeine: from tea, green tea extract pills, or occasionally a 100mg pill.

Supplements taken less than once per day:

  • creatine, 1 or 2 grams (no clear effects).
  • melatonin, typically 300mcg, timed release version.
  • liver, and organ meat pills, from Ancestral Supplements.
  • green mussel
  • kratom: typically 0.25 teaspoon of Maeng Da, mildly stimulating with reduced anxiety?
  • phenibut: taken about once a month, for the anti-anxiety effect, with a bit of stimulation.

A majority of these choices are influenced by Bredesen's book The End of Alzheimer's, or by a prior source with similar advice.

Comment by petermccluskey on Dialogue on Appeals to Consequences · 2019-07-19T19:01:55.040Z · score: 13 (3 votes) · LW · GW

Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”

Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”

The link in Carter's statement leads to a page that clearly contradicts Carter's claim:

In logic, appeal to consequences refers only to arguments that assert a conclusion's truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise's consequential desirability (good or bad, or right or wrong) instead of its truth value.

Comment by petermccluskey on Open Thread July 2019 · 2019-07-15T18:57:24.277Z · score: 8 (6 votes) · LW · GW

Meditation is action, in some important sense, and mostly can't be demonstrated.

It is hard to reliably distinguish between the results of peer pressure and actual learning. I think CFAR's best reply to this has been its refund policy: last I knew they offered full refunds to anyone who requested it within one year (although I can't find any online mention of their current policy).

Comment by petermccluskey on Open Thread July 2019 · 2019-07-15T18:53:08.854Z · score: 29 (9 votes) · LW · GW

The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn't managed to make very legible.)

I think meditation provides the best example for illustrating the harm. It's fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It's fairly easy to underestimate the additional goals of meditation, because they're hard to observe and don't fit well with more widely accepted worldviews.

My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I'm trying something new, and there were times when I wasn't able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.

The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can't see any signs that the new instructions or new environment were inherently better at teaching meditation. It seems to have been mostly that any source of novelty about the meditation makes me more alert to learning from it.

My understanding is that CFAR is largely concerned that participants will mistakenly believe that they've already learned something that CFAR is teaching, and that will sometimes be half-true - participants may know it at a system 2 level, when CFAR is trying to teach other parts of their minds that still reject it.

I think I experienced that a bit, due to having experience with half-baked versions of early CFAR before I took a well-designed version of their workshop. E.g. different parts of my mind have different attitudes to acknowledging my actual motivations when they're less virtuous than the motivations that my system 2 endorses. I understood that pretty well at some level before CFAR existed, yet there are still important parts of my mind that cling to self-deceptive beliefs about my motives.

CFAR likely can't teach a class that's explicitly aimed at that without having lots of participants feel defensive about their motives, in a way that makes them less open to learning. So they approach it via instruction that is partly focused on teaching other things that look more mundane and practical. Those other things often felt familiar enough to me that I reacted by saying: I'll relax now and conserve my mental energy for some future part of the curriculum that's more novel. That might have led me to do the equivalent of what I did when I was meditating the same way repeatedly without learning anything new. How can I tell whether that caused me to miss something important?

Comment by petermccluskey on Open Thread July 2019 · 2019-07-14T21:21:47.023Z · score: 23 (12 votes) · LW · GW

You use math as an example, but that's highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I'd consider most apt, but it won't mean much to people who haven't taken a good circling workshop.

A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn't any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical patterns).

The CFAR handbook is more like the syllabus for an economics course than it is like an economics textbook, and a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers. (This analogy is imperfect because economics textbooks have been written, unlike a CFAR textbook.)

Maybe CFAR is making a mistake, but the people who seem most confident about that usually appear to be confused about what CFAR is trying to teach.

Neither reading the sequences nor reading about the reversal test is likely to have much relevance to what CFAR teaches. Just be careful not to imagine that they're good examples of what CFAR is about.

Comment by petermccluskey on Schism Begets Schism · 2019-07-10T16:02:26.279Z · score: 21 (9 votes) · LW · GW

I don't like the way this post hints that schism and splitting are the same thing.

E.g. the split between CFAR and MIRI, and the creation of multiple other AI risk organizations (FHI, FLI, BERI, etc), fit the archipelago model, without being schism-like.

Hostility begets hostility, but agreeing to specialize in different things doesn't have the same tendency to beget further specialization.

Comment by petermccluskey on Do bond yield curve inversions really indicate there is likely to be a recession? · 2019-07-10T15:25:01.593Z · score: 7 (4 votes) · LW · GW

High short-term interest rates constitute relatively good evidence of a coming recession. The hard part is deciding what qualifies as high. "Higher than long-term rates" seems to get attention partly due to its ability to produce a clear threshold, rather than to anything optimal about the signal that's produced by long-term rates.

Note that if recessions were easy to predict, they would be easier to avoid than they have been.

Comment by petermccluskey on What are some of Robin Hanson's best posts? · 2019-07-07T22:49:27.607Z · score: 29 (5 votes) · LW · GW

Here are some, based mainly on a search for which ones I've linked to before:

Beware the Inside View

On Liberty vs. Efficiency

Distinguish Info, Analysis, Belief, Action

Further Than Africa

Two Types of People (foragers and farmers)

Policy Tug-O-War

And, since I think he published better writings outside of his blog, here's a longer list of those:

Shall We Vote on Values, But Bet on Beliefs?

How YOU Do Not Tell the Truth: Academic Disagreement as Self-Deception

He Who Pays The Piper Must Know The Tune

Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization

Long-Term Growth As A Sequence of Exponential Modes

Economic Growth Given Machine Intelligence

Fear of Death and Muddled Thinking - It Is So Much Worse Than You Think

When Do Extraordinary Claims Give Extraordinary Evidence?

WHY HEALTH IS NOT SPECIAL: ERRORS IN EVOLVED BIOETHICS INTUITIONS

Buy Health, Not Health Care

Why Meat is Moral, and Veggies are Immoral

How To Live In A Simulation

Comment by petermccluskey on Open Thread July 2019 · 2019-07-07T01:48:58.115Z · score: 3 (2 votes) · LW · GW

I'll suggest the main change is that open threads get fewer comments because the system makes open threads less conspicuous, and provides alternatives, such as questions, that are decent substitutes for comments.

Comment by petermccluskey on Should rationality be a movement? · 2019-06-21T15:21:22.694Z · score: 8 (5 votes) · LW · GW

Note that the rationality community used to have the Singularity Summit, which was fairly similar to EAGlobal in its attitude toward high-status speakers.

Comment by petermccluskey on Get Rich Real Slowly · 2019-06-11T15:04:20.258Z · score: 3 (2 votes) · LW · GW

The risk/return curve hides some important problems.

Many investors have more income when the stock market is high than when it is low.

For example, donations to MIRI are likely heavily influenced by the price of tech stocks and by cryptocurrency prices. MIRI needs to pay salaries that fluctuate much less than those prices. If MIRI invests in assets that have much correlation with those prices, MIRI will likely end up buying disproportionate amounts of those assets within a year after they peak, and buying less (sometimes even selling) within a year after they reach a low.

Maybe that works well for short-term fluctuations, but when a significant market swing lasts for several years, MIRI would end up buying mainly at the worst times. I expect that would cause them to underperform the relevant benchmark by at least a few percentage points.
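Here's a toy illustration of that effect (my own made-up numbers, not anything from MIRI): an investor whose inflows track asset prices acquires fewer shares per dollar than one whose inflows are constant, because more of the money arrives near peaks.

```python
# A toy boom/bust price path spanning several years.
prices = [100, 150, 200, 150, 100, 150, 200]

def shares_per_dollar(inflows):
    """Shares acquired per dollar invested, buying each period at that period's price."""
    shares = sum(cash / price for cash, price in zip(inflows, prices))
    return shares / sum(inflows)

steady     = [100] * len(prices)   # income independent of the market
correlated = list(prices)          # income that tracks the asset price

print(shares_per_dollar(steady))      # ~0.0071 shares per dollar
print(shares_per_dollar(correlated))  # ~0.0067 - more money arrives near peaks
```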

So I expect MIRI will be better off minimizing the volatility of their investments.

Note that this conclusion holds even if MIRI is risk-neutral.

I expect that a fair number of individual investors have income fluctuations that imply the same conclusion.

Comment by petermccluskey on Major Update on Cost Disease · 2019-06-10T02:25:53.895Z · score: 5 (3 votes) · LW · GW

They have a good argument that the Baumol effect explains a significant fraction of cost disease.

Much of their point is that it's not surprising that some industries have better productivity growth than others.

But they don't have a full explanation of the extent to which productivity growth varies by industry. Why do medical productivity trends appear much worse than auto repair productivity trends? What caused the pharmaceutical industry to become less productive?

Their partial answer seems to be Hansonian:

A key distinction is when labor is an input into the good or service and when labor, in essence, is the service. Consumers do not care how much labor is used as an input into the production of a car, but they do care how much labor input is used in a massage, artistic performance, or doctor’s visit.

That's pointing in the right general direction, but I expect a full explanation would also involve consumers being unwilling or unable to switch to companies that offer cheaper services.

Comment by petermccluskey on Tales From the American Medical System · 2019-05-10T15:26:34.986Z · score: 12 (4 votes) · LW · GW

This doesn't sound like something that would happen at Kaiser. It's sometimes unclear whether people at Kaiser care about my health, but they seem to have clear incentives to be polite and to not waste my time with unnecessary visits.

Comment by petermccluskey on [Answer] Why wasn't science invented in China? · 2019-04-24T19:45:28.476Z · score: 20 (6 votes) · LW · GW

This is impressive given the amount of time you put into it.

There's more evidence in the book State, Economy, and the Great Divergence: Great Britain and China, 1680s - 1850s, by Peer Vries, which I reviewed here. In particular, Vries disputes the claim that property rights were secure in Britain before the industrial revolution.

Comment by petermccluskey on What are CAIS' boldest near/medium-term predictions? · 2019-04-09T19:10:53.261Z · score: 11 (3 votes) · LW · GW

One clear difference between Drexler's worldview and MIRI's is that Drexler expects progress to continue along the path that recent ML research has outlined, whereas MIRI sees more need for fundamental insights.

So I'll guess that Drexler would predict maybe a 15% chance that AI research will shift away from deep learning and reinforcement learning within a decade, whereas MIRI might say something more like 25%.

I'll guess that MIRI would also predict a higher chance of an AI winter than Drexler would, at least for some definition of winter that focuses more on diminishing IQ-like returns to investment than on overall spending.

Comment by petermccluskey on How do people become ambitious? · 2019-04-06T02:09:46.393Z · score: 25 (11 votes) · LW · GW

The literature on learned helplessness describes how to destroy ambition. That suggests that any good answer should resemble moving away from those situations.

Comment by petermccluskey on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T16:51:29.875Z · score: 4 (3 votes) · LW · GW

See this more rigorous analysis by Robert A. Freitas. Yes, it will cause some problems.

Comment by petermccluskey on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T15:13:20.452Z · score: 7 (4 votes) · LW · GW

I used to think I didn't get anything from trying to nap, since I never fell asleep during one. Then someone reported that I had been snoring during one of those naps in which I hadn't noticed any sign that I'd fallen asleep. Now I'm uncertain whether naps are worthwhile.

Comment by petermccluskey on Implications of living within a Simulation · 2019-03-19T22:07:08.364Z · score: 1 (1 votes) · LW · GW

Robin Hanson has, of course, written about how the simulation hypothesis should affect our behavior.

Comment by petermccluskey on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T20:50:03.948Z · score: 3 (2 votes) · LW · GW

Interesting. The ability to fold it enough to fit in a backpack should reduce the hassle of storing it at my destination, which has been part of why I've been reluctant to use one.

Comment by petermccluskey on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T17:29:00.876Z · score: 8 (5 votes) · LW · GW

I also ride my bike in Berkeley without a helmet.

Some other considerations which influence me:

  • helmet use seems to make crashes more likely (by making bikers and/or drivers less cautious), so it's misleading to use data about harm that's conditioned on there being a reported accident.

  • I'm fairly careful to avoid roads with heavy traffic, or with cars driving more than about 30 mph. I expect that fatality rates vary a lot by these road factors.

  • I use a cheap bike that doesn't go as fast as the average bike.

Alas, I don't have good evidence about how to quantify these considerations.

Comment by petermccluskey on Unconscious Economics · 2019-02-27T16:36:17.277Z · score: 6 (6 votes) · LW · GW

Economists usually treat minds as black boxes. That seems to help them develop their models, maybe via helping them to ignore issues such as "I'd feel embarrassed if my mind worked that way".

There doesn't seem to be much incentive for textbooks to improve their effectiveness at persuading the marginal student. The incentives might even be backwards, as becoming a good economist almost requires thinking in ways that seem odd to the average student.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:40:26.430Z · score: 2 (2 votes) · LW · GW

Those concerns would have slowed adoption of agoric computing, but they seem to apply to markets in general, so they don't seem useful in explaining why agoric computing is less popular than markets in other goods/services.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:32:30.099Z · score: 1 (1 votes) · LW · GW

The central planner may know exactly what resources exist on the system they own, but they don't know all the algorithms and data that are available somewhere on the internet. Agoric computing would enable more options for getting programmers and database creators to work for you.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-16T17:50:50.268Z · score: 12 (7 votes) · LW · GW

One obstacle has been security. To develop any software that exchanges services for money, you need to put substantially more thought into the security risks of that software, and you probably can't trust a large fraction of the existing base of standard software. Coauthor Mark S. Miller has devoted lots of effort to replacing existing operating systems and programming languages with secure alternatives, with very limited success.

One other explanation that I've wondered about involves conflicts of interest. Market interactions are valuable mainly when they generate cooperation among agents who have divergent goals. Most software development happens in environments where there's enough cooperation that adding market forces wouldn't provide much value via improved cooperation. I think that's true even within large companies. I'll guess that the benefits of the agoric approach only become interesting when a large number of companies switch to using it, and there's little reward to being the first such company.

Comment by petermccluskey on Individual profit-sharing? · 2019-02-14T03:49:59.760Z · score: 7 (3 votes) · LW · GW

Universities have tried something like this for tuition. See these mentions from Alex Tabarrok. They have some trouble with people defaulting.

Comment by petermccluskey on Drexler on AI Risk · 2019-02-01T23:09:20.560Z · score: 5 (3 votes) · LW · GW

Thanks, I've fixed those.

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-28T22:32:35.165Z · score: 4 (3 votes) · LW · GW

After further rereading, I now think that what Drexler imagines is a bit more complex: (section 27.7) "senior human decision makers" would have access to a service with some strategic planning ability (which would have enough power to generate plans with dangerously broad goals), and they would likely restrict access to those high-level services.

I suspect Drexler is deliberately vague about the extent to which the strategic planning services will contain safeguards.

This, of course, depends on the controversial assumption that relatively responsible organizations will develop CAIS well before other entities are able to develop any form of equally powerful AI. I consider that plausible, but it seems to be one of the weakest parts of his analysis.

And presumably the publicly available AI services won't be sufficiently general and powerful to enable random people to assemble them into an agent AGI? Combining a robocar + Google translate + an aircraft designer + a theorem prover doesn't sound dangerous. But I'd prefer to have something more convincing than just "I spent a few minutes looking for risks, and didn't find any".

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-25T16:49:49.591Z · score: 3 (2 votes) · LW · GW

I was assuming that long term strategic planners (as described in section 27) are available as an AIS, and would be one of the components of the hypothetical AGI.

That's not consistent with my understanding of section 27. My understanding is that Drexler would describe that as too dangerous.

suppose you asked the plan maker to create a plan to cure cancer.

I suspect that a problem here is that "plan maker" is ambiguous as to whether it falls within Drexler's notion of something with a bounded goal.

CAIS isn't just a way to structure software. It also requires some not-yet-common sense about what goals to give the software.

"Cure cancer" seems too broad to qualify as a goal that Drexler would consider safe to give to software. Sections 27 and 28 suggest that Drexler wants humans to break that down into narrower subtasks. E.g. he says:

By contrast, it is difficult to envision a development path in which AI developers would treat all aspects of biomedical research (or even cancer research) as a single task to be learned and implemented by a generic system.

Comment by petermccluskey on Does freeze-dried mussel powder have good stuff that vegan diets don't? · 2019-01-21T01:11:37.213Z · score: 17 (4 votes) · LW · GW

My guess, based on crude extrapolations from reported nutrients of other dried foods, is that you'll get half the nutrients of fresh mussels.

That ought to be a clear improvement on a vegan diet.

I suspect your main remaining reason for concern might be creatine.

My guesses about why vitamin pills tend to be ineffective (none of which apply to dried mussels):
  • pills lack some important nutrients - ones which have not yet been recognized as important
  • pills provide unnatural ratios of nutrients
  • pills often provide some vitamins in a synthetic form, which not everyone converts to the biologically active form