Drexler on AI Risk 2019-02-01T05:11:01.008Z · score: 31 (15 votes)
Bundle your Experiments 2019-01-18T23:22:08.660Z · score: 19 (8 votes)
Time Biases 2019-01-12T21:35:54.276Z · score: 31 (8 votes)
Book review: Artificial Intelligence Safety and Security 2018-12-08T03:47:17.098Z · score: 30 (9 votes)
Where is my Flying Car? 2018-10-15T18:39:38.010Z · score: 51 (15 votes)
Book review: Pearl's Book of Why 2018-07-07T17:30:30.994Z · score: 70 (26 votes)


Comment by petermccluskey on What are concrete examples of potential "lock-in" in AI research? · 2019-09-11T03:01:10.303Z · score: 1 (1 votes) · LW · GW
  • a decentralized internet versus an internet under the central control of something like AOL.

  • Bitcoin energy usage.

  • electrical systems that provide plugs, voltages, and frequencies which are incompatible between countries.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T18:21:44.933Z · score: 3 (2 votes) · LW · GW

By bias, I mean the framing effects described in this SlateStarCodex post.

Is there an accusation of violation of existing norms (by a specific person/organization) you see “The AI Timelines Scam” as making?

It's unclear to me whether that post makes such an accusation.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T01:57:57.901Z · score: 4 (9 votes) · LW · GW

How about focusing on the evidence, and on demonstrating good epistemics?

The styles encouraged by peer review provide examples of how to minimize unnecessary accusations against individuals, and accidental appearances of such accusations (though peer review includes too many other constraints to be the ideal norm).

Compare the paper When Will AI Exceed Human Performance? Evidence from AI Experts to The AI Timelines Scam. The former is more polite, and looks more epistemically trustworthy, when pointing out that experts give biased forecasts about AI timelines (more biased than I would have inferred from The AI Timelines Scam), but may err in the direction of being too subtle.

See also Bryan Caplan's advice.

Raemon's advice here doesn't seem 100% right to me, but it seems pretty close. Accusing a specific person or organization of violating an existing norm seems like something that ought to be kept quite separate from arguments about what policies are good. But there are plenty of ways to point out patterns of bad behavior without accusing someone of violating an existing norm, and I'm unsure what rules should apply to those.

Comment by petermccluskey on Partial summary of debate with Benquo and Jessicata [pt 1] · 2019-08-17T00:33:04.389Z · score: 15 (5 votes) · LW · GW

I liked most of this post a lot.

But the references to billions of dollars don't feel quite right. The kind of trust that jessicata and Benquo seem to want sometimes happens in small (e.g. 10-person) companies, and almost always gets destroyed by politics well before the business grows to 1000 people. The patterns that I've seen in business seem better explained by the limits on how many people's epistemics I can evaluate well, than they are by the amount of money involved.

LessWrong and the rationalist/EA movements seem to have grown large enough that I'd expect less trust than exists in a good 10-person group, based purely on the size.

Comment by petermccluskey on What are the best resources for examining the evidence for anthropogenic climate change? · 2019-08-06T18:56:29.207Z · score: 2 (2 votes) · LW · GW

I recommend starting with the original greenhouse effect forecasts that were made over a century ago (by someone who expected global warming to be desirable). That model still looks pretty good, except that CO2 emission forecasts of that time weren't very good.

Comment by petermccluskey on What supplements do you use? · 2019-08-02T19:29:49.547Z · score: 9 (5 votes) · LW · GW

I wrote a long review with some comments on the extent to which I trust the book.

Comment by petermccluskey on What supplements do you use? · 2019-08-01T20:28:05.114Z · score: 8 (4 votes) · LW · GW

Metformin has a bunch of undesirable side effects. I don't see an easy way to quantify the importance of those side effects, so I try to evaluate how likely it is that the benefits of metformin will apply to me.

The obvious way in which metformin might cure or prevent age-related diseases is by curing insulin resistance. Some educated-sounding people have been saying that insulin resistance contributes significantly to Western disease (aka diseases of civilization, including cardiovascular disease, diabetes, dementia, and some subset of cancers). Doctors have arguably been undertreating insulin resistance, because it doesn't produce immediate disease-like symptoms. Maybe if metformin were still under patent, the patent holder would be pushing the FDA to get insulin resistance classified as a disease.

Insulin resistance seems to be non-existent in cultures that most resemble our pre-farming ancestors, so it sure looks like it's avoidable via lifestyle changes (the best guesses are diet (high fiber, whole foods), exercise, and sleep). That suggests it's possible, although maybe hard, to get the benefits attributed to metformin without the side effects.

I pay close attention to my insulin resistance, via blood tests, and have avoided metformin for now because it looks like my lifestyle is good enough that I have few insulin-related risks so far. If my A1C gets above 5.6, I'll think pretty carefully about getting metformin.

Comment by petermccluskey on What supplements do you use? · 2019-08-01T20:25:21.271Z · score: 5 (2 votes) · LW · GW

Daily supplements:

  • 10mg pregnenolone: sometimes a bit more, aiming for blood levels around 60-80 ng/dL. Seems to increase my mental energy.
  • 25mg DHEA: I'm uncertain about the dose, and I'm experimenting a bit to see if I can detect effects.
  • ~3000 IU vitamin D: aiming for blood levels just over 50 ng/mL.
  • 100mcg vitamin K2 mk7
  • 1mg folate: I'm homozygous for MTHFR C677T; it seems to improve my mood.
  • 1mg B12: it's keeping my blood levels around 800 pg/mL, compared to < 500 before supplementing.
  • 1g omega-3 (currently in the form of Bulletproof Omega Krill Complex).
  • Stem Cell 100, once per day
  • 500mg N-Acetyl-L-Cysteine: possibly helps lower my homocysteine, but my results are pretty noisy.
  • ashwagandha: I had very interesting thyroid effects when I was getting too much iodine from kelp, but now that I've fixed my iodine levels, I don't detect any effects.
  • ResveraCel (Thorne), contains Nicotinamide Riboside, Quercetin Phytosome, Trans-Resveratrol, Trimethylglycine.
  • curcumin (Life Extension Bio-curcumin): I suspect I have inflammation due to a temporary problem, and will likely go back to taking this less regularly in a few months.
  • caffeine: from tea, green tea extract pills, or occasionally a 100mg pill.

Supplements taken less than once per day:

  • creatine, 1 or 2 grams (no clear effects).
  • melatonin, typically 300mcg, timed release version.
  • liver and organ meat pills, from Ancestral Supplements.
  • green mussel
  • kratom: typically 0.25 teaspoon of Maeng Da, mildly stimulating with reduced anxiety?
  • phenibut: taken about once a month, for the anti-anxiety effect, with a bit of stimulation.

A majority of these choices are influenced by Bredesen's book The End of Alzheimer's, or by a prior source with similar advice.

Comment by petermccluskey on Dialogue on Appeals to Consequences · 2019-07-19T19:01:55.040Z · score: 13 (3 votes) · LW · GW

Quinn: “Hold it right there. Regardless of whether that’s true, it’s bad to say that.”

Carter: “That’s an appeal to consequences, well-known to be a logical fallacy.”

The link in Carter's statement leads to a page that clearly contradicts Carter's claim:

In logic, appeal to consequences refers only to arguments that assert a conclusion's truth value (true or false) without regard to the formal preservation of the truth from the premises; appeal to consequences does not refer to arguments that address a premise's consequential desirability (good or bad, or right or wrong) instead of its truth value.

Comment by petermccluskey on Open Thread July 2019 · 2019-07-15T18:57:24.277Z · score: 8 (6 votes) · LW · GW

Meditation is action, in some important sense, and mostly can't be demonstrated.

It is hard to reliably distinguish between the results of peer pressure and actual learning. I think CFAR's best reply to this has been its refund policy: last I knew they offered full refunds to anyone who requested it within one year (although I can't find any online mention of their current policy).

Comment by petermccluskey on Open Thread July 2019 · 2019-07-15T18:53:08.854Z · score: 29 (9 votes) · LW · GW

The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn't managed to make very legible.)

I think meditation provides the best example for illustrating the harm. It's fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It's fairly easy to underestimate the additional goals of meditation, because they're hard to observe and don't fit well with more widely accepted worldviews.

My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I'm trying something new, and there were times when I wasn't able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.

The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can't see any signs that the new instructions or new environment were inherently better at teaching meditation. It seems to have been mostly that any source of novelty about the meditation makes me more alert to learning from it.

My understanding is that CFAR is largely concerned that participants will mistakenly believe that they've already learned something that CFAR is teaching, and that will sometimes be half-true - participants may know it at a system 2 level, when CFAR is trying to teach other parts of their minds that still reject it.

I think I experienced that a bit, due to having experience with half-baked versions of early CFAR before I took a well-designed version of their workshop. E.g. different parts of my mind have different attitudes to acknowledging my actual motivations when they're less virtuous than the motivations that my system 2 endorses. I understood that pretty well at some level before CFAR existed, yet there are still important parts of my mind that cling to self-deceptive beliefs about my motives.

CFAR likely can't teach a class that's explicitly aimed at that without having lots of participants feel defensive about their motives, in a way that makes them less open to learning. So they approach it via instruction that is partly focused on teaching other things that look more mundane and practical. Those other things often felt familiar enough to me that I reacted by saying: I'll relax now and conserve my mental energy for some future part of the curriculum that's more novel. That might have led me to do the equivalent of what I did when I was meditating the same way repeatedly without learning anything new. How can I tell whether that caused me to miss something important?

Comment by petermccluskey on Open Thread July 2019 · 2019-07-14T21:21:47.023Z · score: 23 (12 votes) · LW · GW

You use math as an example, but that's highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.

There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I'd consider most apt, but it won't mean much to people who haven't taken a good circling workshop.

A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn't any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical patterns).

The CFAR handbook is more like the syllabus for an economics course than it is like an economics textbook, and a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers. (This analogy is imperfect because economics textbooks have been written, unlike a CFAR textbook.)

Maybe CFAR is making a mistake, but it appears that the people who seem most confident about that usually seem to be confused about what it is that CFAR is trying to teach.

Reading the sequences, or reading about the reversal test, are unlikely to have much relevance to what CFAR teaches. Just be careful not to imagine that they're good examples of what CFAR is about.

Comment by petermccluskey on Schism Begets Schism · 2019-07-10T16:02:26.279Z · score: 21 (9 votes) · LW · GW

I don't like the way this post hints that schism and splitting are the same thing.

E.g. the split between CFAR and MIRI, and the creation of multiple other AI risk organizations (FHI, FLI, BERI, etc), fit the archipelago model, without being schism-like.

Hostility begets hostility, but agreeing to specialize in different things doesn't have the same tendency to beget further specialization.

Comment by petermccluskey on Do bond yield curve inversions really indicate there is likely to be a recession? · 2019-07-10T15:25:01.593Z · score: 7 (4 votes) · LW · GW

High short-term interest rates constitute relatively good evidence of a coming recession. The hard part is deciding what qualifies as high. "Higher than long-term rates" seems to get attention partly due to its ability to produce a clear threshold, rather than to anything optimal about the signal that's produced by long-term rates.

Note that if recessions were easy to predict, they would be easier to avoid than they have been.

Comment by petermccluskey on What are some of Robin Hanson's best posts? · 2019-07-07T22:49:27.607Z · score: 29 (5 votes) · LW · GW

Here are some, based mainly on a search for which ones I've linked to before:

Beware the Inside View

On Liberty vs. Efficiency

Distinguish Info, Analysis, Belief, Action

Further Than Africa

Two Types of People (foragers and farmers)

Policy Tug-O-War

And, since I think he published better writings outside of his blog, here's a longer list of those:

Shall We Vote on Values, But Bet on Beliefs?

How YOU Do Not Tell the Truth: Academic Disagreement as Self-Deception

He Who Pays The Piper Must Know The Tune

Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization

Long-Term Growth As A Sequence of Exponential Modes

Economic Growth Given Machine Intelligence

Fear of Death and Muddled Thinking - It Is So Much Worse Than You Think

When Do Extraordinary Claims Give Extraordinary Evidence?


Buy Health, Not Health Care

Why Meat is Moral, and Veggies are Immoral

How To Live In A Simulation

Comment by petermccluskey on Open Thread July 2019 · 2019-07-07T01:48:58.115Z · score: 3 (2 votes) · LW · GW

I'll suggest that the main change is that open threads get fewer comments because the system makes them less conspicuous, and provides alternatives, such as questions, that are decent substitutes for comments.

Comment by petermccluskey on Should rationality be a movement? · 2019-06-21T15:21:22.694Z · score: 8 (5 votes) · LW · GW

Note that the rationality community used to have the Singularity Summit, which was fairly similar to EAGlobal in its attitude toward high-status speakers.

Comment by petermccluskey on Get Rich Real Slowly · 2019-06-11T15:04:20.258Z · score: 3 (2 votes) · LW · GW

The risk/return curve hides some important problems.

Many investors have more income when the stock market is high than when it is low.

For example, donations to MIRI are likely heavily influenced by the price of tech stocks and by cryptocurrency prices. MIRI needs to pay salaries that fluctuate much less than those prices. If MIRI invests in assets that have much correlation with those prices, MIRI will likely end up buying disproportionate amounts of those assets within a year after they peak, and buying less (sometimes even selling) within a year after they reach a low.

Maybe that works well for short-term market fluctuations, but when significant fluctuations last for several years, MIRI would be buying mainly at the worst times. I expect that would cause them to underperform the relevant benchmark by at least a few percentage points.

So I expect MIRI will be better off minimizing the volatility of their investments.

Note that this conclusion holds even if MIRI is risk-neutral.

I expect that a fair number of individual investors have income fluctuations that imply the same conclusion.
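The effect above can be illustrated with a toy calculation (hypothetical numbers, deliberately simplified): an investor whose income tracks an asset's price ends up paying the arithmetic mean of the price path per share, while an investor with steady income pays the lower harmonic mean, because steady income buys more shares when the price is low.

```python
def avg_purchase_price(prices, income_tracks_price):
    """Dollar-weighted average price paid when investing each period's income."""
    shares = 0.0
    spent = 0.0
    mean_income = sum(prices) / len(prices)
    for p in prices:
        # Income either moves with the asset price, or stays constant.
        income = p if income_tracks_price else mean_income
        shares += income / p  # buy at the current price
        spent += income
    return spent / shares

# A boom-bust price cycle (hypothetical numbers).
prices = [50, 80, 120, 150, 120, 80, 50, 80, 120, 150]

correlated = avg_purchase_price(prices, True)   # arithmetic mean of prices
steady = avg_purchase_price(prices, False)      # harmonic mean of prices

# Here the correlated investor pays roughly 16% more per share
# for the same asset over the same period.
```

With these made-up prices, the correlated investor averages 100 per share versus about 86 for the steady investor, even though both faced identical prices. The gap grows with the volatility of the price path and the strength of the income correlation.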

Comment by petermccluskey on Major Update on Cost Disease · 2019-06-10T02:25:53.895Z · score: 5 (3 votes) · LW · GW

They have a good argument that the Baumol effect explains a significant fraction of cost disease.

Much of their point is that it's not surprising that some industries have better productivity growth than others.

But they don't have a full explanation of the extent to which productivity growth varies by industry. Why do medical productivity trends appear much worse than auto repair productivity trends? What caused the pharmaceutical industry to become less productive?

Their partial answer seems to be Hansonian:

A key distinction is when labor is an input into the good or service and when labor, in essence, is the service. Consumers do not care how much labor is used as an input into the production of a car, but they do care how much labor input is used in a massage, artistic performance, or doctor’s visit.

That's pointing in the right general direction, but I expect a full explanation would also involve consumers being unwilling or unable to switch to companies that offer cheaper services.

Comment by petermccluskey on Tales From the American Medical System · 2019-05-10T15:26:34.986Z · score: 12 (4 votes) · LW · GW

This doesn't sound like something that would happen at Kaiser. It's sometimes unclear whether people at Kaiser care about my health, but they seem to have clear incentives to be polite and to not waste my time with unnecessary visits.

Comment by petermccluskey on [Answer] Why wasn't science invented in China? · 2019-04-24T19:45:28.476Z · score: 20 (6 votes) · LW · GW

This is impressive given the amount of time you put into it.

There's more evidence in the book State, Economy, and the Great Divergence: Great Britain and China, 1680s - 1850s, by Peer Vries, which I reviewed here. In particular, Vries disputes the claim that property rights were secure in Britain before the industrial revolution.

Comment by petermccluskey on What are CAIS' boldest near/medium-term predictions? · 2019-04-09T19:10:53.261Z · score: 11 (3 votes) · LW · GW

One clear difference between Drexler's worldview and MIRI's is that Drexler expects progress to continue along the path that recent ML research has outlined, whereas MIRI sees more need for fundamental insights.

So I'll guess that Drexler would predict maybe a 15% chance that AI research will shift away from deep learning and reinforcement learning within a decade, whereas MIRI might say something more like 25%.

I'll guess that MIRI would also predict a higher chance of an AI winter than Drexler would, at least for some definition of winter that focuses more on diminishing IQ-like returns to investment than on overall spending.

Comment by petermccluskey on How do people become ambitious? · 2019-04-06T02:09:46.393Z · score: 24 (10 votes) · LW · GW

The literature on learned helplessness describes how to destroy ambition. That suggests that any good answer should resemble moving away from those situations.

Comment by petermccluskey on Could waste heat become an environment problem in the future (centuries)? · 2019-04-04T16:51:29.875Z · score: 4 (3 votes) · LW · GW

See this more rigorous analysis by Robert A. Freitas. Yes, it will cause some problems.

Comment by petermccluskey on What are effective strategies for mitigating the impact of acute sleep deprivation on cognition? · 2019-04-01T15:13:20.452Z · score: 7 (4 votes) · LW · GW

I used to think I didn't get anything from trying to nap, since I never fell asleep during one. Then someone reported that I had been snoring during one of those naps in which I hadn't noticed any sign that I'd fallen asleep. Now I'm uncertain whether naps are worthwhile.

Comment by petermccluskey on Implications of living within a Simulation · 2019-03-19T22:07:08.364Z · score: 1 (1 votes) · LW · GW

Robin Hanson has, of course, written about how the simulation hypothesis should affect our behavior.

Comment by petermccluskey on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T20:50:03.948Z · score: 3 (2 votes) · LW · GW

Interesting. The ability to fold it enough to fit in a backpack should reduce the hassle of storing it at my destination, which has been part of why I've been reluctant to use one.

Comment by petermccluskey on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T17:29:00.876Z · score: 8 (5 votes) · LW · GW

I also ride my bike in Berkeley without a helmet.

Some other considerations which influence me:

  • helmet use seems to make crashes more likely (by making bikers and/or drivers less cautious), so it's misleading to use data about harm that's conditioned on there being a reported accident.

  • I'm fairly careful to avoid roads with heavy traffic, or with cars driving more than about 30 mph. I expect that fatality rates vary a lot by these road factors.

  • I use a cheap bike that doesn't go as fast as the average bike.

Alas, I don't have good evidence about how to quantify these considerations.

Comment by petermccluskey on Unconscious Economics · 2019-02-27T16:36:17.277Z · score: 6 (6 votes) · LW · GW

Economists usually treat minds as black boxes. That seems to help them develop their models, maybe via helping them to ignore issues such as "I'd feel embarrassed if my mind worked that way".

There doesn't seem to be much incentive for textbooks to improve their effectiveness at persuading the marginal student. The incentives might even be backwards, as becoming a good economist almost requires thinking in ways that seem odd to the average student.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:40:26.430Z · score: 2 (2 votes) · LW · GW

Those concerns would have slowed adoption of agoric computing, but they seem to apply to markets in general, so they don't seem useful in explaining why agoric computing is less popular than markets in other goods/services.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-17T17:32:30.099Z · score: 1 (1 votes) · LW · GW

The central planner may know exactly what resources exist on the system they own, but they don't know all the algorithms and data that are available somewhere on the internet. Agoric computing would enable more options for getting programmers and database creators to work for you.

Comment by petermccluskey on Why didn't Agoric Computing become popular? · 2019-02-16T17:50:50.268Z · score: 12 (7 votes) · LW · GW

One obstacle has been security. To develop any software that exchanges services for money, you need to put substantially more thought into the security risks of that software, and you probably can't trust a large fraction of the existing base of standard software. Coauthor Mark S. Miller has devoted lots of effort to replacing existing operating systems and programming languages with secure alternatives, with very limited success.

One other explanation that I've wondered about involves conflicts of interest. Market interactions are valuable mainly when they generate cooperation among agents who have divergent goals. Most software development happens in environments where there's enough cooperation that adding market forces wouldn't provide much value via improved cooperation. I think that's true even within large companies. I'll guess that the benefits of the agoric approach only become interesting when large numbers of companies switch to using it, and there's little reward to being the first such company.

Comment by petermccluskey on Individual profit-sharing? · 2019-02-14T03:49:59.760Z · score: 7 (3 votes) · LW · GW

Universities have tried something like this for tuition. See these mentions from Alex Tabarrok. They have some trouble with people defaulting.

Comment by petermccluskey on Drexler on AI Risk · 2019-02-01T23:09:20.560Z · score: 5 (3 votes) · LW · GW

Thanks, I've fixed those.

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-28T22:32:35.165Z · score: 3 (2 votes) · LW · GW

After further rereading, I now think that what Drexler imagines is a bit more complex: (section 27.7) "senior human decision makers" would have access to a service with some strategic planning ability (which would have enough power to generate plans with dangerously broad goals), and they would likely restrict access to those high-level services.

I suspect Drexler is deliberately vague about the extent to which the strategic planning services will contain safeguards.

This, of course, depends on the controversial assumption that relatively responsible organizations will develop CAIS well before other entities are able to develop any form of equally powerful AI. I consider that plausible, but it seems to be one of the weakest parts of his analysis.

And presumably the publicly available AI services won't be sufficiently general and powerful to enable random people to assemble them into an agent AGI? Combining a robocar + Google translate + an aircraft designer + a theorem prover doesn't sound dangerous. But I'd prefer to have something more convincing than just "I spent a few minutes looking for risks, and didn't find any".

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-25T16:49:49.591Z · score: 3 (2 votes) · LW · GW

I was assuming that long term strategic planners (as described in section 27) are available as an AIS, and would be one of the components of the hypothetical AGI.

That's not consistent with my understanding of section 27. My understanding is that Drexler would describe that as too dangerous.

suppose you asked the plan maker to create a plan to cure cancer.

I suspect that a problem here is that "plan maker" is ambiguous as to whether it falls within Drexler's notion of something with a bounded goal.

CAIS isn't just a way to structure software. It also requires some not-yet-common sense about what goals to give the software.

"Cure cancer" seems too broad to qualify as a goal that Drexler would consider safe to give to software. Sections 27 and 28 suggest that Drexler wants humans to break that down into narrower subtasks. E.g. he says:

By contrast, it is difficult to envision a development path in which AI developers would treat all aspects of biomedical research (or even cancer research) as a single task to be learned and implemented by a generic system.

Comment by petermccluskey on Does freeze-dried mussel powder have good stuff that vegan diets don't? · 2019-01-21T01:11:37.213Z · score: 17 (4 votes) · LW · GW

My guess, based on crude extrapolations from reported nutrients of other dried foods, is that you'll get half the nutrients of fresh mussels.

That ought to be a clear improvement on a vegan diet.

I suspect your main remaining reason for concern might be creatine.

My guesses about why vitamin pills tend to be ineffective (none of which apply to dried mussels):

  • pills lack some important nutrients - ones which have not yet been recognized as important

  • pills provide unnatural ratios of nutrients

  • pills often provide some vitamins in a synthetic form, which not everyone converts to the biologically active form

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T22:12:50.603Z · score: 3 (2 votes) · LW · GW

I'm not talking about the range. Domain seems possibly right, but not as informative as I'd like. I'm talking about what parts of spacetime it cares about, and saying that it only cares about specific outputs of a specific process. Drexler refers to this as "bounded scope and duration". Note that this will normally be an implicit utility function, that we infer from our understanding of the system.

"bounded utility function" is definitely not an ideal way of referring to this.

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T20:26:46.727Z · score: 12 (5 votes) · LW · GW

I want to draw separate attention to chapter 40 of Drexler's paper, which uses what looks like a novel approach to argue that current supercomputers likely have more raw processing power than a human brain. I find that scary.

Comment by petermccluskey on Reframing Superintelligence: Comprehensive AI Services as General Intelligence · 2019-01-08T19:58:27.064Z · score: 1 (1 votes) · LW · GW

I consider it important to further clarify the notion of a bounded utility function.

A deployed neural network has a utility function that can be described as outputting a description of the patterns it sees in its most recent input, according to whatever algorithm it's been trained to apply. It's pretty clear to any expert that the neural network doesn't care about anything beyond a specific set of numbers that it outputs.

A neural network that is in the process of being trained is slightly harder to analyze, but essentially the same. It cares about generating an algorithm that will be used in a deployed neural network. At any one training step, it is focused solely on applying fixed algorithms to produce improvements to the deployable algorithm. It has no concept that would lead it to look beyond its immediate task of incremental improvements to that deployable algorithm.

And in some important sense, those steps are the main ways in which AI gets used to produce cars that have superhuman driving ability, and the designers can prove (at least to themselves) that the cars won't go out and buy more processing power, or forage for more energy.

Many forms of AI will be more complex than neural networks (e.g. they might be a mix of RL and neural networks), and I don't have the expertise to extend this analysis to those systems. I'm confident that it's possible in principle to get general-purpose superhuman AIs using only this kind of bounded utility function, but I'm uncertain how practical that is compared to a more unified agent with a broader utility function.

Comment by petermccluskey on Why I expect successful (narrow) alignment · 2019-01-02T01:57:43.577Z · score: 11 (6 votes) · LW · GW

MIRI has a lot invested in the idea that AI safety is a hard problem which must have a difficult solution. So there’s a sense in which the salaries of their employees depend on them not understanding how a simple solution to FAI might work.

That doesn't sound correct. My understanding is that they're looking for simple solutions, in the sense that quantum mechanics and general relativity are simple. What they've invested a lot in is the idea that it's hard to even ask the right questions about how AI alignment might work. They're biased against easy solutions, but they might also be biased in favor of simple solutions.

Comment by petermccluskey on In Defense of Finance · 2018-12-18T19:03:38.623Z · score: 22 (6 votes) · LW · GW

Any discussion of bailouts ought to note that some countries have much fewer banking crises than the US.

Comment by petermccluskey on Player vs. Character: A Two-Level Model of Ethics · 2018-12-16T02:58:52.768Z · score: 16 (6 votes) · LW · GW

people generally identify as their characters, not their players.

I prefer to identify with my whole brain. I suspect that reduces my internal conflicts.

Comment by petermccluskey on Should ethicists be inside or outside a profession? · 2018-12-13T23:21:30.867Z · score: 7 (5 votes) · LW · GW

This seems a bit too strong. It seems to imply that I should ignore Bostrom's writings about AI ethics, and only look to people such as Demis Hassabis.

Or if I thought that nobody was close to having the expertise to build a superintelligent AI, maybe I'd treat it as implying that it's premature to have opinions about AI ethics.

Instead, I treat professional expertise as merely one piece of evidence about a person's qualifications to inform us about ethics.

Comment by petermccluskey on Peanut Butter · 2018-12-04T21:22:15.024Z · score: 4 (3 votes) · LW · GW

Based on focusing I have realized some feelings like tiredness are not really ‘real’. They’re just a felt sense of not wanting to keep programming.

Feelings such as tiredness involve more high-level processing than I used to think.

That doesn't cause me to classify them as less real. Instead, I conclude that most, or maybe all, feelings of tiredness include some rather high-level predictions about the costs and benefits of whatever I'm doing now.

Comment by petermccluskey on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2018-12-04T20:39:26.065Z · score: 1 (1 votes) · LW · GW

Perhaps the ability to recognize individuals isn’t as tied to being a social animal as I had thought

I expect multiple sources of evolutionary pressure for recognizing individuals. E.g. when a human chases an animal to exhaustion, the human needs to track that specific animal even if it disappears into a herd, so as to not make the mistake of chasing an animal that isn't tired.

Comment by petermccluskey on Fat People Are Heroes · 2018-11-14T18:55:30.037Z · score: 2 (2 votes) · LW · GW

Being always hungry is a lousy way to lose weight. It means my body is always trying to conserve energy, as if this were a famine.

Part of the vicious cycle is addiction to food that doesn't make us feel full (see the Satiety Index for ideas about which foods). Remember that obesity is virtually unknown in hunter-gatherers, even when they have plenty of food available. It takes modern foods to make obesity common (see Stephan Guyenet).

Intermittent hunger can work somewhat well for weight loss, but mainly I need to eat food that's less addictive and that makes me feel full.

Comment by petermccluskey on Real-time hiring with prediction markets · 2018-11-13T00:39:01.390Z · score: 1 (1 votes) · LW · GW

It is often difficult to get people to bet in markets. This looks like a case where employees will do approximately no betting unless there are unusual incentives. It's hard to say whether these markets would produce enough benefit to pay for those incentives.

My intuition is that there's likely some cheaper way of improving the situation.

Comment by petermccluskey on Thoughts on short timelines · 2018-10-24T18:08:15.110Z · score: 20 (5 votes) · LW · GW

I disagree with your analysis of "are we that ignorant?".

For things like nuclear war or financial meltdown, we've got lots of relevant data, and not too much reason to expect new risks. For advanced nanotechnology, I think we are ignorant enough that a 10% chance sounds right (I'm guessing it will take something like $1 billion in focused funding).

With AGI, subtle changes in how the question is worded can shift ML researchers' forecasts by 75 years. That suggests unusual uncertainty.

We can see from Moore's law and from ML progress that we're on track for something at least as unusual as the industrial revolution.

The stock and bond markets do provide some evidence of predictability, but I'm unsure how good they are at evaluating events that happen much less than once per century.

Comment by petermccluskey on Where is my Flying Car? · 2018-10-23T01:15:56.585Z · score: 1 (1 votes) · LW · GW

By some strange coincidence, less than a week after posting this, I got another chance to observe the effects of being above the clouds. Technically, it was fog rather than clouds, but hiking up through and above fog has the same effect: in addition to creating the muted far-mode colors with fewer details that Josh mentions, it enhances the impression of unusual height, by enabling my subconscious to imagine that the ground (or in this case the ocean) beneath the clouds might be half a mile below what I can see.

I wasn't able to observe that I was more in far mode than on a typical hike. I'm pretty sure the unusual feelings were more along the lines of feeling high status, due to some correlation between being higher than my surroundings and being high status, or maybe due to the strategic advantage of having higher ground than any hostile forces. Or maybe just a feeling of accomplishment for having climbed so high.

At any rate, it feels quite good, and it seems similar to the effects from flying small aircraft.