Posts

What Evidence Is AlphaGo Zero Re AGI Complexity? 2017-10-22T02:28:45.764Z
What Program Are You? 2009-10-12T00:29:19.218Z
Least Signaling Activities? 2009-05-22T02:46:29.949Z
Rationality Toughness Tests 2009-04-06T01:12:31.928Z
Most Rationalists Are Elsewhere 2009-03-29T21:46:49.307Z
Rational Me or We? 2009-03-17T13:39:29.073Z
The Costs of Rationality 2009-03-03T18:13:17.465Z
Test Your Rationality 2009-03-01T13:21:34.375Z

Comments

Comment by RobinHanson on Contra Hanson on AI Risk · 2023-03-05T00:18:57.290Z · LW · GW

Seems to me I spent a big % of my post arguing against the rapid growth claim. 

Comment by RobinHanson on Contra Hanson on AI Risk · 2023-03-05T00:17:50.607Z · LW · GW

Come on, most every business tracks revenue in great detail. If customers were getting unhappy with the firm's services and rapidly switching en masse, the firm would quickly become very aware, and would look into the problem in great detail. 

Comment by RobinHanson on Contra Hanson on AI Risk · 2023-03-05T00:15:30.054Z · LW · GW

You complain that my estimating rates from historical trends is arbitrary, but you offer no other basis for estimating such rates. You only appeal to uncertainty. But there are several other assumptions required for this doomsday scenario. If all you have is logical possibility to argue for piling on several a priori unlikely assumptions, it gets hard to take that seriously. 

Comment by RobinHanson on Contra Hanson on AI Risk · 2023-03-04T15:56:04.544Z · LW · GW

You keep invoking the scenario of a single dominant AI that is extremely intelligent. But that only happens AFTER a single AI fooms to be much better than all other AIs. You can't invoke its super intelligence to explain why its owners fail to notice and control its early growth. 

Comment by RobinHanson on Replicating and extending the grabby aliens model · 2022-07-26T15:40:36.830Z · LW · GW

I comment on this paper here: https://www.overcomingbias.com/2022/07/cooks-critique-of-our-earliness-argument.html

Comment by RobinHanson on Replicating and extending the grabby aliens model · 2022-07-21T18:45:47.563Z · LW · GW

That's an exponential with mean 0.7, or mean 1/0.7?

Comment by RobinHanson on Replicating and extending the grabby aliens model · 2022-07-21T14:32:50.583Z · LW · GW

"My prior on  is distributed 

I don't understand this notation. It reads to me like "103+ 5 Gy";  how is that a distribution? 

Comment by RobinHanson on Linkpost: Robin Hanson - Why Not Wait On AI Risk? · 2022-06-30T18:21:33.079Z · LW · GW

It seems the key feature of this remaining story is the "coalition of AIs" part. I can believe that AIs would get powerful; what I'm skeptical about is the claim that they naturally form a coalition against us. Which is also what I object to in your prior comments. Horses are terrible at coordination compared to humans, and humans weren't built by horses and integrated into a horse society, with each human originally in the service of a particular horse.  

Comment by RobinHanson on Linkpost: Robin Hanson - Why Not Wait On AI Risk? · 2022-06-27T21:18:02.051Z · LW · GW

It's not enough that AI might appear in a few decades; you also need something useful you can do about it now, compared to investing your money to have more to spend later when concrete problems appear.

Comment by RobinHanson on Linkpost: Robin Hanson - Why Not Wait On AI Risk? · 2022-06-27T21:04:47.060Z · LW · GW

I just read through your "what 2026 looks like" post, but didn't see how it is a problematic scenario. Why should we want to work ahead of time to prepare for that scenario?

Comment by RobinHanson on How confident are we that there are no Extremely Obvious Aliens? · 2022-05-09T03:29:09.583Z · LW · GW

In our simulations, we find it overwhelmingly likely that any such spherical volume of an alien civ would be much larger than the full moon in the sky. So no need to study distant galaxies in fine detail; look for huge spheres in the sky. 
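A rough back-of-the-envelope sketch of why such spheres would loom large (the specific radius and distance below are illustrative assumptions, not figures from the paper): a civilization expanding at near lightspeed for a billion years spans on the order of a gigalightyear, and at plausible distances its small-angle angular diameter dwarfs the Moon's half degree.

```python
import math

# Illustrative assumptions (not from the paper): a civ expanding at
# near lightspeed for ~1 Gyr has a sphere of radius ~1 Gly, seen from
# a few Gly away.
radius_gly = 1.0      # assumed sphere radius, in Gly
distance_gly = 4.0    # assumed distance to the sphere's center, in Gly

# Small-angle approximation: angular diameter = 2 * R / d (radians)
angle_deg = math.degrees(2 * radius_gly / distance_gly)

moon_deg = 0.5        # the full Moon subtends about half a degree
print(f"sphere: {angle_deg:.1f} deg across vs Moon: {moon_deg} deg")
```

Even shrinking the assumed radius tenfold still leaves the sphere several times the Moon's apparent size, which is the sense in which no fine-detail survey of distant galaxies is needed.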

Comment by RobinHanson on [Book Review] "Suffering-focused Ethics" by Magnus Vinding · 2021-12-28T14:02:19.673Z · LW · GW

"or more likely we are an early civilization in the universe (according to Robin Hanson’s “Grabby Aliens” model) so, 2) quite possibly there are no grabby aliens populating the universe with S-Risks yet" 

But our model implies that there are in fact many aliens out there right now. Just not in our backward light cone.

Comment by RobinHanson on Talk by Robin Hanson on Elites · 2021-10-10T16:05:24.443Z · LW · GW

Aw,  I still don't know which face goes with the TGGP name.

Comment by RobinHanson on Robin Hanson's Grabby Aliens model explained - part 1 · 2021-09-29T18:38:47.281Z · LW · GW

Wow, it seems that EVERYONE here has this counterargument: "You say humans look weird according to this calculation, but here are other ways we are weird that you don't explain." But there is NO WAY to explain all the ways we are weird, because we are in fact weird in some ways. For each way that we are weird, we should be looking for some other way to see the situation that makes us look less weird. But there is no guarantee of finding that; we may just actually be weird. https://www.overcomingbias.com/2021/07/why-are-we-weird.html

Comment by RobinHanson on Robin Hanson's Grabby Aliens model explained - part 1 · 2021-09-23T22:14:10.235Z · LW · GW

You have the date of the great filter paper wrong; it was 1998, not 1996.

Comment by RobinHanson on Grabby aliens and Zoo hypothesis · 2021-03-04T14:38:28.006Z · LW · GW

Yes, a zoo hypothesis is much like a simulation hypothesis, and the data we use cannot exclude it. (Nor can they exclude a simulation hypothesis.) We choose to assume that grabby aliens change their volumes in some clearly visible way, exactly to exclude zoo hypotheses. 

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T13:53:35.665Z · LW · GW

I'm arguing for simpler rules here overall.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:28:44.484Z · LW · GW

Your point #1 misses the whole norm violation element. The reason it hurts if others are told about an affair is that others disapprove. That isn't why loud music hurts.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:26:57.893Z · LW · GW

Imagine there's a law against tattoos, and I say "Yes some gang members wear them but so do many others. Maybe just outlaw gang tattoos?" You could then respond that I'm messing with edge cases, so we should just leave the rule alone.

Comment by RobinHanson on Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related) · 2020-08-20T20:21:27.310Z · LW · GW

You will allow harmful gossip, but not blackmail, because the first might be pursuing your "values", but the second is seeking to harm. Yet the second can have many motives, and is most commonly to get money. And you are focused too much on motives, rather than on outcomes.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:06:37.865Z · LW · GW

Yup.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:06:15.258Z · LW · GW

The sensible approach is to demand a stream of payments over time. If you reveal it to others who also demand streams, that will cut how much of a stream they are willing to pay you.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:05:11.694Z · LW · GW

You are very much in the minority if you want to abolish norms in general.

Comment by RobinHanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:03:36.762Z · LW · GW

NDAs are also legal in the case where info was known before the agreement. For example, Trump using NDAs to keep affairs secret.

Comment by RobinHanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T20:19:37.992Z · LW · GW

"models are brittle" and "models are limited" ARE the generic complaints I pointed to.

Comment by RobinHanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T20:18:37.074Z · LW · GW

We have lots of models that are useful even when the conclusions follow pretty directly. Such as supply and demand. The question is whether such models are useful, not if they are simple.

Comment by RobinHanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T18:12:10.381Z · LW · GW

There are THOUSANDS of critiques out there of the form "Economic theory can't be trusted because economic theory analyses make assumptions that can't be proven and are often wrong, and conclusions are often sensitive to assumptions." Really, this is a very standard and generic critique, and of course it is quite wrong, as such a critique can be equally made against any area of theory whatsoever, in any field.

Comment by RobinHanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T18:07:53.755Z · LW · GW

The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.

Comment by RobinHanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T17:58:13.459Z · LW · GW

"Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high."

I didn't say that. This is what I actually said:

"surely the burden of 'proof' (really argument) should lie on those say this case is radically different from most found in our large and robust agency literatures."

Comment by RobinHanson on Don't Double-Crux With Suicide Rock · 2020-01-04T21:17:13.226Z · LW · GW

Uh, we are talking about holding people to MUCH higher rationality standards than the ability to parse Phil arguments.

Comment by RobinHanson on Characterising utopia · 2020-01-04T00:07:13.251Z · LW · GW

"At its worst, there might be pressure to carve out the parts of ourselves that make us human, like Hanson discusses in Age of Em."

To be clear, while some people do claim that such things might happen in an Age of Em, I'm not one of them. Of course I can't exclude such things in the long run; few things can be excluded in the long run. But that doesn't seem at all likely to me in the short run.

Comment by RobinHanson on Don't Double-Crux With Suicide Rock · 2020-01-01T21:25:29.813Z · LW · GW

You are a bit too quick to allow the reader the presumption that they have more algorithmic faith than the other folks they talk to. Yes if you are super rational and they are not, you can ignore them. But how did you come to be confident in that description of the situation?

Comment by RobinHanson on Another AI Winter? · 2019-12-31T12:22:04.305Z · LW · GW

Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.

Comment by RobinHanson on Another AI Winter? · 2019-12-25T13:38:19.968Z · LW · GW

To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, that is to say that the chance on this question seemed an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.

Comment by RobinHanson on Another AI Winter? · 2019-12-25T13:36:20.325Z · LW · GW

Foresight asked us to offer topics for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, that is to say that the chance on this question is an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-17T14:02:23.977Z · LW · GW

If you specifically want models with "bounded rationality", why not add in that search term: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&as_vis=1&q=bounded+rationality+principal+agent&btnG=

See also:

https://onlinelibrary.wiley.com/doi/abs/10.1111/geer.12111

https://www.mdpi.com/2073-4336/4/3/508

https://etd.ohiolink.edu/!etd.send_file?accession=miami153299521737861&disposition=inline


Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:43:54.982Z · LW · GW

The % of world income that goes to computer hardware & software, and the % of useful tasks that are done by them.

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:42:33.281Z · LW · GW

Most models have an agent who is fully rational, but I'm not sure what you mean by "principal is very limited".

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:35:43.371Z · LW · GW

I'd also want to know that ratio X for each of the previous booms. There isn't a discrete threshold, because analogies go on a continuum from more to less relevant. An unusually high X would be noteworthy and relevant, but not make prior analogies irrelevant.

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:33:25.066Z · LW · GW

The literature is vast, but this gets you started: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=%22principal+agent%22&btnG=

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-14T17:32:54.505Z · LW · GW

My understanding is that this progress looks like much less of a trend deviation when you scale it against the hardware and other resources devoted to these tasks. And of course in any larger area there are always subareas which happen to progress faster. So we have to judge how large a subarea is going faster, and whether that size is unusually large.

Life extension also suffers from the 100,000 fans hype problem.

Comment by RobinHanson on Robin Hanson on the futurist focus on AI · 2019-11-14T15:41:16.556Z · LW · GW

I'll respond to comments here, at least for a few days.

Comment by RobinHanson on Prediction Markets Don't Reveal The Territory · 2019-10-19T12:02:26.765Z · LW · GW

Markets can work fine with only a few participants. But they do need sufficient incentives to participate.

Comment by RobinHanson on Prediction Markets Don't Reveal The Territory · 2019-10-16T23:18:22.678Z · LW · GW

"of all the hidden factors which caused the market consensus to reach this point, which, if any of them, do we have any power to affect?" A prediction market can only answer the question you ask it. You can use a conditional market to ask if a particular factor has an effect on an outcome. Yes of course it will cost more to ask more questions. If there were a lot of possible factors, you might offer a prize to whomever proposes a factor that turns out to have a big effect. Yes it would cost to offer such a prize, because it could be work to find such factors.

Comment by RobinHanson on Quotes from Moral Mazes · 2019-06-05T13:21:33.405Z · LW · GW

I was once that young and naive. But I'd never heard of this book Moral Mazes. Seems great, and I intend to read it. https://twitter.com/robinhanson/status/1136260917644185606

Comment by RobinHanson on Simple Rules of Law · 2019-05-21T16:47:24.368Z · LW · GW

The CEO proposal is to fire them at the end of the quarter if the prices just before then so indicate. This solves the problem of the market traders expecting later traders to have more info than they. And it doesn't mean that the board can't fire them at other times for other reasons.

Comment by RobinHanson on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T15:36:58.397Z · LW · GW

The claim that AI is vastly better at coordination seems to me implausible on its face. I'm open to argument, but will remain skeptical until I hear good arguments.

Comment by RobinHanson on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T18:34:07.982Z · LW · GW

Secrecy CAN have private value. But it isn't at all clear that we are typically together better off with secrets. There are some cases, to be sure, where that is true. But there are also so many cases where it is not.

Comment by RobinHanson on Why didn't Agoric Computing become popular? · 2019-02-17T16:01:27.096Z · LW · GW

My guess is that the reason is close to why security is so bad: It's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take time to do that. Similarly, it takes time to think about what parts of a system should own what and be trusted to judge what. Easier/faster to just make a system that does things, without attending to this, even if that is very costly in the long run. When the long run arrives, the earlier players are usually gone.

Comment by RobinHanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:43:26.173Z · LW · GW

We have to imagine that we have some influence over the allocation of something, or there's nothing to debate here. Call it "resources" or "talent" or whatever, if there's nothing to move, there's nothing to discuss.

I'm skeptical solving hard philosophical problems will be of much use here. Once we see the actual form of relevant systems then we can do lots of useful work on concrete variations.

I'd call "human labor being obsolete within 10 years … 15%, and within 20 years … 35%" crazy extreme predictions, and happily bet against them.

If we look at direct economic impact, we've had a pretty steady trend for at least a century of jobs displaced by automation, and the continuation of past trend puts full AGI a long way off. So you need a huge unprecedented foom-like lump of innovation to have change that big that soon.