Posts

What Evidence Is AlphaGo Zero Re AGI Complexity? 2017-10-22T02:28:45.764Z · score: 51 (40 votes)
What Program Are You? 2009-10-12T00:29:19.218Z · score: 28 (30 votes)
Least Signaling Activities? 2009-05-22T02:46:29.949Z · score: 29 (37 votes)
Rationality Toughness Tests 2009-04-06T01:12:31.928Z · score: 26 (27 votes)
Most Rationalists Are Elsewhere 2009-03-29T21:46:49.307Z · score: 56 (65 votes)
Rational Me or We? 2009-03-17T13:39:29.073Z · score: 135 (141 votes)
The Costs of Rationality 2009-03-03T18:13:17.465Z · score: 35 (45 votes)
Test Your Rationality 2009-03-01T13:21:34.375Z · score: 41 (44 votes)

Comments

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T13:53:35.665Z · score: 4 (2 votes) · LW · GW

I'm arguing for simpler rules here overall.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:28:44.484Z · score: 4 (2 votes) · LW · GW

Your point #1 misses the whole norm violation element. The reason it hurts if others are told about an affair is that others disapprove. That isn't why loud music hurts.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:26:57.893Z · score: 4 (2 votes) · LW · GW

Imagine there's a law against tattoos, and I say "Yes some gang members wear them but so do many others. Maybe just outlaw gang tattoos?" You could then respond that I'm messing with edge cases, so we should just leave the rule alone.

Comment by robinhanson on Analyzing Blackmail Being Illegal (Hanson and Mowshowitz related) · 2020-08-20T20:21:27.310Z · score: 2 (1 votes) · LW · GW

You will allow harmful gossip, but not blackmail, because the first might be pursuing your "values", while the second is seeking to harm. Yet the second can have many motives, and is most commonly to get money. And you are focused too much on motives, rather than on outcomes.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:06:37.865Z · score: 3 (2 votes) · LW · GW

Yup.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:06:15.258Z · score: 3 (2 votes) · LW · GW

The sensible approach is to demand a stream of payments over time. If you reveal it to others who also demand streams, that will cut how much of a stream the target is willing to pay you.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:05:11.694Z · score: 4 (2 votes) · LW · GW

You are very much in the minority if you want to abolish norms in general.

Comment by robinhanson on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-20T20:03:36.762Z · score: 3 (2 votes) · LW · GW

NDAs are also legal in the case where info was known before the agreement. For example, Trump used NDAs to keep affairs secret.

Comment by robinhanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T20:19:37.992Z · score: 2 (4 votes) · LW · GW

"models are brittle" and "models are limited" ARE the generic complaints I pointed to.

Comment by robinhanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T20:18:37.074Z · score: 4 (2 votes) · LW · GW

We have lots of models that are useful even when the conclusions follow pretty directly, such as supply and demand. The question is whether such models are useful, not whether they are simple.

Comment by robinhanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T18:12:10.381Z · score: 3 (7 votes) · LW · GW

There are THOUSANDS of critiques out there of the form "Economic theory can't be trusted because economic theory analyses make assumptions that can't be proven and are often wrong, and conclusions are often sensitive to assumptions." Really, this is a very standard and generic critique, and of course it is quite wrong, as such a critique can be equally made against any area of theory whatsoever, in any field.

Comment by robinhanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T18:07:53.755Z · score: 5 (3 votes) · LW · GW

The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.

Comment by robinhanson on What can the principal-agent literature tell us about AI risk? · 2020-02-11T17:58:13.459Z · score: 5 (5 votes) · LW · GW

"Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high."

I didn't say that. This is what I actually said:

"surely the burden of 'proof' (really argument) should lie on those say this case is radically different from most found in our large and robust agency literatures."

Comment by robinhanson on Don't Double-Crux With Suicide Rock · 2020-01-04T21:17:13.226Z · score: 4 (2 votes) · LW · GW

Uh, we are talking about holding people to MUCH higher rationality standards than the ability to parse philosophy arguments.

Comment by robinhanson on Characterising utopia · 2020-01-04T00:07:13.251Z · score: 6 (3 votes) · LW · GW

"At its worst, there might be pressure to carve out the parts of ourselves that make us human, like Hanson discusses in Age of Em."

To be clear, while some people do claim that such things might happen in an Age of Em, I'm not one of them. Of course I can't exclude such things in the long run; few things can be excluded in the long run. But that doesn't seem at all likely to me in the short run.

Comment by robinhanson on Don't Double-Crux With Suicide Rock · 2020-01-01T21:25:29.813Z · score: 38 (14 votes) · LW · GW

You are a bit too quick to allow the reader the presumption that they have more algorithmic faith than the other folks they talk to. Yes if you are super rational and they are not, you can ignore them. But how did you come to be confident in that description of the situation?

Comment by robinhanson on Another AI Winter? · 2019-12-31T12:22:04.305Z · score: 8 (4 votes) · LW · GW

Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.

Comment by robinhanson on Another AI Winter? · 2019-12-25T13:38:19.968Z · score: 37 (13 votes) · LW · GW

To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, it is to say that the chance on this question seemed an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.

Comment by robinhanson on Another AI Winter? · 2019-12-25T13:36:20.325Z · score: 15 (6 votes) · LW · GW

Foresight asked us to offer topics for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, it is to say that the chance on this question is an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-17T14:02:23.977Z · score: -5 (5 votes) · LW · GW

If you specifically want models with "bounded rationality", why not add in that search term: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&as_vis=1&q=bounded+rationality+principal+agent&btnG=

See also:

https://onlinelibrary.wiley.com/doi/abs/10.1111/geer.12111

https://www.mdpi.com/2073-4336/4/3/508

https://etd.ohiolink.edu/!etd.send_file?accession=miami153299521737861&disposition=inline

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:43:54.982Z · score: 4 (2 votes) · LW · GW

The % of world income that goes to computer hardware & software, and the % of useful tasks that are done by them.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:42:33.281Z · score: 2 (1 votes) · LW · GW

Most models have an agent who is fully rational, but I'm not sure what you mean by "principal is very limited".

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:35:43.371Z · score: 3 (2 votes) · LW · GW

I'd also want to know that ratio X for each of the previous booms. There isn't a discrete threshold, because analogies lie on a continuum from more to less relevant. An unusually high X would be noteworthy and relevant, but would not make prior analogies irrelevant.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:33:25.066Z · score: -10 (6 votes) · LW · GW

The literature is vast, but this gets you started: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=%22principal+agent%22&btnG=

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-14T17:32:54.505Z · score: 5 (4 votes) · LW · GW

My understanding is that this progress looks much less like a trend deviation when you scale it against the hardware and other resources devoted to these tasks. And of course in any larger area there are always subareas which happen to progress faster. So we have to judge how large the faster-moving subarea is, and whether that size is unusually large.

Life extension also suffers from the 100,000 fans hype problem.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-14T15:41:16.556Z · score: 22 (10 votes) · LW · GW

I'll respond to comments here, at least for a few days.

Comment by robinhanson on Prediction Markets Don't Reveal The Territory · 2019-10-19T12:02:26.765Z · score: 3 (2 votes) · LW · GW

Markets can work fine with only a few participants. But they do need sufficient incentives to participate.

Comment by robinhanson on Prediction Markets Don't Reveal The Territory · 2019-10-16T23:18:22.678Z · score: 5 (5 votes) · LW · GW

"of all the hidden factors which caused the market consensus to reach this point, which, if any of them, do we have any power to affect?" A prediction market can only answer the question you ask it. You can use a conditional market to ask if a particular factor has an effect on an outcome. Yes of course it will cost more to ask more questions. If there were a lot of possible factors, you might offer a prize to whomever proposes a factor that turns out to have a big effect. Yes it would cost to offer such a prize, because it could be work to find such factors.

Comment by robinhanson on Quotes from Moral Mazes · 2019-06-05T13:21:33.405Z · score: 36 (19 votes) · LW · GW

I was once that young and naive. But I'd never heard of this book Moral Mazes. Seems great, and I intend to read it. https://twitter.com/robinhanson/status/1136260917644185606

Comment by robinhanson on Simple Rules of Law · 2019-05-21T16:47:24.368Z · score: 4 (2 votes) · LW · GW

The CEO proposal is to fire them at the end of the quarter if the prices just before then so indicate. This solves the problem of market traders expecting later traders to have more info than they do. And it doesn't mean that the board can't fire them at other times for other reasons.
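
A minimal sketch of the commitment rule being described, with hypothetical prices; the detail that trades in the non-obtaining condition are reverted is an assumption about how such conditional markets are commonly proposed to settle.

```python
def quarter_end_decision(price_if_fired: float, price_if_retained: float) -> str:
    """Board commits in advance: at quarter end, act as the conditional
    market prices indicate. Trades in the condition that doesn't obtain
    are reverted, so each market prices the stock *given* that choice."""
    return "fire" if price_if_fired > price_if_retained else "retain"

# Hypothetical quarter-end conditional stock prices:
print(quarter_end_decision(price_if_fired=31.20, price_if_retained=28.75))  # fire
```

Because the rule triggers only at a pre-announced date, a trader need not fear holding a position that later, better-informed traders will exploit before the decision is made.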

Comment by robinhanson on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T15:36:58.397Z · score: 5 (3 votes) · LW · GW

The claim that AI is vastly better at coordination seems to me implausible on its face. I'm open to argument, but will remain skeptical until I hear good arguments.

Comment by robinhanson on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T18:34:07.982Z · score: 7 (4 votes) · LW · GW

Secrecy CAN have private value. But it isn't at all clear that we are typically better off collectively with secrets. There are some cases, to be sure, where that is true. But there are also many cases where it is not.

Comment by robinhanson on Why didn't Agoric Computing become popular? · 2019-02-17T16:01:27.096Z · score: 18 (8 votes) · LW · GW

My guess is that the reason is close to why security is so bad: it's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take the time to do that. Similarly, it takes time to think about which parts of a system should own what, and be trusted to judge what. It's easier/faster to just make a system that does things, without attending to this, even if that is very costly in the long run. And when the long run arrives, the earlier players are usually gone.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:43:26.173Z · score: 5 (5 votes) · LW · GW

We have to imagine that we have some influence over the allocation of something, or there's nothing to debate here. Call it "resources" or "talent" or whatever, if there's nothing to move, there's nothing to discuss.

I'm skeptical that solving hard philosophical problems will be of much use here. Once we see the actual form of the relevant systems, we can do lots of useful work on concrete variations.

I'd call "human labor being obsolete within 10 years … 15%, and within 20 years … 35%" crazy extreme predictions, and happily bet against them.

If we look at direct economic impact, we've had a pretty steady trend for at least a century of jobs displaced by automation, and the continuation of past trend puts full AGI a long way off. So you need a huge unprecedented foom-like lump of innovation to have change that big that soon.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:33:23.333Z · score: 2 (4 votes) · LW · GW

Solving problems is mostly a matter of total resources devoted, not time devoted. Yes, some problems have intrinsic clocks, but this doesn't look like such a problem. If we get signs of a problem looming, and can devote a lot of resources then, that makes it tempting to save resources today for such a future push, as we'll know a lot more then, and resources saved today become more resources when spent later.
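
A toy calculation of the tradeoff being described, with assumed numbers: if saved resources compound and most of the problem-relevant information arrives later, deferring spending can buy strictly more problem-solving capacity.

```python
# Toy comparison, with assumed numbers: spend 100 units of resources now,
# or invest them and spend at the time clear warning signs appear.
budget = 100.0
growth = 0.05           # assumed annual return on saved resources
years_to_warning = 20   # assumed lead time before the problem is visible

deferred = budget * (1 + growth) ** years_to_warning
print(f"spend now: {budget:.0f} units; spend at warning: {deferred:.0f} units")
# -> ~265 units later, plus much better information about the actual problem
```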

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T17:45:48.055Z · score: 8 (4 votes) · LW · GW

Can you point to a good/best argument for the claim that AGI is coming soon enough to justify lots of effort today?

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:54:37.312Z · score: 7 (4 votes) · LW · GW

It's not so much about "fast" v. "slow" as about the chances for putting lots of resources into the problem with substantial warning. Even if things change fast, as long as you get enough warning and resources can be moved to the problem fast enough, waiting still makes sense.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:51:19.843Z · score: 4 (5 votes) · LW · GW

By "main reason for concern" I mean best arguments; I'm not trying to categorize people's motivations.

AGI isn't remotely close, and I just don't believe people who think they see signs of that. Yes for any problem that we'll eventually want to work on, a few people should work on it now just so someone is tracking the problem, ready to tell the rest of us if they see signs of it coming soon. But I see people calling for much more than that minimal tracking effort.

Most people who work in research areas call for more relative funding for their areas. So the rest of us just can't be in the habit of believing such calls. We must hold a higher standard than "people who get $ to work on this say more $ should go to this now."

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:54:27.559Z · score: 12 (9 votes) · LW · GW

Even if there will be problems worth working on at some point, if we will know a lot more later, and if resources today can be traded for a lot more resources later, the temptation to wait should be strong. The foom scenario has few visible indications of a problem looming, forcing one to work on the problems far ahead of time. But in scenarios where there's warning, lots more resources, and better tools and understanding later, waiting makes a lot more sense.

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:45:41.255Z · score: 8 (4 votes) · LW · GW

Your critique is plausible. I was never a fan of these supposedly simple interfaces.

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:44:58.792Z · score: 10 (5 votes) · LW · GW

There have long been lots of unexplored good ideas for interface improvements. But they need to be tested in the context of real systems and users.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-15T12:59:16.207Z · score: 7 (4 votes) · LW · GW

Yes, we only did a half dozen trials, and mostly with new players, so players were inexperienced.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:14:52.942Z · score: 9 (6 votes) · LW · GW

Note that all this analysis is based on thinking about the game, not from playing the game. From my observing game play, I'd say that price accuracy does not noticeably suffer in the endgame.

For game design, yes, it is good to include a few characters who will be excluded early, so people attend to the story in the early period.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:13:19.602Z · score: 5 (3 votes) · LW · GW

If more than one person "did it", you could pay off that fraction of $100 to each. So if two did it, each card is worth $50 at the end.

Comment by robinhanson on Confusions Concerning Pre-Rationality · 2018-06-05T08:55:10.909Z · score: 19 (4 votes) · LW · GW

The problem of how to be rational is hard enough that one shouldn’t expect to get good proposals for complete algorithms for how to be rational in all situations. Instead we must chip away at the problem. And one way to do that is to slowly collect rationality constraints. I saw myself as contributing by possibly adding a new one. I’m not very moved by the complaint “but what is the algorithm to become fully rational from any starting point?” as that is just too hard a problem to solve all at once. 

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T13:37:44.060Z · score: 3 (2 votes) · LW · GW

I disagree with the claim that "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools."

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:52:13.321Z · score: 3 (5 votes) · LW · GW

Consider the example of whether a big terror attack indicates that there has been an increase in the average rate or harm of terror attacks. You could easily say "You can't possibly claim that the big terror attack yesterday is no evidence; and if it is evidence, it is surely in the direction of the average rate/harm having increased." Technically correct, but then every other day without such a big attack is also evidence for a slow decrease in the rate/harm of attacks. Even if the rate/harm didn't change, every once in a while you should expect a big attack. This is the sense in which I'd say that finding one more big tool isn't much evidence that big tools will matter more in the future. Sure, the day when you find such a tool is itself weak evidence in that direction. But the whole recent history of that day and all the days before it may be an evidential wash.
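
A minimal numerical sketch of the "evidential wash" point, under an assumed Poisson model of big attacks; the rates and counts are illustrative assumptions. One big event plus a year of quiet days can net out to roughly zero evidence, or even slightly favor the lower rate.

```python
import math

def poisson_log_lik(k, days, daily_rate):
    """Log-likelihood of k big attacks in `days` days under a Poisson process."""
    lam = daily_rate * days
    return k * math.log(lam) - lam - math.lgamma(k + 1)

days = 365
rate_unchanged, rate_doubled = 1 / 365, 2 / 365   # assumed daily attack rates
k = 1                                             # one big attack observed

log_bf = poisson_log_lik(k, days, rate_doubled) - poisson_log_lik(k, days, rate_unchanged)
print(f"log Bayes factor (doubled vs. unchanged): {log_bf:+.2f}")
# -> -0.31: the attack day pushes toward "doubled", the quiet days push back,
#    and the year as a whole slightly favors the unchanged rate.
```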

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:40:03.702Z · score: 25 (11 votes) · LW · GW

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:38:49.510Z · score: 6 (2 votes) · LW · GW

As I said, I'm treating it as the difference of learning N simple general tools to learning N+1 such tools. Do you think it stronger evidence than that, or do you think I'm not acknowledging how big that is?

Comment by robinhanson on Why no total winner? · 2017-10-17T21:32:13.114Z · score: 11 (3 votes) · LW · GW

Why assume AGI doesn't have problems analogous to agency problems? It will have parts of itself that it doesn't understand well, and which might go rogue.