Posts

What Evidence Is AlphaGo Zero Re AGI Complexity? 2017-10-22T02:28:45.764Z · score: 56 (39 votes)
What Program Are You? 2009-10-12T00:29:19.218Z · score: 28 (30 votes)
Least Signaling Activities? 2009-05-22T02:46:29.949Z · score: 27 (36 votes)
Rationality Toughness Tests 2009-04-06T01:12:31.928Z · score: 26 (27 votes)
Most Rationalists Are Elsewhere 2009-03-29T21:46:49.307Z · score: 55 (64 votes)
Rational Me or We? 2009-03-17T13:39:29.073Z · score: 127 (135 votes)
The Costs of Rationality 2009-03-03T18:13:17.465Z · score: 34 (44 votes)
Test Your Rationality 2009-03-01T13:21:34.375Z · score: 39 (44 votes)

Comments

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-17T14:02:23.977Z · score: 1 (2 votes) · LW · GW

If you specifically want models with "bounded rationality", why not add in that search term: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&as_vis=1&q=bounded+rationality+principal+agent&btnG=

See also:

https://onlinelibrary.wiley.com/doi/abs/10.1111/geer.12111

https://www.mdpi.com/2073-4336/4/3/508

https://etd.ohiolink.edu/!etd.send_file?accession=miami153299521737861&disposition=inline

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:43:54.982Z · score: 2 (1 votes) · LW · GW

The % of world income that goes to computer hardware & software, and the % of useful tasks that are done by them.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:42:33.281Z · score: 2 (1 votes) · LW · GW

Most models have an agent who is fully rational, but I'm not sure what you mean by "principal is very limited".

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:35:43.371Z · score: 3 (2 votes) · LW · GW

I'd also want to know that ratio X for each of the previous booms. There isn't a discrete threshold, because analogies lie on a continuum from more to less relevant. An unusually high X would be noteworthy and relevant, but would not make prior analogies irrelevant.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-16T12:33:25.066Z · score: -4 (4 votes) · LW · GW

The literature is vast, but this gets you started: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=%22principal+agent%22&btnG=

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-14T17:32:54.505Z · score: 4 (3 votes) · LW · GW

My understanding is that this progress looks much less like a trend deviation when you scale it against the hardware and other resources devoted to these tasks. And of course, in any larger area there are always subareas which happen to progress faster. So we have to judge how large the faster-moving subarea is, and whether that size is unusually large.

Life extension also suffers from the 100,000 fans hype problem.

Comment by robinhanson on Robin Hanson on the futurist focus on AI · 2019-11-14T15:41:16.556Z · score: 19 (8 votes) · LW · GW

I'll respond to comments here, at least for a few days.

Comment by robinhanson on Prediction Markets Don't Reveal The Territory · 2019-10-19T12:02:26.765Z · score: 3 (2 votes) · LW · GW

Markets can work fine with only a few participants. But they do need sufficient incentives to participate.

Comment by robinhanson on Prediction Markets Don't Reveal The Territory · 2019-10-16T23:18:22.678Z · score: 5 (5 votes) · LW · GW

"of all the hidden factors which caused the market consensus to reach this point, which, if any of them, do we have any power to affect?" A prediction market can only answer the question you ask it. You can use a conditional market to ask if a particular factor has an effect on an outcome. Yes of course it will cost more to ask more questions. If there were a lot of possible factors, you might offer a prize to whomever proposes a factor that turns out to have a big effect. Yes it would cost to offer such a prize, because it could be work to find such factors.

Comment by robinhanson on Quotes from Moral Mazes · 2019-06-05T13:21:33.405Z · score: 34 (17 votes) · LW · GW

I was once that young and naive. But I'd never heard of this book, Moral Mazes. It seems great, and I intend to read it. https://twitter.com/robinhanson/status/1136260917644185606

Comment by robinhanson on Simple Rules of Law · 2019-05-21T16:47:24.368Z · score: 4 (2 votes) · LW · GW

The CEO proposal is to fire them at the end of the quarter if the prices just before then so indicate. This solves the problem of market traders expecting later traders to have more info than they do. And it doesn't mean that the board can't fire them at other times for other reasons.

Comment by robinhanson on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T15:36:58.397Z · score: 5 (3 votes) · LW · GW

The claim that AI is vastly better at coordination seems to me implausible on its face. I'm open to argument, but will remain skeptical until I hear good arguments.

Comment by robinhanson on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T18:34:07.982Z · score: 7 (4 votes) · LW · GW

Secrecy CAN have private value. But it isn't at all clear that we are typically better off together with secrets. There are some cases, to be sure, where that is true. But there are also so many cases where it is not.

Comment by robinhanson on Why didn't Agoric Computing become popular? · 2019-02-17T16:01:27.096Z · score: 18 (8 votes) · LW · GW

My guess is that the reason is close to why security is so bad: it's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take the time to do that. Similarly, it takes time to think about which parts of a system should own what, and which should be trusted to judge what. It's easier and faster to just make a system that does things, without attending to this, even if that is very costly in the long run. And when the long run arrives, the earlier players are usually gone.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:43:26.173Z · score: 5 (5 votes) · LW · GW

We have to imagine that we have some influence over the allocation of something, or there's nothing to debate here. Call it "resources" or "talent" or whatever; if there's nothing to move, there's nothing to discuss.

I'm skeptical that solving hard philosophical problems will be of much use here. Once we see the actual form of the relevant systems, we can do lots of useful work on concrete variations.

I'd call "human labor being obsolete within 10 years … 15%, and within 20 years … 35%" crazy extreme predictions, and happily bet against them.

If we look at direct economic impact, we've had a pretty steady trend for at least a century of jobs being displaced by automation, and the continuation of that past trend puts full AGI a long way off. So you need a huge, unprecedented, foom-like lump of innovation to get change that big that soon.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:33:23.333Z · score: 3 (3 votes) · LW · GW

Solving problems is mostly a matter of the total resources devoted, not the time devoted. Yes, some problems have intrinsic clocks, but this doesn't look like such a problem. If we will get signs of a problem looming, and can devote a lot of resources then, it is tempting to save resources today for such a future push: we'll know a lot more then, and resources saved today become more resources when spent later.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T17:45:48.055Z · score: 8 (4 votes) · LW · GW

Can you point to a good/best argument for the claim that AGI is coming soon enough to justify lots of effort today?

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:54:37.312Z · score: 7 (4 votes) · LW · GW

It's not so much about "fast" vs. "slow" as about the chances of putting lots of resources into the problem with substantial warning. Even if things change fast, as long as you get enough warning, and resources can be moved to the problem fast enough, waiting still makes sense.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:51:19.843Z · score: 4 (5 votes) · LW · GW

By "main reason for concern" I mean best arguments; I'm not trying to categorize people's motivations.

AGI isn't remotely close, and I just don't believe people who think they see signs of it. Yes, for any problem that we'll eventually want to work on, a few people should work on it now, just so someone is tracking the problem, ready to tell the rest of us if they see signs of it coming soon. But I see people calling for much more than that minimal tracking effort.

Most people who work in research areas call for more relative funding for their areas. So the rest of us just can't be in the habit of believing such calls. We must hold a higher standard than "people who get $ to work on this say more $ should go to this now."

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:54:27.559Z · score: 13 (8 votes) · LW · GW

Even if there will be problems worth working on at some point, if we will know a lot more later, and if resources today can be traded for a lot more resources later, then the temptation to wait should be strong. The foom scenario has few visible indications of a problem looming, forcing one to work on the problems far ahead of time. But in scenarios with warning, lots more resources, and better tools and understanding later, waiting makes a lot more sense.
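A minimal sketch of that tradeoff, with made-up numbers for resource growth, later effectiveness, and the chance of getting no usable warning:

```python
def value_of_waiting(resources: float, years: float, growth: float,
                     effectiveness_gain: float, p_no_warning: float) -> float:
    """Expected problem-solving work from saving resources for later.

    Spending `resources` now buys that many units of work; waiting
    `years` compounds them at `growth` and multiplies by
    `effectiveness_gain` (better tools and understanding later).
    `p_no_warning` is the chance the problem arrives with too little
    warning to deploy the saved resources (the foom-style case).
    """
    later = resources * (1 + growth) ** years * effectiveness_gain
    return (1 - p_no_warning) * later

# Spending 1 unit now buys 1 unit of work; on these (illustrative)
# numbers, waiting 20 years buys about 4.8 units, so waiting wins.
print(value_of_waiting(1.0, years=20, growth=0.05,
                       effectiveness_gain=2.0, p_no_warning=0.1))
```

Push p_no_warning high enough and spending now wins instead, which is why the choice of scenario does the real work in this debate.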

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:45:41.255Z · score: 8 (4 votes) · LW · GW

Your critique is plausible. I was never a fan of these supposedly simple interfaces.

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:44:58.792Z · score: 10 (5 votes) · LW · GW

There have long been lots of unexplored good ideas for interface improvements. But they need to be tested in the context of real systems and users.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-15T12:59:16.207Z · score: 7 (4 votes) · LW · GW

Yes, we only did a half dozen trials, and mostly with new players, so players were inexperienced.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:14:52.942Z · score: 9 (6 votes) · LW · GW

Note that all this analysis is based on thinking about the game, not on playing it. From my observation of game play, I'd say that price accuracy does not noticeably suffer in the endgame.

For game design, yes, it's good to include a few characters who will be excluded early, so that people attend to the story in the early period.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:13:19.602Z · score: 5 (3 votes) · LW · GW

If more than one person "did it", you could pay off that fraction of $100 to each. So if two did it, each card is worth $50 at the end.

Comment by robinhanson on Confusions Concerning Pre-Rationality · 2018-06-05T08:55:10.909Z · score: 19 (4 votes) · LW · GW

The problem of how to be rational is hard enough that one shouldn’t expect to get good proposals for complete algorithms for how to be rational in all situations. Instead we must chip away at the problem. And one way to do that is to slowly collect rationality constraints. I saw myself as contributing by possibly adding a new one. I’m not very moved by the complaint “but what is the algorithm to become fully rational from any starting point?” as that is just too hard a problem to solve all at once. 

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T13:37:44.060Z · score: 3 (2 votes) · LW · GW

I disagree with the claim that "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools."

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:52:13.321Z · score: 3 (5 votes) · LW · GW

Consider the example of whether a big terror attack indicates that there has been an increase in the average rate or harm of terror attacks. You could easily say, "You can't possibly claim that the big terror attack yesterday is no evidence; and if it is evidence, it is surely in the direction of the average rate/harm having increased." Technically correct, but then every other day without such a big attack is also evidence for a slow decrease in the rate/harm of attacks. Even if the rate/harm didn't change, every once in a while you should expect a big attack. This is the sense in which I'd say that finding one more big tool isn't much evidence that big tools will matter more in the future. Sure, the day when you find such a tool is itself weak evidence in that direction. But the whole recent history of that day and all the days before it may be an evidential wash.
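A minimal sketch of that updating logic, assuming a simple Poisson model and made-up numbers:

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of seeing k events when lam events are expected."""
    return lam ** k * exp(-lam) / factorial(k)

# Hypothetical data: 3 big attacks over 1000 days, against an assumed
# base rate of 0.003 attacks/day (so 3 were expected if nothing changed).
days, attacks, base_rate = 1000, 3, 0.003

unchanged = poisson_pmf(attacks, base_rate * days)    # rate stayed the same
doubled = poisson_pmf(attacks, 2 * base_rate * days)  # rate doubled

# Prints ~0.40: the full history (attack days AND quiet days together)
# favors "unchanged", even though each attack day alone favors "doubled".
print(f"likelihood ratio, doubled vs unchanged: {doubled / unchanged:.2f}")
```

The same arithmetic applies to tool discoveries: one discovery day updates you a little toward "big tools matter more", while the stretch of ordinary days updates you the other way.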

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:40:03.702Z · score: 23 (9 votes) · LW · GW

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:38:49.510Z · score: 6 (2 votes) · LW · GW

As I said, I'm treating it as the difference between learning N simple general tools and learning N+1 such tools. Do you think it is stronger evidence than that, or do you think I'm not acknowledging how big that is?

Comment by robinhanson on Why no total winner? · 2017-10-17T21:32:13.114Z · score: 11 (3 votes) · LW · GW

Why assume AGI doesn't have problems analogous to agency problems? It will have parts of itself that it doesn't understand well, and which might go rogue.

Comment by robinhanson on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-26T23:18:00.977Z · score: 8 (8 votes) · LW · GW

If it is the possibility of large amounts of torture that bothers you, rather than large ratios of torture experience relative to other, better experience, then any growing future should bother you, and you should just want to end civilization. But if it is ratios that concern you, then since torture isn't usually profitable, most em experience won't be torture. Even if some bad folks being rich means they could afford a lot of torture, that would still be a small fraction of total experience.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T19:13:33.547Z · score: 6 (1 votes) · LW · GW

Even if you use truth-promoting norms, their effect can be weak enough that other effects overwhelm this effect. The "rationalist community" is different in a great many ways from other communities of thought.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T13:53:01.508Z · score: 9 (6 votes) · LW · GW

I said I haven't seen this community as exceptionally accurate, and you say that you have seen that, and called my view "uncharitable". I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn't mean that to be a criticism that you or others have not worked hard enough to create such a track record. Surely you can understand why outsiders might find suspect your standard of saying you think your community is more accurate because they more often agree with your beliefs.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T00:50:54.596Z · score: 7 (2 votes) · LW · GW

Yes, believe fewer things, and believe them less strongly. On abstract beliefs, I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T19:57:10.309Z · score: 16 (6 votes) · LW · GW

"charitable" seems an odd name for the tendency to assume that you and your friends are better than other people, because well it just sure seems that way to you and your friends. You don't have an accuracy track record of this group to refer to, right?

Comment by robinhanson on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T18:31:46.662Z · score: 20 (24 votes) · LW · GW

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T18:23:11.042Z · score: 2 (2 votes) · LW · GW

I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.

You can of course bet on eventual outcome of conservative epistemic norms, just as you can bet on what actually happens. Not sure what else you can do to create incentives now to believe what conservative norms will eventually say.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T14:02:07.025Z · score: 11 (6 votes) · LW · GW

That only helps if your "rationalist community" in fact pushes you to more accurate reasoning. Merely giving your community that name is far from sufficient however, and in my experience the "rationalist community" is mostly that in name only.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T14:00:25.659Z · score: 18 (8 votes) · LW · GW

"the most powerful tool is adopting epistemic norms which are appropriately conservative; to rely more on the scientific method, on well-formed arguments, on evidence that can be clearly articulated and reproduced, and so on."

A simple summary: Believe Less. Hold higher standards for what is sufficient reason to believe. Of course this is in fact what most people actually do. They don't bother to hold beliefs on the kind of abstract topics on which Paul wants to hold beliefs.

"1. What my decisions are optimized for. .. 2. What I consciously believe I want."

No. 2 might be better thought of as "What my talk is optimized for." Both systems are highly optimized. Seeing it this way emphasizes that if you want to make the two results more consistent, you should move your talk closer to action, as with bets or other more concrete actions.

Comment by robinhanson on Superintelligence 19: Post-transition formation of a singleton · 2015-01-20T16:06:57.454Z · score: 4 (4 votes) · LW · GW

If signing a contract were all that we needed to coordinate well, we would already be coordinating as much as is useful now. We already have good strong reasons to want to coordinate for mutual benefit.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:52:05.925Z · score: 1 (1 votes) · LW · GW

Not only is this feasible, this is in fact the usual default situation in a simple market economy.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:51:20.492Z · score: 1 (1 votes) · LW · GW

Poverty doesn't require that you work for others; most people in history were poor, but were not employees. Through most of history, rich people did in fact feel safe among the poor; they didn't hang out there because that made them lower status. You can only deliver universal abundance if you coordinate to strongly limit population growth. So you mean abundance for the already existing, and the worst poverty possible for the not-yet-existing.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:48:10.477Z · score: 1 (1 votes) · LW · GW

With near subsistence wages there's not much to donate, so no need to bother.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:47:02.866Z · score: 1 (1 votes) · LW · GW

Yes, the surroundings would need to be not overly distracting. But that is quite consistent with luxurious.

Comment by robinhanson on The impact of whole brain emulation · 2015-01-14T00:37:20.791Z · score: 0 (0 votes) · LW · GW

von Neumann was very smart, but I very much doubt he would have been better than everyone at all jobs if trained in those jobs. There is still comparative advantage, even among the very smartest and most capable.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:34:21.922Z · score: 3 (3 votes) · LW · GW

The idea of selecting for people willing to donate everything to an employer seems fanciful and not very relevant. In a low-wage competitive economy, the question would instead be whether one is willing to create new copies conditional on their earning low wages. If large fractions of people are so willing, then one needn't pay much selection power to get that feature.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:31:15.016Z · score: 6 (6 votes) · LW · GW

There is a difference between what we might each choose if we ruled the world, and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles. We would also need to coordinate to achieve outcomes suggested by those principles. That is much much harder.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:27:28.567Z · score: 2 (2 votes) · LW · GW

Having a low wage is just not like being a slave. The vast majority of humans who have ever lived were poor, but aside from the fact that slaves are also poor in a sense, the life of a typical poor person was not like the life of a slave. Ems might be poor in the sense of needing to work many hours to survive, but they would also have no pain, hunger, cold, sickness, grime, etc., unless they wanted them. Their physical appearance and surroundings would be what we'd see as very luxurious.

Comment by robinhanson on Open thread, January 25- February 1 · 2014-01-28T19:32:47.741Z · score: 5 (5 votes) · LW · GW

Yes: by judging someone on their credentials in other fields, you can't tell whether they are just making stuff up on this subject or have studied it for 15 years.