Posts

What Evidence Is AlphaGo Zero Re AGI Complexity? 2017-10-22T02:28:45.764Z · score: 56 (39 votes)
What Program Are You? 2009-10-12T00:29:19.218Z · score: 28 (30 votes)
Least Signaling Activities? 2009-05-22T02:46:29.949Z · score: 27 (36 votes)
Rationality Toughness Tests 2009-04-06T01:12:31.928Z · score: 26 (27 votes)
Most Rationalists Are Elsewhere 2009-03-29T21:46:49.307Z · score: 55 (64 votes)
Rational Me or We? 2009-03-17T13:39:29.073Z · score: 127 (135 votes)
The Costs of Rationality 2009-03-03T18:13:17.465Z · score: 34 (44 votes)
Test Your Rationality 2009-03-01T13:21:34.375Z · score: 39 (44 votes)

Comments

Comment by robinhanson on Quotes from Moral Mazes · 2019-06-05T13:21:33.405Z · score: 34 (17 votes) · LW · GW

I was once that young and naive. But I'd never heard of this book, Moral Mazes. Seems great, and I intend to read it. https://twitter.com/robinhanson/status/1136260917644185606

Comment by robinhanson on Simple Rules of Law · 2019-05-21T16:47:24.368Z · score: 4 (2 votes) · LW · GW

The CEO proposal is to fire them at the end of the quarter if the prices just before then so indicate. This solves the problem of market traders expecting later traders to have more info than they do. And it doesn't mean that the board can't fire them at other times for other reasons.
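To make the mechanism concrete, here is a minimal sketch of how such a conditional market might settle, in Python; the asset names, prices, and exact refund rule are illustrative assumptions, not details of the original proposal.

    # Toy conditional decision market for "fire the CEO at quarter's end".
    # Traders buy the stock conditional on "fired" or on "kept"; trades in
    # the branch that doesn't happen are called off and refunded (net zero).

    def settle(trades, fired, final_stock_price):
        """Net payoff per trader once the board acts and the price resolves."""
        payoffs = {}
        for trader, branch, buy_price in trades:
            realized = (branch == "fired") == fired
            gain = final_stock_price - buy_price if realized else 0.0
            payoffs[trader] = payoffs.get(trader, 0.0) + gain
        return payoffs

    # Just before quarter's end, "stock if fired" trades at 110 and "stock
    # if kept" at 95, so the rule says fire; the stock then ends at 112.
    trades = [("alice", "fired", 110.0), ("bob", "kept", 95.0)]
    print(settle(trades, fired=True, final_stock_price=112.0))
    # {'alice': 2.0, 'bob': 0.0}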

Comment by robinhanson on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T15:36:58.397Z · score: 5 (3 votes) · LW · GW

The claim that AI is vastly better at coordination seems to me implausible on its face. I'm open to argument, but will remain skeptical until I hear good arguments.

Comment by robinhanson on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T18:34:07.982Z · score: 7 (4 votes) · LW · GW

Secrecy CAN have private value. But it isn't at all clear that we are typically better off together with secrets. There are some cases, to be sure, where that is true. But there are also many cases where it is not.

Comment by robinhanson on Why didn't Agoric Computing become popular? · 2019-02-17T16:01:27.096Z · score: 18 (8 votes) · LW · GW

My guess is that the reason is close to why security is so bad: it's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take the time to do that. Similarly, it takes time to think about which parts of a system should own what and be trusted to judge what. It's easier/faster to just make a system that does things, without attending to this, even if that is very costly in the long run. When the long run arrives, the earlier players are usually gone.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:43:26.173Z · score: 5 (5 votes) · LW · GW

We have to imagine that we have some influence over the allocation of something, or there's nothing to debate here. Call it "resources" or "talent" or whatever; if there's nothing to move, there's nothing to discuss.

I'm skeptical that solving hard philosophical problems will be of much use here. Once we see the actual form of the relevant systems, we can do lots of useful work on concrete variations.

I'd call "human labor being obsolete within 10 years … 15%, and within 20 years … 35%" crazy extreme predictions, and happily bet against them.

If we look at direct economic impact, we've seen a pretty steady trend of jobs displaced by automation for at least a century, and the continuation of that past trend puts full AGI a long way off. So you need a huge, unprecedented, foom-like lump of innovation to get change that big that soon.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-17T13:33:23.333Z · score: 3 (3 votes) · LW · GW

Solving problems is mostly a matter of total resources devoted, not time devoted. Yes, some problems have intrinsic clocks, but this doesn't look like such a problem. If we will get signs of a problem looming, and can devote a lot of resources then, it is tempting to save resources today for such a future push, as we'll know a lot more then, and resources saved today become more resources when spent later.
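A toy version of that calculation, where every number (growth rate, warning lead time, the value of later knowledge) is a made-up assumption:

    # Spend safety resources now, or save them for a push once warning arrives?
    budget = 1.0                # resources available today
    growth = 0.05               # assumed annual return on saved resources
    years_until_warning = 30    # assumed lead time before the problem looms
    insight_multiplier = 4.0    # assumed value boost from knowing the actual
                                # form of the systems when the work is done

    value_now = budget
    value_later = budget * (1 + growth) ** years_until_warning * insight_multiplier

    print(f"spend now:   {value_now:.2f}")    # 1.00
    print(f"spend later: {value_later:.2f}")  # ~17.29 under these assumptions
    # Waiting wins here unless the problem's clock runs out before the warning.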

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T17:45:48.055Z · score: 8 (4 votes) · LW · GW

Can you point to a good/best argument for the claim that AGI is coming soon enough to justify lots of effort today?

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:54:37.312Z · score: 7 (4 votes) · LW · GW

It's not so much about "fast" vs. "slow" as about the chances of putting lots of resources into the problem with substantial warning. Even if things change fast, as long as you get enough warning, and resources can be moved to the problem fast enough, waiting still makes sense.

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T10:51:19.843Z · score: 4 (5 votes) · LW · GW

By "main reason for concern" I mean best arguments; I'm not trying to categorize people's motivations.

AGI isn't remotely close, and I just don't believe people who think they see signs of that. Yes for any problem that we'll eventually want to work on, a few people should work on it now just so someone is tracking the problem, ready to tell the rest of us if they see signs of it coming soon. But I see people calling for much more than that minimal tracking effort.

Most people who work in research areas call for more relative funding for their areas. So the rest of us just can't be in the habit of believing such calls. We must hold a higher standard than "people who get $ to work on this say more $ should go to this now."

Comment by robinhanson on Some disjunctive reasons for urgency on AI risk · 2019-02-16T01:54:27.559Z · score: 13 (8 votes) · LW · GW

Even if there will be problems worth working on at some point, if we will know a lot more later, and if resources today can be traded for a lot more resources later, the temptation to wait should be strong. The foom scenario has few visible indications of a problem looming, forcing one to work on the problems far ahead of time. But in scenarios with warning, lots more resources, and better tools and understanding later, waiting makes a lot more sense.

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:45:41.255Z · score: 8 (4 votes) · LW · GW

Your critique is plausible. I was never a fan of these supposedly simple interfaces.

Comment by robinhanson on Towards no-math, graphical instructions for prediction markets · 2019-01-05T17:44:58.792Z · score: 10 (5 votes) · LW · GW

There have long been lots of unexplored good ideas for interface improvements. But they need to be tested in the context of real systems and users.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-15T12:59:16.207Z · score: 7 (4 votes) · LW · GW

Yes, we only did a half dozen trials, and mostly with new players, so players were inexperienced.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:14:52.942Z · score: 9 (6 votes) · LW · GW

Note that all this analysis is based on thinking about the game, not on playing it. From my observation of game play, I'd say that price accuracy does not noticeably suffer in the endgame.

For game design, yes, it is good to include a few characters who will be excluded early, so that people attend to the story in the early period.

Comment by robinhanson on On Robin Hanson’s Board Game · 2018-09-09T13:13:19.602Z · score: 5 (3 votes) · LW · GW

If more than one person "did it", you could pay that fraction of $100 on each card. So if two did it, each card is worth $50 at the end.
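In code, the assumed rule (the $100 pot is from the comment; the function name is mine):

    def card_value(num_who_did_it, pot=100.0):
        # Each guilty character's card pays an equal share of the pot.
        return pot / num_who_did_it

    print(card_value(1))  # 100.0 with a single culprit
    print(card_value(2))  # 50.0, matching the two-culprit example above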

Comment by robinhanson on Confusions Concerning Pre-Rationality · 2018-06-05T08:55:10.909Z · score: 19 (4 votes) · LW · GW

The problem of how to be rational is hard enough that one shouldn’t expect to get good proposals for complete algorithms for how to be rational in all situations. Instead we must chip away at the problem. And one way to do that is to slowly collect rationality constraints. I saw myself as contributing by possibly adding a new one. I’m not very moved by the complaint “but what is the algorithm to become fully rational from any starting point?” as that is just too hard a problem to solve all at once. 

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-24T13:37:44.060Z · score: 3 (2 votes) · LW · GW

I disagree with the claim that "this single simple tool gives a bigger advantage on a wider range of tasks than we have seen with previous tools."

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:52:13.321Z · score: 3 (5 votes) · LW · GW

Consider the example of whether a big terror attack indicates that there has been an increase in the average rate or harm of terror attacks. You could easily say "You can't possibly claim that the big terror attack yesterday is no evidence; and if it is evidence, it is surely in the direction of the average rate/harm having increased." Technically correct, but then every other day without such a big attack is also evidence for a slow decrease in the rate/harm of attacks. Even if the rate/harm didn't change, every once in a while you should expect a big attack. This is the sense in which I'd say that finding one more big tool isn't much evidence that big tools will matter more in the future. Sure, the day when you find such a tool is itself weak evidence in that direction. But the whole recent history of that day and all the days before it may be an evidential wash.
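A toy two-hypothesis Bayesian update makes the point concrete; the daily attack probabilities below are made-up assumptions:

    # Odds of a "high attack rate" world vs a "low attack rate" world,
    # after 100 days containing exactly one attack.
    p_high = 0.02  # assumed P(attack on a given day | high-rate world)
    p_low = 0.01   # assumed P(attack on a given day | low-rate world)

    odds = 1.0               # prior odds, high:low
    odds *= p_high / p_low   # the attack day: a 2x update toward "high"
    for _ in range(99):      # each quiet day: a small update toward "low"
        odds *= (1 - p_high) / (1 - p_low)

    print(odds)  # ~0.73: a mild net update *down* over the whole window,
                 # even though the attack day alone was evidence up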

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:40:03.702Z · score: 23 (9 votes) · LW · GW

This seems to me a reasonable statement of the kind of evidence that would be most relevant.

Comment by robinhanson on What Evidence Is AlphaGo Zero Re AGI Complexity? · 2017-10-23T23:38:49.510Z · score: 6 (2 votes) · LW · GW

As I said, I'm treating it as the difference of learning N simple general tools to learning N+1 such tools. Do you think it stronger evidence than that, or do you think I'm not acknowledging how big that is?

Comment by robinhanson on Why no total winner? · 2017-10-17T21:32:13.114Z · score: 11 (3 votes) · LW · GW

Why assume AGI doesn't have problems analogous to agency problems? It will have parts of itself that it doesn't understand well, and which might go rogue.

Comment by robinhanson on Unethical Human Behavior Incentivised by Existence of AGI and Mind-Uploading · 2017-03-26T23:18:00.977Z · score: 8 (8 votes) · LW · GW

If it is the possibility of large amounts of torture that bothers you, instead of large ratios of torture experience relative to other, better experience, then any growing future should bother you, and you should just want to end civilization. But if it is ratios that concern you, then since torture isn't usually profitable, most em experience won't be torture. Even if some bad folks being rich means they could afford a lot of torture, that would still be a small fraction of total experience.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T19:13:33.547Z · score: 6 (1 votes) · LW · GW

Even if you use truth-promoting norms, their effect can be weak enough that other effects overwhelm this effect. The "rationalist community" is different in a great many ways from other communities of thought.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T13:53:01.508Z · score: 10 (5 votes) · LW · GW

I said I haven't seen this community as exceptionally accurate, and you say that you have seen that, and called my view "uncharitable". I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn't mean that to be a criticism that you or others have not worked hard enough to create such a track record. Surely you can understand why outsiders might find suspect your standard of saying you think your community is more accurate because they more often agree with your beliefs.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-28T00:50:54.596Z · score: 7 (2 votes) · LW · GW

Yes: believe fewer things, and believe them less strongly. On abstract beliefs I'm not following you. The usual motive for most people is that they don't need most abstract beliefs to live their lives.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T19:57:10.309Z · score: 10 (5 votes) · LW · GW

"charitable" seems an odd name for the tendency to assume that you and your friends are better than other people, because well it just sure seems that way to you and your friends. You don't have an accuracy track record of this group to refer to, right?

Comment by robinhanson on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T18:31:46.662Z · score: 20 (24 votes) · LW · GW

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T18:23:11.042Z · score: 2 (2 votes) · LW · GW

I meant to claim that in fact your conscious thoughts are largely optimized for good impact on the things you say.

You can of course bet on eventual outcome of conservative epistemic norms, just as you can bet on what actually happens. Not sure what else you can do to create incentives now to believe what conservative norms will eventually say.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T14:02:07.025Z · score: 11 (6 votes) · LW · GW

That only helps if your "rationalist community" in fact pushes you to more accurate reasoning. Merely giving your community that name is far from sufficient however, and in my experience the "rationalist community" is mostly that in name only.

Comment by robinhanson on If we can't lie to others, we will lie to ourselves · 2016-11-27T14:00:25.659Z · score: 12 (7 votes) · LW · GW

"the most powerful tool is adopting epistemic norms which are appropriately conservative; to rely more on the scientific method, on well-formed arguments, on evidence that can be clearly articulated and reproduced, and so on."

A simple summary: Believe Less. Hold higher standards for what is sufficient reason to believe. Of course this is in fact what most people actually do. They don't bother to hold beliefs on the kind of abstract topics on which Paul wants to hold beliefs.

"1. What my decisions are optimized for. .. 2. What I consciously believe I want."

No. 2 might be better thought of as "What my talk is optimized for." Both systems are highly optimized. This way of seeing it emphasizes that if you want to make the two results more consistent, you want to move your talk closer to action. As with bets, or other more concrete actions.

Comment by robinhanson on Superintelligence 19: Post-transition formation of a singleton · 2015-01-20T16:06:57.454Z · score: 4 (4 votes) · LW · GW

If signing a contract were all that we needed to coordinate well, we would already be coordinating as much as is useful now. We already have good strong reasons to want to coordinate for mutual benefit.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:52:05.925Z · score: 1 (1 votes) · LW · GW

Not only is this feasible, this is in fact the usual default situation in a simple market economy.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:51:20.492Z · score: 1 (1 votes) · LW · GW

Poverty doesn't require that you work for others; most people in history were poor, but were not employees. Through most of history rich people did in fact feel safe among the poor. They didn't hang around there because that made them lower status. You can only deliver universal abundance if you coordinate to strongly limit population growth. So you mean abundance for the already existing, and the worst poverty possible for the not-yet-existing.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:48:10.477Z · score: 1 (1 votes) · LW · GW

With near subsistence wages there's not much to donate, so no need to bother.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-14T19:47:02.866Z · score: 1 (1 votes) · LW · GW

Yes, the surroundings would need to be not overly distracting. But that is quite consistent with luxurious.

Comment by robinhanson on The impact of whole brain emulation · 2015-01-14T00:37:20.791Z · score: 0 (0 votes) · LW · GW

von Neumann was very smart, but I very much doubt he would have been better than everyone at all jobs if trained in those jobs. There is still comparative advantage, even among the very smartest and most capable.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:34:21.922Z · score: 3 (3 votes) · LW · GW

The idea of selecting for people willing to donate everything to an employer seems fanciful and not very relevant. In a low wage competitive economy the question would instead be if one is willing to create new copies conditional on them earning low wages. If large fractions of people are so willing then one needn't pay much selection power to get that feature.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:31:15.016Z · score: 6 (6 votes) · LW · GW

There is a difference between what we might each choose if we ruled the world, and what we will together choose as the net result of our individual choices. It is not enough that many of us share your ethical principles. We would also need to coordinate to achieve outcomes suggested by those principles. That is much, much harder.

Comment by robinhanson on Superintelligence 18: Life in an algorithmic economy · 2015-01-13T18:27:28.567Z · score: 2 (2 votes) · LW · GW

Having a low wage is just not like being a slave. The vast majority of humans who have ever lived were poor, but aside from the fact that slaves are also poor in a sense, the life of a typical poor person was not like the life of a slave. Ems might be poor in the sense of needing to work many hours to survive, but they would also have no pain, hunger, cold, sickness, grime, etc. unless they wanted them. Their physical appearance and surroundings would be what we'd see as very luxurious.

Comment by robinhanson on Open thread, January 25- February 1 · 2014-01-28T19:32:47.741Z · score: 5 (5 votes) · LW · GW

Yes: from someone's credentials in other fields, you can't tell whether they are just making stuff up on this subject or have studied it for 15 years.

Comment by robinhanson on Common sense as a prior · 2013-08-12T19:10:35.807Z · score: 1 (1 votes) · LW · GW

The overall framework is sensible, but I have trouble applying it to the most vexing cases: where the respected elites mostly just giggle at a claim and seem to refuse to even think about reasons for or against it, but instead just confidently reject it. It might seem to me that their usual intellectual standards would require that they engage in such reasoning, but the fact that they do not in fact think that appropriate in this case is evidence of something. But what?

Comment by robinhanson on Model Combination and Adjustment · 2013-07-18T20:35:21.183Z · score: 0 (0 votes) · LW · GW

If the difference is between inference from surface features vs. internal structure, then yes, of course: in either case, unless you have a very strong theory, you will probably do better to combine many weak theories. When looking at surface features, look at many different features, not just one.
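A minimal sketch of that advice with hypothetical data: averaging several weak, roughly independent estimates usually beats trusting any one of them, since independent errors partly cancel.

    import random
    random.seed(0)

    truth = 10.0
    # Nine weak "theories", each an independent noisy estimate of the truth.
    weak_estimates = [truth + random.gauss(0, 3) for _ in range(9)]

    single = weak_estimates[0]                            # trust one theory
    combined = sum(weak_estimates) / len(weak_estimates)  # combine them all

    print(abs(single - truth), abs(combined - truth))
    # The combined error is smaller in expectation (by ~1/sqrt(9) here),
    # though any single draw can go either way.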

Comment by robinhanson on Singleton: the risks and benefits of one world governments · 2013-07-05T15:13:10.205Z · score: 23 (25 votes) · LW · GW

I'm not sure of the point of outlining a research program in an area where you are not an expert and there are now many experts. First you just want to find out what the current experts think they know. Then, if you want to know more, I'd think you'd either want to ask those experts to outline a research program, or you'd want to become an expert yourself and then outline a program.

Comment by robinhanson on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-06T15:29:19.928Z · score: 19 (19 votes) · LW · GW

This post has not at all misunderstood my suggestion from long ago, though I don't think I thought about it very much at the time. I agree with the thrust of the post that a leverage factor seems to deal with the basic problem, though of course I'm also somewhat expecting more scenarios to be proposed to upset the apparent resolution soon.

Comment by robinhanson on Neil deGrasse Tyson on Cryonics · 2012-12-20T16:34:57.287Z · score: 3 (5 votes) · LW · GW

There can be a lot of redundancy within neurons as well. Just because you find causally relevant chemical densities that predict neuron states doesn't mean that there aren't other chemical densities that also predict those same states.

Comment by robinhanson on Neil deGrasse Tyson on Cryonics · 2012-12-12T15:06:47.760Z · score: 11 (11 votes) · LW · GW

No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.

Comment by robinhanson on [LINK] blog on cryonics by someone who freezes things in a cell bio lab · 2012-12-12T15:04:32.241Z · score: 2 (2 votes) · LW · GW

Um, the whole point of the blood system is to overcome the squared-area vs. cubed-volume problem. So you can cool larger things fast if you use the blood vessels to move fluid that carries heat out.
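A rough scaling sketch of that point; the constants below are made up, only the scaling matters:

    # Cooling by conduction from the surface takes time that grows roughly
    # like L^2 (thermal diffusion), while perfusion pumps coolant throughout
    # the volume, so its time is limited by flow rate, not by size.
    for length_cm in [1, 5, 20]:
        conduction_minutes = 0.5 * length_cm ** 2  # assumed constant, ~L^2
        perfusion_minutes = 2.0                    # assumed, ~size-independent
        print(f"{length_cm:>2} cm: conduction ~{conduction_minutes:.0f} min, "
              f"perfusion ~{perfusion_minutes:.0f} min")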

Comment by robinhanson on [video] Robin Hanson: Uploads Economics 101 · 2012-08-25T02:22:14.833Z · score: 2 (2 votes) · LW · GW

Yes, anything is expensive to sim in very high detail. But it isn't at all clear that, in typical settings, the level of detail you can get for say 0.1% of the cost of running a brain is usually unpleasant or disturbing.

Comment by robinhanson on [video] Robin Hanson: Uploads Economics 101 · 2012-08-10T01:16:48.785Z · score: 0 (0 votes) · LW · GW

For any given task there will be particular people you need to talk to to get it done. I expect hardware would be specialized for particular speeds, but that minds could be moved between hardware of differing speeds in order to change speeds. In general most tasks have a particular time they need to be done, with only minor rewards for doing it much sooner.