Seems to me I spent a big % of my post arguing against the rapid growth claim.
Come on, most every business tracks revenue in great detail. If customers were getting unhappy with the firm's services and rapidly switching away en masse, the firm would quickly become aware of it, and would look into the problem in great detail.
You complain that my estimating rates from historical trends is arbitrary, but you offer no other basis for estimating such rates. You only appeal to uncertainty. But there are several other assumptions required for this doomsday scenario. If all you have is logical possibility to argue for piling on several a priori unlikely assumptions, it gets hard to take that seriously.
You keep invoking the scenario of a single dominant AI that is extremely intelligent. But that only happens AFTER a single AI fooms to be much better than all other AIs. You can't invoke its super intelligence to explain why its owners fail to notice and control its early growth.
I comment on this paper here: https://www.overcomingbias.com/2022/07/cooks-critique-of-our-earliness-argument.html
That's an exponential with mean 0.7, or mean 1/0.7?
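(For reference, the ambiguity is just the two standard parameterizations of the exponential; this note is mine, not from the post I'm asking about. For rate $\lambda$,
$$f(x) = \lambda e^{-\lambda x}, \qquad \mathbb{E}[X] = \frac{1}{\lambda},$$
so a stated parameter of 0.7 gives a mean of 0.7 if it is the mean itself, or a mean of $1/0.7 \approx 1.43$ if it is the rate $\lambda$.)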
"My prior on is distributed "
I don't understand this notation. It reads to me like "103+ 5 Gy"; how is that a distribution?
It seems the key feature of this remaining story is the "coalition of AIs" part. I can believe that AIs would get powerful; what I'm skeptical about is the claim that they naturally form a coalition against us. Which is also what I object to in your prior comments. Horses are terrible at coordination compared to humans, and humans weren't built by horses and integrated into a horse society, with each human originally in the service of a particular horse.
It's not enough that AI might appear in a few decades; you also need something useful you can do about it now, compared to investing your money to have more to spend later when concrete problems appear.
I just read through your "what 2026 looks like" post, but didn't see how it is a problematic scenario. Why should we want to work ahead of time to prepare for that scenario?
In our simulations, we find it overwhelmingly likely that any such spherical volume of an alien civ would be much larger than the full moon in the sky. So no need to study distant galaxies in fine detail; look for huge spheres in the sky.
"or more likely we are an early civilization in the universe (according to Robin Hanson’s “Grabby Aliens” model) so, 2) quite possibly there are no grabby aliens populating the universe with S-Risks yet"
But our model implies that there are in fact many aliens out there right now. Just not in our backward light cone.
Aw, I still don't know which face goes with the TGGP name.
Wow, it seems that EVERYONE here has this counter argument "You say humans look weird according to this calculation, but here are other ways we are weird that you don't explain." But there is NO WAY to explain all ways we are weird, because we are in fact weird in some ways. For each way that we are weird, we should be looking for some other way to see the situation that makes us look less weird. But there is no guarantee of finding that; we may just actually be weird. https://www.overcomingbias.com/2021/07/why-are-we-weird.html
You have the date of the great filter paper wrong; it was 1998, not 1996.
Yes, a zoo hypothesis is much like a simulation hypothesis, and the data we use cannot exclude it. (Nor can they exclude a simulation hypothesis.) We choose to assume that grabby aliens change their volumes in some clearly visible way, exactly to exclude zoo hypotheses.
I'm arguing for simpler rules here overall.
Your point #1 misses the whole norm violation element. The reason it hurts if others are told about an affair is that others disapprove. That isn't why loud music hurts.
Imagine there's a law against tattoos, and I say "Yes some gang members wear them but so do many others. Maybe just outlaw gang tattoos?" You could then respond that I'm messing with edge cases, so we should just leave the rule alone.
You will allow harmful gossip, but not blackmail, because the first might be pursuing your "values", but the second is seeking to harm. Yet the second can have many motives, and is most commonly to get money. And you are focused too much on motives, rather than on outcomes.
Yup.
The sensible approach is to demand a stream of payments over time. If you reveal it to others who also demand streams, that will cut how much of a stream they are willing to pay you.
You are very much in the minority if you want to abolish norms in general.
NDAs are also legal in the case where info was known before the agreement. For example, Trump using NDAs to keep affairs secret.
"models are brittle" and "models are limited" ARE the generic complaints I pointed to.
We have lots of models that are useful even when the conclusions follow pretty directly. Such as supply and demand. The question is whether such models are useful, not if they are simple.
There are THOUSANDS of critiques out there of the form "Economic theory can't be trusted because economic theory analyses make assumptions that can't be proven and are often wrong, and conclusions are often sensitive to assumptions." Really, this is a very standard and generic critique, and of course it is quite wrong, as such a critique can be equally made against any area of theory whatsoever, in any field.
The agency literature is there to model real agency relations in the world. Those real relations no doubt contain plenty of "unawareness". If models without unawareness were failing to capture and explain a big fraction of real agency problems, there would be plenty of scope for people to try to fill that gap via models that include it. The claim that this couldn't work because such models are limited seems just arbitrary and wrong to me. So either one must claim that AI-related unawareness is of a very different type or scale from ordinary human cases in our world today, or one must implicitly claim that unawareness modeling would in fact be a contribution to the agency literature. It seems to me a mild burden of proof sits on advocates for this latter case to in fact create such contributions.
"Hanson believes that the principal-agent literature (PAL) provides strong evidence against rents being this high."
I didn't say that. This is what I actually said:
"surely the burden of 'proof' (really argument) should lie on those say this case is radically different from most found in our large and robust agency literatures."
Uh, we are talking about holding people to MUCH higher rationality standards than the ability to parse philosophical arguments.
"At its worst, there might be pressure to carve out the parts of ourselves that make us human, like Hanson discusses in Age of Em."
To be clear, while some people do claim that such things might happen in an Age of Em, I'm not one of them. Of course I can't exclude such things in the long run; few things can be excluded in the long run. But that doesn't seem at all likely to me in the short run.
You are a bit too quick to allow the reader the presumption that they have more algorithmic faith than the other folks they talk to. Yes if you are super rational and they are not, you can ignore them. But how did you come to be confident in that description of the situation?
Seems like you guys might have (or be able to create) a dataset on who makes what kind of forecasts, and who tends to be accurate or hyped re them. Would be great if you could publish some simple stats from such a dataset.
To be clear, Foresight asked each speaker to offer a topic for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, that is to say that the chance on this question seemed an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.
Foresight asked us to offer topics for participants to forecast on, related to our talks. This was the topic I offered. That is NOT the same as my making a prediction on that topic. Instead, that is to say that the chance on this question is an unusual combination of verifiable in a year and relevant to the chances on other topics I talked about.
If you specifically want models with "bounded rationality", why not add in that search term: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&as_vis=1&q=bounded+rationality+principal+agent&btnG=
See also:
https://onlinelibrary.wiley.com/doi/abs/10.1111/geer.12111
https://www.mdpi.com/2073-4336/4/3/508
https://etd.ohiolink.edu/!etd.send_file?accession=miami153299521737861&disposition=inline
The % of world income that goes to computer hardware & software, and the % of useful tasks that are done by them.
Most models have an agent who is fully rational, but I'm not sure what you mean by "principal is very limited".
I'd also want to know that ratio X for each of the previous booms. There isn't a discrete threshold, because analogies go on a continuum from more to less relevant. An unusually high X would be noteworthy and relevant, but not make prior analogies irrelevant.
The literature is vast, but this gets you started: https://scholar.google.com/scholar?hl=en&as_sdt=0%2C47&q=%22principal+agent%22&btnG=
My understanding is that this progress looks much less like a trend deviation when you scale it against the hardware and other resources devoted to these tasks. And of course in any larger area there are always subareas which happen to progress faster. So we have to judge how large the faster-progressing subarea is, and whether that size is unusually large.
Life extension also suffers from the 100,000 fans hype problem.
I'll respond to comments here, at least for a few days.
Markets can work fine with only a few participants. But they do need sufficient incentives to participate.
"of all the hidden factors which caused the market consensus to reach this point, which, if any of them, do we have any power to affect?" A prediction market can only answer the question you ask it. You can use a conditional market to ask if a particular factor has an effect on an outcome. Yes of course it will cost more to ask more questions. If there were a lot of possible factors, you might offer a prize to whomever proposes a factor that turns out to have a big effect. Yes it would cost to offer such a prize, because it could be work to find such factors.
I was once that young and naive. But I'd never heard of this book Moral Mazes. Seems great, and I intend to read it. https://twitter.com/robinhanson/status/1136260917644185606
The CEO proposal is to fire them at the end of the quarter if the prices just before then so indicate. This solves the problem of market traders expecting later traders to have more info than they do. And it doesn't mean that the board can't fire them at other times for other reasons.
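A minimal sketch of that decision rule, with made-up numbers (the real version would use the conditional market prices just before quarter end):

```python
# Toy version of the fire-the-CEO decision market rule.
# Two markets estimate company value: one pays off only if the CEO is fired at
# quarter end, the other only if the CEO is retained. Numbers are made up.

price_if_fired = 103.0    # market estimate of company value given "fire"
price_if_retained = 98.0  # market estimate of company value given "retain"

# Just before quarter end, the board follows whatever the prices indicate.
decision = "fire" if price_if_fired > price_if_retained else "retain"
print("board decision at quarter end:", decision)
```

Committing to act on the prices at a fixed time is what addresses the worry about later traders having more info.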
The claim that AI is vastly better at coordination seems to me implausible on its face. I'm open to argument, but will remain skeptical until I hear good arguments.
Secrecy CAN have private value. But it isn't at all clear that we are typically together better off with secrets. There are some cases, to be sure, where that is true. But there are also so many cases where it is not.
My guess is that the reason is close to why security is so bad: it's hard to add security to an architecture that didn't consider it up front, and most projects are in too much of a rush to take the time to do that. Similarly, it takes time to think about what parts of a system should own what and be trusted to judge what. It's easier/faster to just make a system that does things, without attending to this, even if that is very costly in the long run. When the long run arrives, the earlier players are usually gone.
We have to imagine that we have some influence over the allocation of something, or there's nothing to debate here. Call it "resources" or "talent" or whatever, if there's nothing to move, there's nothing to discuss.
I'm skeptical that solving hard philosophical problems will be of much use here. Once we see the actual form of the relevant systems, we can do lots of useful work on concrete variations.
I'd call "human labor being obsolete within 10 years … 15%, and within 20 years … 35%" crazy extreme predictions, and happily bet against them.
If we look at direct economic impact, we've had a pretty steady trend for at least a century of jobs displaced by automation, and the continuation of past trend puts full AGI a long way off. So you need a huge unprecedented foom-like lump of innovation to have change that big that soon.