Posts

Is this a good way to bet on short timelines? 2020-11-28T12:51:07.516Z
Persuasion Tools: AI takeover without AGI or agency? 2020-11-20T16:54:01.306Z
How Roodman's GWP model translates to TAI timelines 2020-11-16T14:05:45.654Z
How can I bet on short timelines? 2020-11-07T12:44:20.360Z
What considerations influence whether I have more influence over short or long timelines? 2020-11-05T19:56:12.147Z
AI risk hub in Singapore? 2020-10-29T11:45:16.096Z
The date of AI Takeover is not the day the AI takes over 2020-10-22T10:41:09.242Z
If GPT-6 is human-level AGI but costs $200 per page of output, what would happen? 2020-10-09T12:00:36.814Z
Where is human level on text prediction? (GPTs task) 2020-09-20T09:00:28.693Z
Forecasting Thread: AI Timelines 2020-08-22T02:33:09.431Z
What if memes are common in highly capable minds? 2020-07-30T20:45:17.500Z
What a 20-year-lead in military tech might look like 2020-07-29T20:10:09.303Z
Does the lottery ticket hypothesis suggest the scaling hypothesis? 2020-07-28T19:52:51.825Z
Probability that other architectures will scale as well as Transformers? 2020-07-28T19:36:53.590Z
Lessons on AI Takeover from the conquistadors 2020-07-17T22:35:32.265Z
What are the risks of permanent injury from COVID? 2020-07-07T16:30:49.413Z
Relevant pre-AGI possibilities 2020-06-20T10:52:00.257Z
Image GPT 2020-06-18T11:41:21.198Z
List of public predictions of what GPT-X can or can't do? 2020-06-14T14:25:17.839Z
Preparing for "The Talk" with AI projects 2020-06-13T23:01:24.332Z
Reminder: Blog Post Day III today 2020-06-13T10:28:41.605Z
Blog Post Day III 2020-06-01T13:56:10.037Z
Predictions/questions about conquistadors? 2020-05-22T11:43:40.786Z
Better name for "Heavy-tailedness of the world?" 2020-04-17T20:50:06.407Z
Is this viable physics? 2020-04-14T19:29:28.372Z
Blog Post Day II Retrospective 2020-03-31T15:03:21.305Z
Three Kinds of Competitiveness 2020-03-31T01:00:56.196Z
Reminder: Blog Post Day II today! 2020-03-28T11:35:03.774Z
What are the most plausible "AI Safety warning shot" scenarios? 2020-03-26T20:59:58.491Z
Could we use current AI methods to understand dolphins? 2020-03-22T14:45:29.795Z
Blog Post Day II 2020-03-21T16:39:04.280Z
What "Saving throws" does the world have against coronavirus? (And how plausible are they?) 2020-03-04T18:04:18.662Z
Blog Post Day Retrospective 2020-03-01T11:32:00.601Z
Cortés, Pizarro, and Afonso as Precedents for Takeover 2020-03-01T03:49:44.573Z
Reminder: Blog Post Day (Unofficial) 2020-02-29T15:10:17.264Z
Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization" 2020-02-27T18:10:11.129Z
What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? 2020-02-26T14:19:27.197Z
Blog Post Day (Unofficial) 2020-02-18T19:05:47.140Z
Simulation of technological progress (work in progress) 2020-02-10T20:39:34.620Z
A dilemma for prosaic AI alignment 2019-12-17T22:11:02.316Z
A parable in the style of Invisible Cities 2019-12-16T15:55:06.072Z
Why aren't assurance contracts widely used? 2019-12-01T00:20:21.610Z
How common is it for one entity to have a 3+ year technological lead on its nearest competitor? 2019-11-17T15:23:36.913Z
Daniel Kokotajlo's Shortform 2019-10-08T18:53:22.087Z
Occam's Razor May Be Sufficient to Infer the Preferences of Irrational Agents: A reply to Armstrong & Mindermann 2019-10-07T19:52:19.266Z
Soft takeoff can still lead to decisive strategic advantage 2019-08-23T16:39:31.317Z
The "Commitment Races" problem 2019-08-23T01:58:19.669Z
The Main Sources of AI Risk? 2019-03-21T18:28:33.068Z

Comments

Comment by daniel-kokotajlo on In Addition to Ragebait and Doomscrolling · 2020-12-03T21:29:42.377Z · LW · GW

Scornporn?

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-12-02T17:20:18.371Z · LW · GW

Good point about the colonization. One thing I was surprised to learn when I researched the conquistadors is that Muslim merchants, fleets, armies, and rulers had penetrated into India, Indonesia, all around the Indian Ocean, and I think even into China by the time the Portuguese showed up. Malacca was ruled by a Muslim, for example. And yeah, no doubt this led to a lot of resources flowing back towards the Middle East.

How much damage did the Mongols do to Muslim science? My vague guess would be: quite a lot? Perhaps this is also relevant.

Comment by daniel-kokotajlo on AI Safety "Success Stories" · 2020-12-02T15:36:06.686Z · LW · GW

I'm surprised this post didn't get more comments and spark more further research. Rereading it, I think it's both an excellent overview/distillation, and also a piece of strategy research in its own right. I wish there were more things like this. I think this post deserves to be expanded into a book or website and continually updated and refined.

Comment by daniel-kokotajlo on The Parable of Predict-O-Matic · 2020-12-02T15:28:15.906Z · LW · GW

This piece of fiction is good sci-fi. It is fun to read and it makes you think, in this case about some really important issues in AI safety and AI alignment.

Comment by daniel-kokotajlo on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2020-12-02T15:22:44.100Z · LW · GW

This post improved my understanding of how censorship, polarization, groupthink, etc. work. I also love the slogan "Blatant cherry-picking is the best kind."

Comment by daniel-kokotajlo on No nonsense version of the "racial algorithm bias" · 2020-12-02T15:19:49.444Z · LW · GW

This post does what it says in the title. With diagrams! It's not about AI safety or anything, but it's a high-quality explainer on a much-discussed topic that I'd be happy to link people to.

Comment by daniel-kokotajlo on Rule Thinkers In, Not Out · 2020-12-02T14:51:37.213Z · LW · GW

This post seems like good life advice for me and people like me, when taken with appropriate caution. It's well-written too, of course.

Comment by daniel-kokotajlo on Thoughts on Human Models · 2020-12-02T14:40:40.391Z · LW · GW

This post raises awareness about some important and neglected problems IMO. It's not flashy or mind-blowing, but it's solid.

Comment by daniel-kokotajlo on Humans Who Are Not Concentrating Are Not General Intelligences · 2020-12-02T14:36:04.523Z · LW · GW

I keep thinking about the title (/central claim) of this post. I'm not sure it's true, but it's given me a lot to think about. I think this post is useful for understanding GPT etc.

Comment by daniel-kokotajlo on Understanding “Deep Double Descent” · 2020-12-02T14:14:50.264Z · LW · GW

I'm no ML expert, but thanks to this post I feel like I have a basic grasp of some important ML theory. (It's clearly written and has great graphs.) This is a big deal because this understanding of deep double descent has shaped my AI timelines to a noticeable degree.

Comment by daniel-kokotajlo on Programmers Should Plan For Lower Pay · 2020-12-02T12:57:20.480Z · LW · GW

I'd be fascinated to hear how this went. Well done, Randomsong, and please let us know what happened!

Comment by daniel-kokotajlo on The LessWrong 2018 Book is Available for Pre-order · 2020-12-02T12:04:12.247Z · LW · GW

This looks awesome! Can't wait to get mine.

Is there something similar for The Codex? On Amazon I see a physical collection of Slate Star Codex essays, but it has poor reviews saying it's just a scrape of the website without images. Is it even official?

Comment by daniel-kokotajlo on The LessWrong 2018 Book is Available for Pre-order · 2020-12-02T11:59:36.732Z · LW · GW

I think the pre-order link is broken? It takes me to a LW page saying "No comment found".

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-12-01T20:34:46.742Z · LW · GW

See, if they both had engineering cultures and proto-capitalism, that seems like evidence for the "Because colonialism" hypothesis.

But I do think the "never really unified" hypothesis is intriguing. After all, the Chinese not only destroyed their own treasure fleet but basically banned maritime trade and sent the army to depopulate their own coastline to 20 km or so inland, IIRC, because of the misguided policy decisions of the central government. No central government, no misguided policy decisions applied to entire civilizations.

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-12-01T20:30:01.984Z · LW · GW

Yeah, good point, maybe it was something like "Will to explore and colonize" that was the most important variable, even more important than the ships+navigation tech. Or maybe it was a more generic tech advantage that made exploration and colonization cheaper and more profitable for Europeans than for the Chinese or Arabs.

I think the ships+navigation tech is definitely worth mentioning at least, because it was necessary and not easy to acquire. And Europeans were certainly disproportionately good at both at the time, as far as I can tell. I know their ships were (in the relevant ways) slightly superior to the ships in the Indian Ocean in 1500, and while I haven't looked this up, I'd be willing to bet that their navigation tech (and therefore their ability to cross the Pacific and Atlantic) was superior to that of the Chinese. The Polynesians had excellent navigation tech, but tiny ships and insufficient military or economic tech to exploit this advantage. No one else comes close to those groups as far as I know.

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-12-01T06:44:33.017Z · LW · GW

OK, thanks -- this is the sort of argument I was looking for, and it is updating me.

Do you think there's a common cause between colonialism and the IR? Say, science + capitalism? Or maybe simply engineering culture (producing steam engines, and before that, ships + navigation)?

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-11-30T20:49:13.753Z · LW · GW

OK, thanks. I find it hard to take seriously the idea that the IR caused colonialism, since colonialism happened first (just look at the world in 1750!). Maybe the idea is that there was some underlying advantage Europe had which caused both colonialism and the IR?

I agree that maybe western culture is part of the explanation for why colonialism happened in the West more than it did elsewhere. But I think having good ships and navigation tech is a bigger part of the explanation.

I like the point that resources in general don't seem to cause technological growth. Russia, North Korea, etc. Vaniver mentions China below; maybe a better example would be Mongolia, which suddenly ruled much of the known world after Genghis Khan but didn't spark an IR. (Though maybe it did spark a bunch of new tech developments? Idk, would be interested to hear.)

FWIW, my current view is something like: "WEIRD culture helped science and market institutions develop, to a mildly strong extent, though not dramatically more than in other places like China; then the Europeans lucked into some really good ships & navigation tech (and kings eager to use them, unlike some emperors I could mention) and started sailing around a lot, and then this spurred more market institutions and more science, creating a feedback loop / snowball effect. In this story, WEIRD culture is important, but it's the ships+navigation+kings that's the most important thing." I'm no historian though and would love to hear criticisms of this take.

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-11-30T20:36:09.914Z · LW · GW

The question is whether the IR would have happened in China if they, and not the Europeans, controlled the world's oceans, plantations, and mines. (And by that I mean, imagine if in the 1700s the Americas were all Chinese colonies, if the Indian and Pacific and Atlantic oceans were controlled by Chinese fleets and port-forts, if kingdoms all along the coast of India and Arabia and Africa and Europe swore fealty to China instead of Europe... In this scenario, would the IR still have happened in Britain?)

Yeah, China was rich, but plenty of rich places failed to generate IRs. The unique thing about Europe prior to the IR may have been its WEIRDness... but it also may have been the fact that it controlled so much of the world at the time. Why would this help? Well, maybe having a glut of resources for a relatively fixed labor pool raised GDP per capita a bunch and incentivized labor-saving devices like windmills and watermills and eventually steam engines. Maybe all the oceanic trade made for a robust free-ish market and spurred the development of good financial instruments and institutions (in other words, maybe what makes capitalism work so well was particularly present due to all the oceanic trade).

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-11-30T11:56:47.389Z · LW · GW

Yes, this is further support for the "Because colonialism" theory. Maybe there's a nearby possible world where the Emperor got really excited about exploration and colonization, and sent the fleet out again and again instead of burning it, and then historians in the 2000s write big books about why the Industrial Revolution happened first in the East because of Confucian values.

Comment by daniel-kokotajlo on Book review: WEIRDest People · 2020-11-30T09:43:37.539Z · LW · GW

I'm surprised that none of the books on your list explaining why the IR happened in the West and not China said "Because colonialism." Look at the world in 1700, just prior to the IR: maybe China was economically and technologically advanced, but it didn't control the world's oceans, plantations, and mines. Surely there are books out there arguing for this theory. Have you read any of them? What do you think about this theory? Do the books you mention consider it and rebut it?

(I've heard people say that GDP per capita was higher in the West before colonialism even began, and use this as a rebuttal of the idea that colonialism was the cause of the IR. Is this it? To be really convincing, I'd like to see some sort of analysis showing that the IR started in Britain even though Britain hadn't yet begun to benefit much from colonialism by the time it started. Or something like that.)

(Or is the idea that colonialism did cause the IR, but colonialism was in turn caused by technological advantages that were the result of WEIRDness?)

Comment by daniel-kokotajlo on How can I bet on short timelines? · 2020-11-30T09:11:35.709Z · LW · GW

Nah. Once it's clear we are all doomed, I'll probably get by just fine--it's unlikely that I'll have spent literally all my money and social capital by then. And if I have, it won't matter much anyway.

Comment by daniel-kokotajlo on How can I bet on short timelines? · 2020-11-29T14:52:26.917Z · LW · GW

Yes.

Comment by daniel-kokotajlo on Is this a good way to bet on short timelines? · 2020-11-29T08:24:24.238Z · LW · GW

Your assumptions are mostly correct. Thanks for this feedback, it makes me more encouraged to propose options 1 and 2 to various people. I agree with your criticism of option 3.

Comment by daniel-kokotajlo on Is this a good way to bet on short timelines? · 2020-11-28T16:54:09.860Z · LW · GW

Yeah. I'm hopeful that versions of them can be found which are appealing to both me and my bet-partners, even if we have to judge using our intuitions rather than math.

I'm working on a big post (or sequence) outlining my views on timelines, but quantitatively they look like this black line here:

[Graph: Daniel Kokotajlo's November 2020 TAI Timelines]

My understanding is that most people have timelines that are more like the blue/bottom line on this graph.

Comment by daniel-kokotajlo on Is this a good way to bet on short timelines? · 2020-11-28T14:28:28.180Z · LW · GW

Thanks. I don't think that condition holds, alas. I'm trying to optimize for making the singularity go well, and don't care much (relatively speaking) about my level of influence afterwards. If you would like to give me some of your influence now, in return for me giving you some of my influence afterwards, perhaps we can strike a deal!

Comment by daniel-kokotajlo on Luna Lovegood and the Chamber of Secrets - Part 1 · 2020-11-28T12:58:20.775Z · LW · GW

I love the focus on epistemology! What a cool character.

Comment by daniel-kokotajlo on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T15:38:24.521Z · LW · GW

Facebook is a single system and therefore not subject to Moloch. Concretely, yeah, the algorithm could start manipulating election results to get more power, but if that happened it would be an ordinary AI alignment failure rather than Moloch.

The Bitcoin example seems more like Moloch to me, but no more so than (and in the same way as) the market economy already is: people who build more infrastructure get more money, etc. We already know that in the long run the market economy leads to terrible outcomes due to Moloch.

Comment by daniel-kokotajlo on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T14:01:48.303Z · LW · GW

I think I can see how markets and money-making are like this, but Facebook and Bitcoin? Can you elaborate?

Comment by daniel-kokotajlo on Continuing the takeoffs debate · 2020-11-24T09:20:20.595Z · LW · GW

I'm confused about why you only updated mildly away from slow takeoff. It seems that you've got a pretty good argument against slow takeoff here:

1. Are there simple changes to chimps (or other animals) that would make them much better at accumulating culture?
2. Will humans continually pursue all simple yet powerful changes to our AIs?

Seems like if the answer to the first question is No, then there really is some relatively sharp transition to much more powerful culture-accumulating capabilities that humans crossed when they evolved from chimp-like creatures. Thus, our default assumption should be that as we train bigger and bigger neural nets on more and more data, there will also be some relatively sharp transition. In other words, Yudkowsky's argument is correct.

Seems like if the answer to the second question is No, then Paul's disanalogy between evolution and AI researchers is also wrong; both evolution and AI researchers are shoddy optimizers that sometimes miss things, etc. So Yudkowsky's argument is correct.

Now, you put 50% on the first answer being No and 70% on the second answer being No. So shouldn't you have something like 85% credence that Paul is wrong and Yudkowsky's argument is correct? And isn't that a fairly big update against slow takeoff?
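
A quick check of that 85%, assuming the two answers are independent: the probability that at least one answer is No is

$$1 - P(\text{both Yes}) = 1 - (1 - 0.5)(1 - 0.7) = 1 - 0.5 \times 0.3 = 0.85.$$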

Maybe the idea is that you are meta-uncertain, unsure you are reasoning about this correctly, etc.? Or maybe the idea is that Yudkowsky's argument could easily be wrong for other reasons than the ones Paul gave? Fair enough.

Comment by daniel-kokotajlo on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-24T08:56:35.353Z · LW · GW

Huh. No wonder the CIA thought this could be used for mind control.

Comment by daniel-kokotajlo on Daniel Kokotajlo's Shortform · 2020-11-24T07:05:19.509Z · LW · GW

Yeah, probably not. It would need to be an international agreement I guess. But this is true for lots of proposals. On the bright side, you could maybe tax the chip manufacturers instead of the AI projects? Idk.

Maybe one way it could be avoided is if it came packaged with loads of extra funding for safe AGI research, so that overall it is still cheapest to work from the US.

Comment by daniel-kokotajlo on AGI Predictions · 2020-11-22T10:42:01.516Z · LW · GW

FWIW, I made these judgments quickly and intuitively and thus could easily have just made a silly mistake. Thank you for pointing this out.

So, what do I think now, reflecting a bit more?

--The 7% judgment still seems correct to me. I feel pretty screwed in a world where our entire community stops thinking about this stuff. I think it's because of Yudkowskian pessimism combined with the heavy-tailed nature of impact and research. A world without this community would still be a world where people put some effort into solving the problem, but there would be less effort, by less capable people, and it would be more half-hearted/not directed at actually solving the problem/not actually taking the problem seriously.

--The other judgment? Maybe I'm too optimistic about the world where we continue working. But idk, I am rather impressed by our community and I think we've been making steady progress on all our goals over the last few years. Moreover, OpenAI and DeepMind seem to be taking safety concerns mildly seriously due to having people in our community working there. This makes me optimistic that if we keep at it, they'll take it very seriously, and that would be great.

Comment by daniel-kokotajlo on AGI Predictions · 2020-11-22T10:26:58.857Z · LW · GW

Thank you for asking this question and for giving that break-down. I was wondering something similar. I am not an AI scientist but DL seems like a very big deal to me, and thus I was surprised that so many people seemed to think we need more insights on that level. My charitable interpretation is that they don't think DL is a big deal.

Comment by daniel-kokotajlo on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-21T14:08:18.118Z · LW · GW

I think I mostly agree with you about the long run, but I think we probably have more short-term hurdles to overcome before we even make it to that point. I will say that I'm optimistic that we haven't yet thought of all the ways advances in tech will help collective epistemology rather than hinder it. I notice you didn't mention debate; I am not confident debate will work, but it seems like maybe it will.

In the short run, well, there's also debate I guess. And the internet making conversations recorded by default and easily findable by everyone probably worked in favor of collective epistemology. Plus there is Wikipedia, etc. I think the internet in general has lots of things in it that help collective epistemology... it just also has things that hurt, and recently I think the balance is shifting in a negative direction. But I'm optimistic that maybe the balance will shift back. Maybe.

Comment by daniel-kokotajlo on Persuasion Tools: AI takeover without AGI or agency? · 2020-11-21T07:59:02.505Z · LW · GW

Yes, this is one of the examples I had in mind of "Feeders."

Comment by daniel-kokotajlo on Daniel Kokotajlo's Shortform · 2020-11-20T11:10:38.329Z · LW · GW

Another cool thing about this tax is that it would automatically counteract decreases in the cost of compute. Say we set the tax at a fixed dollar amount per unit of compute, equal to 10% of the current cost. Then when the next generation of chips comes online and the price drops by an order of magnitude, the tax will automatically be 100% of the cost. Then when the next generation comes online, the tax will be 1000%.

This means that we could make the tax basically nothing even for major corporations today, and only start to pinch them later.
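
To make the dynamic concrete, here's a toy calculation (all numbers are illustrative assumptions, not a real proposal):

```python
# Toy sketch of the dynamic described above: a compute tax fixed in dollar
# terms automatically becomes a larger *fraction* of the price as compute
# gets cheaper. All numbers are illustrative assumptions.

initial_price = 1.0         # today's price per unit of compute (normalized)
tax = 0.10 * initial_price  # tax fixed in dollars at 10% of today's price

price = initial_price
for generation in range(4):
    effective_rate = tax / price
    print(f"chip generation {generation}: price {price:g}, "
          f"effective tax rate {effective_rate:.0%}")
    price /= 10             # assume each generation cuts prices 10x
```

This prints 10%, 100%, 1000%, 10000%: negligible at today's prices, and it only starts to pinch as compute gets cheaper.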

Comment by daniel-kokotajlo on Daniel Kokotajlo's Shortform · 2020-11-19T13:29:39.848Z · LW · GW

Maybe a tax on compute would be a good and feasible idea?

--Currently the AI community is mostly resource-poor academics struggling to compete with a minority of corporate researchers at places like DeepMind and OpenAI with huge compute budgets. So maybe the community would mostly support this tax, as it levels the playing field. The revenue from the tax could be earmarked to fund "AI for good" research projects. Perhaps we could package the tax with additional spending for such grants, so that overall money flows into the AI community, whilst reducing compute usage. This will hopefully make the proposal acceptable and therefore feasible.

--The tax could be set so that it is basically 0 for everything except AI projects above a certain size threshold, and then it's prohibitive. To some extent this happens naturally, since compute spending is normally measured on a log scale: a tax that is 1000% of the cost of compute won't be a big deal for academic researchers spending $100 or so per experiment (Oh no! Now I have to spend $1,100! No big deal, I'll fill out an expense form and bill it to the university) but it would be prohibitive for a corporation trying to spend a billion dollars to make GPT-5. And the tax can also have a threshold such that only big-budget training runs get taxed at all, so that academics are completely untouched by the tax, as are small businesses and big businesses making AI without the use of massive scale. (See the toy sketch after this list.)

--The AI corporations and most of all the chip manufacturers would probably be against this. But maybe this opposition can be overcome.
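
Here's a minimal sketch of the threshold version from the second bullet; the threshold and rate are made-up illustrative parameters:

```python
# Hypothetical threshold compute tax: zero below a size threshold,
# steep above it. THRESHOLD and RATE are assumed, made-up parameters.

THRESHOLD = 10_000_000  # assumed cutoff: compute spending above $10M is taxed
RATE = 10.0             # assumed 1000% rate on the amount above the threshold

def compute_tax(compute_spend: float) -> float:
    """Tax only the portion of a compute bill that exceeds the threshold."""
    taxable = max(0.0, compute_spend - THRESHOLD)
    return RATE * taxable

for spend in [1e2, 1e5, 1e7, 1e9]:
    print(f"spend ${spend:,.0f} -> tax ${compute_tax(spend):,.0f}")
```

A $100 experiment and a $100,000 project pay nothing; a $1B training run pays about $9.9B in tax, which is roughly the prohibitive regime described above.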

Comment by daniel-kokotajlo on Daniel Kokotajlo's Shortform · 2020-11-19T13:20:42.609Z · LW · GW

The other day I heard this anecdote: several years ago, someone's friend was dismissive of AI risk concerns, thinking that AGI was very far in the future. When pressed about what it would take to change their mind, they said their fire alarm would be AI solving Montezuma's Revenge. Well, now it's solved. What do they say? Nothing; if they noticed, they didn't say so. Probably if they were pressed on it they would say they were wrong before to call that their fire alarm.

This story fits with the worldview expressed in "There's No Fire Alarm for AGI." I expect this sort of thing to keep happening well past the point of no return.

Comment by daniel-kokotajlo on Propinquity Cities So Far · 2020-11-17T13:20:32.435Z · LW · GW

This seems like a plan which, if it works, will start to pay off at least a decade from now, probably two or three. Does this assessment seem right to you?

Comment by daniel-kokotajlo on How Roodman's GWP model translates to TAI timelines · 2020-11-17T11:33:27.806Z · LW · GW

Thanks for the feedback. I'll think about it. To be honest I feel like I have higher priorities right now, but going forward I'll update more towards doing recaps in my posts.

Comment by daniel-kokotajlo on How Roodman's GWP model translates to TAI timelines · 2020-11-16T19:07:19.831Z · LW · GW

Yes, though as a nitpick I don't think the black line is the singularity that got cancelled; that one was supposed to happen in 2020 or so, and as you can see, the black line diverges from history well before 1950.

Comment by daniel-kokotajlo on What considerations influence whether I have more influence over short or long timelines? · 2020-11-16T05:45:55.395Z · LW · GW

OK, sent you a PM.

Comment by daniel-kokotajlo on What considerations influence whether I have more influence over short or long timelines? · 2020-11-14T14:01:19.411Z · LW · GW

I think I mostly agree with you about innovation, but (a) I think that building AI will increasingly be more like building a bigger airport or dam than like inventing something new (resources are the main constraint, not ideas; happy to discuss this further), (b) I think that things in the USA could deteriorate, eating away at the advantage the USA has, and (c) I think algorithmic innovations created in the USA will make their way to China in less than a year on average, through various means.

Your model of influence is interesting, and different from mine. Mine is something like: "For me to positively influence the world, I need to produce ideas which then spread through a chain of people to someone important (e.g. someone building AI, or deciding whether to deploy AI). I am separated from important people in the USA by fewer degrees of separation, and moreover the links are much stronger (e.g. my former boss lives in the same house as a top researcher at OpenAI), compared to important people in China. Moreover, it's just inherently more likely that my ideas will spread in the US network than in the Chinese network, because my ideas are in English, etc. So I'm orders of magnitude more likely to have a positive effect in the USA than in China. (But, in the long run, there'll be fewer important people in the USA, and they'll be more degrees of separation away from me, and a greater number of poseurs will be competing for their attention, so this difference will diminish.)" Mine seems more intuitive/accurate to me so far.

Comment by daniel-kokotajlo on Daniel Kokotajlo's Shortform · 2020-11-14T12:03:09.190Z · LW · GW

4 is correct. :/

Comment by daniel-kokotajlo on A Correspondence Theorem in the Maximum Entropy Framework · 2020-11-14T07:31:18.156Z · LW · GW

OK, sounds good.

I'm not sure either, but it seems true to me. Here goes an intuition-conveying attempt... First, the question of what counts as your data seems like a parameter that must be pinned down one way or another; as you mention, there are clearly wrong ways to do it, and meanwhile it's an open philosophical controversy, so on those grounds alone it seems plausibly relevant to building an aligned AI, at least if we are doing it in a principled way rather than through prosaic methods (i.e. doing an automated search for it). Second, one's views on what sorts of theories fit the data depend on what you think your data is. Disputes about consciousness often come down to this, I think. If you want your AI to be physicalist rather than idealist or Cartesian dualist, you need to give it the corresponding notion of data. And what kind of physicalist? Etc. Or you might want it to be uncertain and engage in philosophical reasoning about what counts as its data... which also sounds like something one has to think about; it doesn't come for free when building an AI. (It does come for free if you are searching for an AI.)

Comment by daniel-kokotajlo on What considerations influence whether I have more influence over short or long timelines? · 2020-11-13T20:38:01.590Z · LW · GW

OK, cool. Well, I'm still a bit confused about why my status matters for this--it's relative influence that matters, not absolute influence. Even though my absolute influence may be low, it seems higher in the US than in Asia, and thus higher in short-timelines scenarios than long-timelines scenarios. Or so I'm thinking. (Because, as you say, my influence flows through the community.)

You might be right about the long game thing. I agree that we'll learn more and grow more in size and wealth over time. However, I think (a) the levers of the world will shift away from the USA, (b) the levers of the world will shift away from OpenAI and DeepMind and towards more distributed giant tech companies and government projects advised by prestigious academics (in other words, the usual centers of power and status will have more control over time; the current situation is an anomaly), and (c) various other things might happen that effectively impose a discount rate.

So I don't think the two ways of looking at the rationalist community are in conflict. They are both true. It's just that I think considerations a+b+c outweigh the improvement in knowledge, wealth, size etc. consideration.

Comment by daniel-kokotajlo on A Correspondence Theorem in the Maximum Entropy Framework · 2020-11-13T08:59:18.640Z · LW · GW

On policy implications: I think that a new theory almost always generates at least some policy implications. For example, relativity vs. Newton changes how we design rockets and satellites. Closer to home, multiverse theory opens up the possibility of (some kinds of) acausal trade. I think "it all adds up to normality" is something that shouldn't be used to convince yourself that a new theory probably has the same implications; rather, it's something that should be used to convince yourself that the new theory is incorrect, if it seems to add up to something extremely far from normal, like paralysis or fanaticism. If it adds up to something non-normal but not that non-normal, then it's fine.

I brought up those people as an example of someone you probably disagree with. My purpose was to highlight that choices need to be made about what your data is, and different people make them differently. (For an example closer to home, Solomonoff induction makes them differently than you do, I predict.) This seems to me like the sort of thing one should think about when designing an AI one hopes to align. Obviously if you are just going for capabilities rather than alignment, you can probably get away with not thinking hard about this question.

Comment by daniel-kokotajlo on What considerations influence whether I have more influence over short or long timelines? · 2020-11-13T06:27:41.810Z · LW · GW

Thanks. The nonexistence of warning shots is not in my control, but neither is the existence of a black hole headed for Earth. I'm justified in acting as if there isn't a black hole, because if there is, we're pretty screwed anyway. I feel like maybe something similar is true (though to a lesser extent) of warning shots, but I'm not sure. If we have a 1% chance of success without warning shots and a 10% chance with warning shots, then I probably increase our overall chance of success more if I focus on warning-shot scenarios.
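
To illustrate with toy numbers: if $w$ is the (assumed) probability that we get warning shots, then using my 1% and 10% figures,

$$P(\text{success}) = w \cdot 0.10 + (1-w) \cdot 0.01,$$

so unless $w$ is quite small, most of the success probability lives in the warning-shot worlds, and marginal effort there plausibly buys more.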

Rudeness no problem; did I come across as arrogant or something?

I agree that that's the major variable. And that's what I had in mind when I said what I said: It seems to me that this community has more influence in short-timeline worlds than long-timeline worlds. Significantly more. Because long-timeline worlds involve AI being made by the CCP or something. But maybe I'm wrong about that! You seem to think that long-timeline worlds involve someone like you coming up with a new paradigm, and if that's true, then yeah maybe it'll still happen in the Bay after all. Seems somewhat plausible to me.

I agree that value of information is huge.

Comment by daniel-kokotajlo on What considerations influence whether I have more influence over short or long timelines? · 2020-11-12T11:43:51.083Z · LW · GW

See, this is an important consideration for me! Currently I am unsure what the balance is. Here are some reasons to think "we" have more influence over short timelines:

--I think takeoff is more likely to be fast the longer the timelines, because there's more hardware overhang and more probability that some new paradigm shift or insight precipitated the advance in AI capabilities. And with fast takeoff we will have fewer warning shots, and warning shots are our best hope, I think.

--The longer it takes for TAI to arrive, the higher the chance that it gets built in Asia rather than the West, I think. And I for one have much more influence over the West.

If you can convince me that the balance of considerations favors me working on long-timelines plans (relative to my credences) I would be very grateful.

Comment by daniel-kokotajlo on A Correspondence Theorem in the Maximum Entropy Framework · 2020-11-12T11:19:36.852Z · LW · GW

This might be a good time to talk about different ways "it all adds up to normality" is interpreted.

I sometimes hear people use it in a stronger sense, to mean not just that the new theory must make the same successful predictions but also that the policy implications are mostly the same. E.g. "Many worlds has to add up to normality, so one way or another it still makes sense for us to worry about death, try to prevent suffering, etc." Correct me if I'm wrong, but this sort of thing isn't entailed by your proof, right?

There's also the issue of what counts as the data that the new theory needs to correctly predict. Some people think that "This is a table, damn it! Not a simulated table!" is part of their data that theories need to account for. What do you say to them?