Posts

Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z · score: 5 (2 votes)
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z · score: 9 (4 votes)
What will quantum computers be used for? 2020-01-01T19:33:16.838Z · score: 11 (4 votes)
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z · score: -4 (6 votes)
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z · score: 12 (8 votes)
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z · score: 9 (8 votes)

Comments

Comment by maximkazhenkov on Book Review: Narconomics · 2020-05-23T02:24:43.364Z · score: 3 (2 votes) · LW · GW
So why is it that people who would never dream of sending their friend who tried coke to prison, or even the friend who sold that friend some of his stash, how do we end up with draconian drug laws?

There's really nothing left to explain here. They would never dream of sending their friend who tried coke to prison because they're friends. The same doesn't hold for strangers. Similarly, you'd probably let a friend of yours who just lost his home spend the night at your place, but not any random homeless person.

Comment by maximkazhenkov on What are your greatest one-shot life improvements? · 2020-05-17T13:45:41.310Z · score: 4 (4 votes) · LW · GW

Installing the Hide YouTube Comments Chrome extension stopped my habit of reading and participating in the toxic comment section of YouTube. Absolutely essential for mental hygiene if you suffer from the same habit but at the same time don't want to miss out on the great video content there.

Comment by maximkazhenkov on Machine Learning Can't Handle Long-Term Time-Series Data · 2020-05-14T13:43:59.760Z · score: 1 (1 votes) · LW · GW
ML can generate classical music just fine but can't figure out the chorus/verse system used in rock & roll.

This statement seems outdated: openai.com/blog/jukebox/

To me this development came as a surprise, and correspondingly an update towards "all we need for AGI is scale".

Comment by maximkazhenkov on A game designed to beat AI? · 2020-05-13T13:44:01.237Z · score: 3 (2 votes) · LW · GW
I don't really know SC2 but played Civ4, so by 'scouting' did you mean fogbusting? And the cost is to spend a unit to do it? Is fogbusting even possible in a real life board game?

Yes. There has to be some cost associated with it, so that deciding whether, when and where to scout becomes an essential part of the game. The most advanced game-playing AIs to date, AlphaStar and OpenAI5, have both demonstrated tremendous weakness in this respect.

What does it have to do with Markov property?

The Markov property refers to the idea that the future depends only on the current state, so the history can be safely ignored. This is true for e.g. chess or Go; AlphaGo Zero could play a game of Go starting from any board configuration without knowing how it got there. It's not easily applicable to StarCraft because of the fog of war: what you scouted inside your opponent's base a minute ago but can't see right now still provides valuable information about which action to take. Storing the entire history as part of the "game state" would add huge complexity (tens of thousands of static game states).
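
To illustrate with a minimal toy sketch (my own invented example; the class and names are hypothetical, not any real StarCraft API):

    # Toy example of how fog of war breaks the Markov property:
    # the current observation alone is not enough; the agent needs
    # memory of what it scouted earlier.

    class FogOfWarGame:
        """The opponent secretly builds 'air' or 'ground' units.
        Scouting reveals this for one step; then it is hidden again."""
        def __init__(self, opponent_tech):
            self.opponent_tech = opponent_tech  # 'air' or 'ground'
            self.scouted = False

        def scout(self):
            self.scouted = True

        def observe(self):
            if self.scouted:
                self.scouted = False  # visibility lasts one step
                return self.opponent_tech
            return "hidden"

    game = FogOfWarGame("air")
    game.scout()
    first_obs = game.observe()  # 'air' -- visible only this step
    later_obs = game.observe()  # 'hidden' -- yet the right counter
                                # still depends on first_obs
    counter = "anti-air" if first_obs == "air" else "anti-ground"
    print(later_obs, counter)   # hidden anti-air

The current observation ("hidden") is identical whether the opponent went air or ground, so a policy that looks only at the current state can't pick the right counter; it has to carry the scouting result forward as memory.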

Is fogbusting even possible in a real life board game?

Yes, see Magic: The Gathering for instance (it's technically a card game, but plenty of board games have card elements integrated into them). Or, replace chess pieces with small coin-like tokens whose identity is written on the down-facing side (this wouldn't work for chess in particular, because you can tell a piece's identity by the way it moves, but it might for some other game with moving pieces).

BTW, what is RL?

RL stands for reinforcement learning. Basically all recent advances in game-playing AI have come from this field, and it's the reason why it's so hard to come up with a board game that would be hard for AI to solve (you could always reconfigure the Turing test or some other AGI-complete task into a "board game", but that's cheating). I'd even guess it's impossible to design such a board game, because there is just too much brute-force compute now.
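
As a concrete (if toy) illustration of what RL looks like, here's a minimal tabular Q-learning sketch; the corridor environment is invented for illustration, and systems like AlphaStar and OpenAI5 use far more elaborate deep RL:

    import random

    # Toy corridor: 5 states, agent starts at 0, reward for reaching state 4.
    # Action 0 = stay, action 1 = move right.
    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

    def step(state, action):
        next_state = min(state + action, n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        return next_state, reward

    for _ in range(2000):  # episodes
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit, occasionally explore
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r = step(s, a)
            # Core update: nudge Q[s][a] toward reward + discounted future value
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2

    print(Q)  # 'move right' should end up dominating in every state

No game tree, no hand-coded strategy: the agent improves purely from reward feedback, which is the core idea behind the systems mentioned above.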

Comment by maximkazhenkov on Book Review: Narconomics · 2020-05-03T03:28:52.315Z · score: 6 (4 votes) · LW · GW

Great post. The last part is a major update to my model of how drug legalization opponents think about the issue. Perhaps, just like the climate change debate, it's all value disagreements masked as factual disagreements.

Comment by maximkazhenkov on A game designed to beat AI? · 2020-04-29T05:25:01.618Z · score: 6 (4 votes) · LW · GW

Excellent question! Once again, late to the party, but here are my thoughts:

It's very hard to come up with any board game where humans would beat computers, let alone an interesting one. Board games, by their nature, are discrete and usually perfect-information. This type of game is not only solved by AI, but solved by essentially a single algorithm. Card games with mixed-strategy equilibria like Poker do a little better; although Poker has been solved, the algorithm doesn't generalize to other card games without significant feature engineering.

If I were to design a board game to stump AIs, I would use these elements:

  • Incomplete information, with the possibility of information gathering ("scouting") at a cost (as in StarCraft 2), to invalidate the Markov property
  • Lengthy gameplay (number of moves) to make the credit assignment problem as bad as possible for RL agents
  • Patterns that require abstract reasoning to discern (e.g. the pigeonhole principle lets you conclude immediately that 1001 pigeons don't fit in 1000 pigeonholes; an insight that can't practically be learned through random exploration of permutations)

The last element in particular is a subtle art and must be used with caution, because it trades off intractability for RL against intractability for traditional AI: if the pattern is too rigid, the programmer could just hard-code it into a database.

If we consider video games instead, the task becomes much easier. DOTA 2 and StarCraft 2 AIs still can't beat human professionals at the full game despite the news hype, although they can probably beat the average human player. Some games, such as Chronotron or Ultimate Chicken Horse, might be impossible for current AI techniques to even reach average human-level performance on.

Comment by maximkazhenkov on The Best Virtual Worlds for "Hanging Out" · 2020-04-29T03:37:56.351Z · score: 3 (2 votes) · LW · GW

Planetside 2 is fascinating to me. It's one of a kind, not just in the sense of being an MMO shooter, but also in giving the player a sense of being part of something big and magnificent, collaborating not only with your small circle of friends but also with hundreds of other people towards a common goal. This sort of exciting experience is otherwise only found in real-world projects (EVE Online and browser games notwithstanding; those are more spreadsheets than games to me), and I'm really starting to think this is a hugely neglected opportunity for the gaming industry. Who knows, maybe it will be the next big trend after Battle Royale? Although shooter games, with their chaotic and computationally expensive nature, are not the best fit for it - perhaps turn-based strategy games instead?

Comment by maximkazhenkov on The Samurai and the Daimyo: A Useful Dynamic? · 2020-04-29T03:25:01.014Z · score: 7 (3 votes) · LW · GW
3) Am I, as a person, actually capable of making a positive difference in general or is my presence generally going to prove useless or detrimental?

To be blunt, I don't think you are making much of a positive difference in terms of changing the exploitative nature of the world, which you seem to be passionate about in your writing. I know it sounds terribly rude, but I couldn't find another way to put it lest I treat it as a rhetorical question.

I'm not saying you should stop doing what you're doing or that your work isn't valuable in general, any more than I'm saying athletes and theoretical physicists are morons because it's difficult to become a millionaire that way. It's just that in a world overflowing with competing memes, playing politics (in the broader sense of recruiting more people for your tribe) is not a low-hanging fruit. I would say the rationalist community isn't so much an army of generals with no soldiers to command as it is an army of recruiters with no jobs to offer (that is, if you conceive of rationality as a project rather than just an interest).

Is this something I can improve and if so, how?

Again, I'm not saying you should prioritize changing the world (over doing what you like and enjoy), but in case you want to, I'd say pick an EA cause (you probably know the details better than me) and make an actionable plan. For example, if your preferred cause is AI alignment, enroll in a MOOC on AI. Less meta-level pondering, more object-level work.

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T14:57:26.626Z · score: 1 (1 votes) · LW · GW

Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper: as you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don't get sub-agent alignment for free, whether the agent is made of carbon or silicon. Case in point: human values have drifted over time relative to the original goal of inclusive genetic fitness.

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T02:54:52.447Z · score: 1 (1 votes) · LW · GW
Dogs and livestock have been artificially selected to emphasize unnatural traits to the point that they might not appear in a trillion wolves or boars

I think you're overestimating biology. Living things are not flexible enough to accommodate GHz clock speeds or lightspeed signal transmission, despite evolution having tinkered on them for billions of years. One in a trillion is just 40 bits, which is not all that impressive; not to mention that dogs and livestock took millennia of selective breeding, which is not fast in our modern context.
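
(Spelling out the arithmetic behind the "40 bits", in case the conversion isn't obvious: a one-in-a-trillion outcome carries log2(10^12) = 12 · log2(10) ≈ 12 × 3.32 ≈ 39.9 bits of selection, i.e. roughly 40 binary choices all going the right way.)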

Comment by maximkazhenkov on Fast Takeoff in Biological Intelligence · 2020-04-26T02:30:27.339Z · score: 2 (2 votes) · LW · GW
Seems like the alignment problem for genetically engineered humans is, well, basically not a problem at all (such humans won't be any less ethical than normal children).

Why? It seems unlikely to me that there exists a genetic intelligence dial that just happens to leave all other parameters alone.

Comment by maximkazhenkov on Forbidden Technology · 2020-04-26T02:04:32.554Z · score: 2 (2 votes) · LW · GW

For me, the exact same list, but backwards.

Comment by maximkazhenkov on Life as metaphor for everything else. · 2020-04-05T22:10:14.670Z · score: 1 (1 votes) · LW · GW

Another useful metaphor in this context: Fire

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-04T01:05:29.696Z · score: 5 (3 votes) · LW · GW

Thank you for the offer; I am however currently reluctant to interact with people I met on the internet in this way. But know that your openness and forthcomingness are greatly appreciated :)

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T23:13:48.737Z · score: 1 (1 votes) · LW · GW

Nitpick: I was arguing that the Doomsday Argument would actually discourage x-risk-related work because "we're doomed anyway".

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T19:27:36.590Z · score: 5 (3 votes) · LW · GW

I agree that LessWrong is probably the place where crazy philosophical ideas are given the most serious consideration; elsewhere they're usually just mentioned as mind-blowing trivia at dinner parties, if at all. I think there are two reasons why these ideas are so troubling:

  • They are big. Failing to take account of even one of them will derail one's worldview completely
  • Being humble and not taking an explicit position is still, effectively, just taking the default position

But alas, I guess that's just the epistemological reality we live in. We'll just have to make working assumptions and carry on.

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T14:38:56.060Z · score: 3 (2 votes) · LW · GW
Even if I were convinced that we will almost certainly fail, I might still prioritize x-risk reduction, since the stakes are so high.

In this case, it isn't so much that "the stakes are high and the chances are low, so they might cancel out"; rather, there is an exact inverse proportionality between the stakes and the chances, because the Doomsday Argument operates directly through the number of observers.
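
To spell out the inverse proportionality (my formalization, not from the original discussion): under self-sampling, the likelihood of observing your particular birth rank given a total of N observers is 1/N, so the posterior probability of humanity reaching N observers picks up a 1/N factor. If the stakes scale with N, then

    stakes × chances ∝ N × (1/N) = constant

which is why the two don't merely "might cancel out" but cancel exactly.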

If it does work though, its conclusion is not that we will all go extinct soon, but rather that ancestor simulations are one of the main uses of cosmic resources.

I feel like being in a simulation is just as terrible a predicament as doom soon; given all the horrible things that happen in our world, the simulators are clearly Unfriendly, and they could easily turn off the simulation or thwart our efforts at creating an AI. Basically, we're already living in a post-Singularity dystopia, so it's too late to work on it.

I have a much harder time accepting the Simulation Hypothesis, though, because there are so many alternative philosophical considerations that could be pursued. Maybe we are (I am) Boltzmann brains. Maybe we live in an inflationary universe that expands 10^37-fold every second. Maybe minds do not need instantiation, or anything like a rock could count as an instantiation. Etc.

Going one meta level up, I can't help but feel like a hypocrite for lamenting the lack of attention given to intelligence explosion and x-risks by the general public while failing to seriously consider all these other big, weird philosophical ideas. Are we (the rationalist community) doing the same as people outside it, just with a slightly shifted Overton window? When is it OK to sweep ideas under the rug and throw our hands up in the air?

Comment by maximkazhenkov on Implications of the Doomsday Argument for x-risk reduction · 2020-04-03T13:44:27.610Z · score: 1 (1 votes) · LW · GW

But isn't the point of the Doomsday Argument that we'll need very, very, VERY strong evidence to the contrary to have any confidence that we're not doomed? Perhaps we should focus on drastically controlling future population growth to better our chances of prolonged survival?

Comment by maximkazhenkov on How special are human brains among animal brains? · 2020-04-02T16:54:55.682Z · score: 1 (1 votes) · LW · GW

I think what the author meant was that the anthropic principle removes the lower bound on how likely it is for any particular species to evolve language; similar to how the anthropic principle removes the lower bound on how likely it is for life to arise on any particular planet.

So our language capability constitutes zero evidence for "evolving language is easy" (thus dissolving any need to explain why language arose; it could just be a freak 1-in-10^50 accident), similar to how our existence constitutes zero evidence for "life is abundant in the universe" (thus dissolving the Fermi paradox).
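
In Bayesian terms (my own restatement of the argument): any observer capable of asking the question necessarily has language, so

    P(we observe ourselves having language | language is easy) = P(we observe ourselves having language | language is hard) = 1

The likelihood ratio is 1, so the posterior odds on "easy" vs. "hard" are just the prior odds; the observation itself moves nothing.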

Comment by maximkazhenkov on When to assume neural networks can solve a problem · 2020-03-29T23:50:54.923Z · score: 1 (1 votes) · LW · GW
You are aware chatbots have been "beating" the original Turing test since 2014, right?

Yes, I was, in fact. Seeing where this internet argument is going, I think it's best to leave it here.

Comment by maximkazhenkov on When to assume neural networks can solve a problem · 2020-03-29T19:35:50.664Z · score: -2 (2 votes) · LW · GW
ML playing any possible game better than humans, assuming a team actually works on that specific game (maybe even if one doesn't), with human-like inputs and human-like limitations in terms of granularity of taking inputs and giving outputs.

I disagree with this point in particular. I'm assuming you're basing this prediction on the recent successes of AlphaStar and OpenAI5, but there are obvious cracks upon closer inspection.

The "any possible game" part, though, is the final nail in the coffin to me since you can conceive plenty of games that are equivalent or similar to the Turing test, which is to say AGI-complete.

(Although I guess AGI-completeness is a much smaller deal to you)

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:20:44.592Z · score: 1 (1 votes) · LW · GW

It makes no difference if the marginal distributed harm to all of society is so overwhelmingly large that your share of it is still death.

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:13:21.300Z · score: 1 (1 votes) · LW · GW
Also, I suspect this coordination might extend further, to AGIs with different architectures also.

Why would you suppose that? The design space of AI is incredibly large, and humans are clear counter-examples, so the question one ought to ask is: is there any fundamental reason an AGI that refuses to coordinate will inevitably fall off the AI risk landscape?

Comment by maximkazhenkov on AGI in a vulnerable world · 2020-03-29T17:00:35.586Z · score: 1 (1 votes) · LW · GW

The actual bootstrapping takes months, years or even decades, but it might only take 1 second for the fate of the universe to be locked in.

Comment by maximkazhenkov on Seeing the Smoke · 2020-03-02T06:55:48.390Z · score: 0 (3 votes) · LW · GW

I'm totally against the coronavirus if that's what you're wondering about; didn't think I'd need to signal that.

Comment by maximkazhenkov on Seeing the Smoke · 2020-02-28T22:27:35.443Z · score: 2 (6 votes) · LW · GW

Millions of deaths worldwide would be kind of "meh", to be honest. In emerging economies like China and India, it's the price paid yearly (lung diseases due to air pollution) in return for faster economic growth. In first-world countries it's a bit more unusual, but even in the most endangered age group the risk is at worst (i.e. complete loss of containment, like the flu) still only on the same order of magnitude as cancer and cardiovascular diseases.

Overall, not great not terrible. I certainly wouldn't classify it as a "major global disaster".

Comment by maximkazhenkov on Making Sense of Coronavirus Stats · 2020-02-28T02:25:48.190Z · score: 1 (1 votes) · LW · GW
What I believe is that if other countries do not take similar measures to China, this thing is going to rapidly spread.

Would you say the measures taken in Italy and South Korea (particularly the lockdown of towns in Northern Italy) are sufficiently similar to China's?

The rosiest outcome I can imagine is warm weather halts the spread of the disease, and then we get a vaccine ready by the time fall rolls around.

I find that rather unlikely considering the virus' spread in warm regions like Singapore.

Comment by maximkazhenkov on Response to Oren Etzioni's "How to know if artificial intelligence is about to destroy civilization" · 2020-02-28T00:56:09.951Z · score: 4 (3 votes) · LW · GW

I find that Winograd schemas are more useful as a guideline for adversarial queries to stump AIs than as an actual test. An AI reaching human-level accuracy on Winograd schemas would be much less impressive to me than an AI passing the traditional Turing test conducted by an expert who is aware of Winograd schemas and experienced in adversarial queries in general. The former is more susceptible to Goodhart's law due to the stringent format and limited problem space.

Comment by maximkazhenkov on Quarantine Preparations · 2020-02-27T14:45:54.442Z · score: 1 (1 votes) · LW · GW

Thanks for the reference

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-27T02:02:50.809Z · score: 1 (1 votes) · LW · GW
From the sound of it, they stayed in the lane mainly via some hand-coded image processing which looked for a yellow/white strip surrounded by darker color.

That is what I've heard about other research groups, but it's a bit surprising coming from Tesla. I'd imagine things have changed dramatically since then, considering this video, albeit insufficient as any sort of safety validation, still demonstrates they're way beyond just following lane markings. According to Musk, they're pushing hard for end-to-end ML solutions. That would make sense given the custom hardware they've developed and also the data leverage they have with their massive fleet, combined with over-the-air updates.

Comment by maximkazhenkov on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-02-27T00:07:34.931Z · score: 1 (1 votes) · LW · GW

I agree - the fatality rate is just much too low to affect anything long term.

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T23:49:43.273Z · score: 1 (1 votes) · LW · GW

What makes driving on surface streets so different from driving on highways that current state-of-the-art ML techniques wouldn't be able to handle it with slightly more data and compute?

Unlike natural language processing, AI doctors or household robots, driving seems like a very limited, non-AGI-complete task to me, because a self-driving car never truly interacts with humans or objects beyond avoiding hitting them.

we also need to notice objects which are going to move into the lane; we need trajectory tracking and forecasting. And we need the trajectory-tracker to be robust to the object classification changing (or just being wrong altogether), or sometimes confusing which object is which across timesteps, or reflections or pictures or moving lights, or missing/unreliable lane markers, or things in the world zig-zagging around on strange trajectories, or etc.

I would claim all of the above are also required for driving on the highway.

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T18:58:09.290Z · score: 1 (1 votes) · LW · GW

Is there evidence for this claim? I've only ever seen evidence to the contrary.

Comment by maximkazhenkov on Quarantine Preparations · 2020-02-26T16:41:25.702Z · score: 4 (2 votes) · LW · GW

How would UDT solve anthropic reasoning? Any links?

Comment by maximkazhenkov on New article from Oren Etzioni · 2020-02-26T16:08:01.538Z · score: 1 (1 votes) · LW · GW
A self-driving car which cannot correctly handle unrecognized objects is not safe

But so what? People are not safe either; they have slower reaction times than machines, especially when intoxicated. For every example of a self-driving car causing an accident due to an object recognition failure, I can point to a person causing an accident due to a reaction-time or attention failure. Why give preference to human failure modes?

You can always come up with arbitrarily contrived edge cases where a narrow AI requires robust value alignment like an AGI (e.g. this ridiculous trolley problem) to behave correctly, and thereby reduce any real-world narrow-AI application to an AGI problem. Thing is, one day China is going to say "Fuck it, we need to get ahead on this AI issue" and just let loose existing self-driving cars onto their streets; the rest gets sorted out by the insurance market and incremental tech improvements. That's my prediction of how we'll transition into self-driving.

Comment by maximkazhenkov on On unfixably unsafe AGI architectures · 2020-02-20T13:08:17.753Z · score: 1 (1 votes) · LW · GW
Whatever MIRI is doing in their undisclosed research program (involving Haskell I guess)

Uh... Haskell? I'm intrigued now.

Comment by maximkazhenkov on Moral public goods · 2020-01-27T13:25:02.130Z · score: 1 (1 votes) · LW · GW

The same way taxation is a coordination mechanism.

Taxation = social arrangement

Fines/prison sentence for tax evasion = enforcement mechanism

Charitable donation = social arrangement

Higher taxation = enforcement mechanism

Comment by maximkazhenkov on Moral public goods · 2020-01-27T11:02:50.169Z · score: 1 (1 votes) · LW · GW
It's not a coordination mechanism; it doesn't allow people to commit to giving money if and only if everyone else also gives money, as a tax does. Even if giving money was free (untaxed), the OP's coordination problem would remain.

Actually, it is; it's just a bit contrived. The penalty for violating the "commitment" is having to pay extra taxes (losing the tax break). Just a matter of labels.

Comment by maximkazhenkov on Moral public goods · 2020-01-27T00:31:22.104Z · score: 2 (2 votes) · LW · GW

Maybe because there is value left on the table? You could apply the same logic to any new idea: "If it were so great, someone would have already thought of it and exploited it, so it clearly can't be that great."

Also, I would claim the charity tax deduction already is such a coordination mechanism, allowing the rich to engage in philanthropy in ways they believe to be more effective than taxation (e.g. they would like more of their donations going towards foreign aid)

Comment by maximkazhenkov on Moral public goods · 2020-01-26T20:24:39.795Z · score: 6 (3 votes) · LW · GW
Some of the modern super-rich do generate disproportionately high value, e.g. from high-risk bets they made to build innovative companies. But most of their income still comes from capital and owning the tools of production and all that (citation required). And this influences the moral calculus for a lot of people. The reason for taking some of their property (income) is not just that most people want to do it or that someone else would enjoy it much more, it's that it shouldn't be theirs to begin with.

This isn't a post about social justice and wealth inequality in general. The moral calculus from the point of view of most people isn't the point of contention here; it's the point of view of the rich that's being discussed.

Comment by maximkazhenkov on Hedonic asymmetries · 2020-01-26T19:13:41.070Z · score: 3 (3 votes) · LW · GW

OK, I see; I was just confused by the wording "given some more time". I've become less optimistic over time about how long this disequilibrium will last, given how quickly certain religious communities are growing with the explicit goal of outbreeding the rest of us.

Comment by maximkazhenkov on Hedonic asymmetries · 2020-01-26T17:58:11.307Z · score: 2 (2 votes) · LW · GW
And evolution doesn't seem to be likely to "fix" that given some more time.

Why would you suppose that?

We don't behave in a "Malthusian" way, investing all extra resources in increasing the number or relative proportion of our descendants in the next generation. Even though we definitely could, since population grows geometrically. It's hard to have more than 10 children, but if every descendant of yours has 10 children as well, you can spend even the world's biggest fortune. And yet such clannish behavior is not a common theme of any history I've read; people prefer to get (almost unboundedly) richer instead, and spend those riches on luxuries, not children.

Isn't that just due to the rapid advance of technology creating a world in disequilibrium? In the ancestral environment of pre-agricultural societies, the behaviors you describe line up pretty well with maximizing inclusive genetic fitness; any recorded history you can read is too recent and too short to reflect what evolution was selecting for.

Comment by maximkazhenkov on Material Goods as an Abundant Resource · 2020-01-26T06:36:34.437Z · score: 3 (2 votes) · LW · GW

I'm no homo economicus and don't intend to become one; give me a duplicator and I shall drop out of the economy.

Comment by maximkazhenkov on Go F*** Someone · 2020-01-15T22:17:28.471Z · score: 11 (13 votes) · LW · GW

This is self-help-book-level advice

Comment by maximkazhenkov on Is backwards causation necessarily absurd? · 2020-01-15T18:37:25.962Z · score: 1 (1 votes) · LW · GW

Yes, but the direction of causality is very much preserved. The notion of present is not necessary in a directed acyclic graph.

Comment by maximkazhenkov on Predictors exist: CDT going bonkers... forever · 2020-01-15T18:31:23.135Z · score: 2 (2 votes) · LW · GW

But considering that randomness, as an antidote to perfect prediction, is ubiquitously available in this universe, it's hard to see what practical implications these CDT failures in highly contrived thought experiments have.

Comment by maximkazhenkov on What long term good futures are possible. (Other than FAI)? · 2020-01-12T22:30:51.593Z · score: -8 (8 votes) · LW · GW

No.

Comment by maximkazhenkov on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T22:23:01.049Z · score: 3 (3 votes) · LW · GW
Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.

But we're not comparing the probability of "a successful start-up will be created" vs. the probability of "an AGI will be created" in the next x years; we're comparing the probability of "an AGI will be created by a large organization" vs. the probability of "an AGI will be created by a single person on his laptop", given that an AGI will be created.

Without the benefit of hindsight, are PageRank and reusable rockets any more obvious than the hypothesized AGI key insight? If someone with no previous experience in aeronautical engineering - a highly technical field - can out-innovate established organizations like Lockheed Martin, why wouldn't the same hold true for AGI? If anything, the theoretical foundations of AGI are less well-established and the entry barrier lower by comparison.

Comment by maximkazhenkov on Plausible A.I. Takeoff Scenario Short Story · 2020-01-01T20:04:37.574Z · score: 2 (2 votes) · LW · GW
I actually agree that the "last key insight" is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day.

If that were true, start-ups wouldn't be a thing, we'd all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright.

Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.

But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It certainly would be unwise to.

Comment by maximkazhenkov on Phage therapy in a post-antibiotics world · 2019-12-30T06:30:43.648Z · score: 1 (1 votes) · LW · GW

Seems unlikely, as phages can evolve just as fast.