Posts

Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z · score: 5 (2 votes)
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z · score: 9 (4 votes)
What will quantum computers be used for? 2020-01-01T19:33:16.838Z · score: 11 (4 votes)
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z · score: -4 (6 votes)
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z · score: 12 (8 votes)
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z · score: 9 (8 votes)

Comments

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-10T15:40:19.790Z · score: 1 (1 votes) · LW · GW

There is no perfect match with Bostrom's vulnerabilities, because the book assumed there was a relatively safe strategy: hide. If no one knows you are there, no one will attack you, because although the "nukes" are cheap, they would be destroying potentially useful resources.

Not relevant; if you succeed in hiding, you simply fall off the vulnerability landscape. We only need to consider what happens when you've been exposed. Also, whose resources? It's a cosmic commons, so who cares if it gets destroyed.

The point of the Dark Forest hypothesis was precisely that in a world with such asymmetric weapons, coordination is not necessary. If you naively make yourself visible to a thousand potential enemies, it is statistically almost certain that someone will pull the trigger, for whatever reason.

That's just a Type-1 vulnerable world. There's no need for the contrived argumentation the author gave.

There is a selfish reason to pull the trigger; any alien civilization is a potential extinction threat.

Not really; cleaning up extinction threats is a public good that generally falls prey to the Tragedy of the Commons. Even if you made the numbers work out somehow - which is very difficult and requires certain conditions that the author has explicitly refuted (like the impossibility of colonizing other stars or of sending out spam messages) - it would still not be an example of Moloch. It would be an example of pan-galactic coordination, albeit a perverted one.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-10T15:18:36.136Z · score: 4 (3 votes) · LW · GW

Very much disagree. My sense is that the book series is pretty meagre on presenting "thoughtful hard science" as well as game theory and human sociology.

To pick the most obvious example - the title of the trilogy* - the three-body problem was misrepresented in the books as "it's hard to find the general analytic solution" instead of "the end state is extremely sensitive to changes in the initial condition", and the characters in the book (both humans and Trisolarians) spend eons trying to solve the problem mathematically.

But even if an exact solution were found - one does exist for some chaotic systems, like the logistic map - it would have been useless, since the initial condition cannot be known perfectly. This isn't a minor nitpick like the myriad other scientific problems with the Trisolarian system that can be more easily forgiven as artistic license; this is missing what chaotic systems are about. Why even invoke the three-body problem other than as attire?

*not technically the title of the book series, but frequently referred to as such

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-02T12:08:45.865Z · score: 3 (2 votes) · LW · GW

Where exactly do you see Moloch in the books? It's quite the opposite if anything; the mature civilizations of the universe have coordinated around cleaning up the cosmos of nascent civilizations, somehow, without a clear coordination mechanism. Or perhaps it's a Type-1 vulnerable world, but it doesn't fit well with the author's argumentation. I'm not sure, and I'm not sure the author knows either.

I'm still a little puzzled by all the praise for the deep game-theoretic insights the book series supposedly contains, though. Maybe game theory as attire?

Comment by maximkazhenkov on Covid 10/1: The Long Haul · 2020-10-02T11:47:47.911Z · score: -1 (8 votes) · LW · GW

You were also exposed to all sorts of risks if you were "below some wealth threshold where they cannot act as they would reasonably like to because their alternative is homelessness, malnourishment, etc." even before Corona came around. The situation hasn't changed all that much.

But, as Elon Musk famously said: "If you don't make stuff, there is no stuff".

Comment by maximkazhenkov on The Goddess of Everything Else · 2020-10-01T12:28:51.260Z · score: 0 (2 votes) · LW · GW

Spreading across the stars without number sounds more like a "KILL CONSUME MULTIPLY CONQUER" thing than it sounds like an "Everything Else" thing. I'm missing something of the point here.

ETA: Is the point that over time Man evolved to be what he is today, we have a conception of right and wrong, and we're the first link in the chain that actually cares about making sure our morals propagate forward as we evolve? So now the force of evolution has been co-opted into spreading human morality?

No. I recommend reading Meditations on Moloch first, then everything becomes clear.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-30T18:22:44.093Z · score: 2 (2 votes) · LW · GW

That's a pretty extreme over-dramatization. Corona isn't even 1% as bad.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-30T18:17:46.075Z · score: 6 (4 votes) · LW · GW

The Mote in God's Eye is a pretty good example of social science fiction in addition to being a great science fiction novel in general.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-29T11:22:16.904Z · score: 2 (2 votes) · LW · GW

If the Coronavirus had a 30% fatality rate people would care a lot about not getting infected in the real world, too.

Comment by maximkazhenkov on About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic · 2020-09-27T22:41:42.373Z · score: 1 (1 votes) · LW · GW

You mean the Nash equilibrium strategy? Rock-Paper-Scissors is a zero-sum game, so Pareto optimality is a trivial notion here.

Comment by maximkazhenkov on About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic · 2020-09-27T22:39:02.793Z · score: 1 (3 votes) · LW · GW

Regardless of what the new player does, there is no reason to ever play scissors. I don't see any interesting "4-choice dynamic" here. Perhaps you should pick a different example with multiple Nash equilibria.
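
To make the dominance point concrete, here is a quick check in Python. The thread doesn't spell out the local variant's exact rules, so the payoff matrix below is a stand-in (the classic Rock-Paper-Scissors-Well, where the weakly dominated option happens to be rock rather than scissors); the point is only that a dominated fourth option adds no interesting dynamic.

```python
# Stand-in payoffs: classic Rock-Paper-Scissors-Well (hypothetical example,
# not the variant described in the post).
OPTIONS = ["rock", "paper", "scissors", "well"]
BEATS = {
    "rock": {"scissors"},
    "paper": {"rock", "well"},
    "scissors": {"paper"},
    "well": {"rock", "scissors"},
}

def payoff(a: str, b: str) -> int:
    """+1 if a beats b, -1 if a loses to b, 0 for a tie."""
    if b in BEATS[a]:
        return 1
    if a in BEATS[b]:
        return -1
    return 0

def weakly_dominated_by(option: str) -> list:
    """Options that do at least as well as `option` against every move,
    and strictly better against at least one."""
    dominators = []
    for other in OPTIONS:
        if other == option:
            continue
        at_least_as_good = all(payoff(other, b) >= payoff(option, b) for b in OPTIONS)
        strictly_better = any(payoff(other, b) > payoff(option, b) for b in OPTIONS)
        if at_least_as_good and strictly_better:
            dominators.append(other)
    return dominators

for opt in OPTIONS:
    doms = weakly_dominated_by(opt)
    if doms:
        print(f"{opt} is weakly dominated by {doms}")
# With these stand-in payoffs, "rock" is weakly dominated by "well",
# so the game collapses back to an effective three-option Rock-Paper-Scissors.
```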

Comment by maximkazhenkov on Needed: AI infohazard policy · 2020-09-22T01:31:25.291Z · score: 1 (1 votes) · LW · GW

Another advantage AI secrecy has over nuclear secrecy is that there's a lot of noise and hype these days about ML both within and outside the community, making hiding in plain sight much easier.

Comment by maximkazhenkov on Needed: AI infohazard policy · 2020-09-22T01:27:47.194Z · score: 1 (1 votes) · LW · GW
In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

I'm not sure what "safe" means in this context, but it seems to me that publishing safe AGI is not a threat and it's the unsafe but potentially very capable AGI research we should worry about?

And the statement "In the midgame, it is unlikely for any given group to make it all the way to s̶a̶f̶e̶ AGI by itself" seems a lot more dubious given recent developments with GPT-3, at least according to most of the LessWrong community.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-22T00:22:52.016Z · score: 1 (1 votes) · LW · GW
Industrializing space is desirable because industrializing Earth has had a number of negative side effects on the biosphere, so moving production outside the biosphere would be a positive development. My argument is that the option of staying home is clearly economically preferable for now, and will be unless we see major cost reductions in space technology.

I thought your argument is that we should industrialize space because it's economically viable?

Putting that aside, environmentalism is just about the last reason for space activities. Space travel has had a negligible impact on the environment thus far only because there has been so little space travel. But on a per-kilogram-payload basis, even assuming the cleanest methalox/hydrolox fuel composition produced purely from solar power, the NO2 from hot exhaust plumes and the ozone-eating free radicals from reentry heating alone are enough to make any environmentalist screech in horror. You'd have to go to the far end of level 3 tech to begin making this argument, and even then it still isn't an economic incentive. You can't seriously dismiss space tourism as a driver for space travel and then propose environmentalism as an alternative.

Whether SpaceX and other launch vehicle organizations can reach the Level 2 threshold you describe remains to be seen, and LVs are only part of the price tag. Materials, equipment, and labor represent a large segment of space mission cost, and only if we can also drive those down by similar degrees do the economics of colonization start making sense.

Space is hard, sure, but how does that help your point exactly? Colonization doesn't have to (and won't) make economic sense. Industrialization does.

Note, too, that ΔV is non-trivial, even when we start getting to high specific-impulse technologies.

Not really. This isn't relevant for the Moon vs Mars debate, but even for the outer planets I would argue:

  • Short travel time isn't necessary for colonizing or industrializing outer planets
  • Nuclear fusion can realistically go up to 500,000 s of Isp, dwarfing any reasonable requirement for travel inside the solar system

Also, all the analysis with hyperbolic orbits is kind of unnecessary, as the solar gravity well becomes trivial for short transfers. You could just as well assume the target planets to be fixed points and get the Δv requirement from distance divided by desired travel time (×2 for deceleration).
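
A minimal sketch of that back-of-the-envelope estimate (the distance and travel time below are illustrative assumptions, not figures from the thread):

```python
# Treat the target as a fixed point: accelerate to cruise speed, coast,
# then decelerate, so delta-v ~ 2 * distance / travel_time.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def delta_v_estimate(distance_m: float, travel_time_s: float) -> float:
    """Rough delta-v (m/s), ignoring gravity wells and orbital mechanics."""
    return 2 * distance_m / travel_time_s

# Example: roughly the Earth-Neptune distance (~4.3e12 m) covered in one year.
dv = delta_v_estimate(4.3e12, 1 * SECONDS_PER_YEAR)
print(f"delta-v ~ {dv / 1000:.0f} km/s")  # ~270 km/s
```

Even this crude number sits far below the exhaust velocities implied by a 500,000 s Isp, which is the point about fusion drives above.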

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-21T23:34:02.200Z · score: 1 (1 votes) · LW · GW

Government: Ask Kennedy

Private sector: Ask Musk

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-18T05:37:24.796Z · score: 1 (1 votes) · LW · GW

I'm still confused about your critique, so let me ask you directly: In the scenario outlined by the OP, do you expect humans to eventually evolve to stop feeling pain from electrical shocks?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-17T04:00:25.886Z · score: 1 (1 votes) · LW · GW

Evolution can't dictate what's harmful and what's not; bigger peacock tails can be sexually selected for until they become too costly for survival, and an equilibrium sets in. In our scenario, since pain-inducing stimuli are generally bad for survival, there is no selection pressure to increase the pain threshold for electrical shocks beyond a certain equilibrium point. Because we start out with a nervous system that associates electrical shocks with pain, this pain becomes a pessimistic error past the equilibrium point and never gets fixed, i.e. humans still suffer under electrical shocks, just not so badly that they'd rather kill themselves.

Suffering is not rare in nature because actually harmful things are common and suffering is an adequate response to them.

Why then is it possible to suffer pain worse than death? Why do people and animals suffer just as intensely beyond their reproductive age?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-17T00:53:05.117Z · score: 1 (1 votes) · LW · GW

Yes, that's what pessimistic errors are about. I'm not sure what exactly you're critiquing though?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-16T00:50:39.902Z · score: 3 (2 votes) · LW · GW
The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival. But this is missing the bigger picture.

No, it isn't. What do I care which values evolution originally intended to align us with? What do I care which direction dysgenic pressure will push our values in the future? Those aren't my values, and that's all I need to know.

After all, if you forget to shock yourself, or choose not to, then you are immediately killed. So the people in this country will slowly evolve reward and motivational systems such that, from the inside, it feels like they want to shock themselves, in the same way (though maybe not to the same degree) that they want to eat.

No, there is no selection pressure to shock yourself more than the required amount; anything beyond that is still detrimental to your reproductive fitness. Once we've evolved to barely tolerate the pain of electric shocks so as not to kill ourselves, the selection towards more pain tolerance stops, and people will still suffer a great deal because there is no incentive for evolution to fix pessimistic errors. You could perhaps engineer scenarios in which humans would genuinely evolve to like a dystopia, but that certainly doesn't apply to most cases; otherwise suffering would already be a rare occurrence in nature.

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T10:35:28.656Z · score: 1 (1 votes) · LW · GW

Well, this requirement doesn't appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures, which would be necessary for such a global social system anyway. It would have to be totalitarian in any case (though not necessarily centralized).

It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn't turn out to be a thing. For me that's the most intriguing aspect of the FAI problem; there are plenty of existential risks to go around but FAI as an existential opportunity is unique.

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T07:22:53.684Z · score: 1 (1 votes) · LW · GW
  • Maybe one-shot Prisoner's Dilemma is rare and Moloch doesn't turn out to be a big issue after all
  • On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn't any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering)
Comment by maximkazhenkov on The Case for Human Genetic Engineering · 2020-09-15T07:13:55.977Z · score: 1 (1 votes) · LW · GW

That's just the label for the process of how eukaryotes came about and makes no statement about its likelihood, or am I missing something?

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T06:44:07.567Z · score: 1 (1 votes) · LW · GW
Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Royal dynasties and political parties are not Singletons by any stretch of the imagination. Infighting is Moloch. But even if we assumed an immortal, benevolent human dictator, a dictator only exercises power through keys to power and still has to constantly fight off competition for that power. Stalin didn't start the Great Purge for shits and giggles; rather, it is a pattern that keeps repeating with rulers throughout history.

The hope with artificial superintelligence is that, due to the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable and free of mesa-optimization, and also more powerful than all other agents in the universe combined by a huge margin. If no AI can satisfy these conditions, we are just as doomed.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

That's not defeating Moloch, that's surrendering completely and unconditionally to Moloch in its original form of natural selection.

Comment by maximkazhenkov on If Starship works, how much would it cost to create a system of rotable space mirrors that reduces temperatures on earth by 1° C? · 2020-09-14T21:47:42.622Z · score: 6 (4 votes) · LW · GW

Relevant:

Feasibility of cooling the Earth with a cloud of small spacecraft near the inner Lagrange point

Comment by maximkazhenkov on [deleted post] 2020-09-10T17:56:55.749Z

Reported for GPT-spamming

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-10T04:18:38.739Z · score: 1 (1 votes) · LW · GW
But are there no risks that could wipe out humanity on Earth that wouldn't also kill a Mars colony? A comet impacting the Earth might be at the right scale for that. Or maybe a runaway greenhouse effect triggered by our carbon emissions.

I have thought about both scenarios and, no, I don't think either is plausible. I find natural x-risks not worth defending against in general due to their unlikelihood and lack of severity. If a planet allows complex but non-technological life to exist for hundreds of millions of years, it has nothing to throw at us in the next few hundred years.

Regarding meteor impact specifically, I think a comet would have to be significantly bigger than the one that caused the Chicxulub crater and failed to wipe out the dinosaurs. Birds are not close cousins of dinosaurs; they are their direct descendants. And had that meteor missed the Earth, dinosaurs would likely have evolved into something that looks very different from what walked the Earth 65 million years ago, just as we look very different from early mammals.

We, like the dinosaurs, are spread all over the Earth across every climate zone. Unlike the dinosaurs, we have technology at our disposal from stone tools to computers. Even the ruins of our civilization will provide many useful tools to ensure the survival of at least the tiniest fraction of humanity. I believe we are far more resilient than dinosaurs.

Since the distribution of meteor sizes follows a power law, it's unlikely that Earth will encounter a comet or asteroid large enough to wipe out humanity outright within the remaining lifespan of the biosphere, let alone within the next few centuries.
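
To put a rough number on that rarity claim (the impact rate below is my own ballpark assumption for Chicxulub-scale, ~10 km impactors, not a figure from this comment; an impactor big enough to wipe out humanity outright would be larger and, under a power law, rarer still):

```python
import math

# Assumed mean rate of Chicxulub-scale (~10 km) impacts: roughly one per
# 100 million years. Ballpark assumption for illustration only.
RATE_PER_YEAR = 1 / 100_000_000

def prob_at_least_one(years: float, rate: float = RATE_PER_YEAR) -> float:
    """Poisson model: P(at least one impact in the window) = 1 - exp(-rate * years)."""
    return 1 - math.exp(-rate * years)

# Even a Chicxulub-scale hit is vanishingly unlikely over a few centuries.
print(f"P(Chicxulub-scale impact in next 500 years) ~ {prob_at_least_one(500):.1e}")
```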

But if we were to hedge against such an impact, the most cost-effective way would be to create large underground bunkers with infrastructure and industry to keep a small isolated civilization running indefinitely. If we can build a self-sufficient colony on Mars, we sure as hell could do it on Earth.

Regarding the runaway greenhouse effect, we have geological records testifying to CO2 concentrations above 1000 ppm in the Cretaceous period, which didn't cause a runaway greenhouse, and I expect climate catastrophes to limit our ability to pump more CO2 into the atmosphere well before then through regional economic collapse. Since it's a gradual process, there is also time for negative feedbacks like plant growth to kick in, and for drastic geoengineering efforts such as deliberately setting off a nuclear winter.

My favorite example of an x-risk-for-Earth-but-not-Mars, at first glance, is a microscopic black hole swallowing the Earth. Since a black hole with Earth's mass would follow the same orbit, you'd think it wouldn't have any effect on the rest of the solar system. Unfortunately, 1) there is no physical grounding for the thesis that such a black hole would be stable, and 2) the energy released in such an event would be akin to setting off a supernova inside the solar system, nuking everything from here to Pluto.

Finally, I'm not sure what you mean by "hedging against the collapse of civilization". A Mars colony doesn't stop civilization from collapsing on Earth. It would help avoid a delay in technological progress, but in the long run a delay of a few centuries is of no particular importance.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T22:30:50.920Z · score: 2 (2 votes) · LW · GW
There’s also disagreement about the efficacy of using Luna as a refueling stop, so to speak, en route to the Red Planet. From an orbital mechanics standpoint, it’s not a slam-dunk idea, but the argument in practice depends heavily on the specific logistics. In-situ fuel production might just make such a configuration worth it.

I think it's pretty much a slam-dunk that refueling on the moon is a bad idea. Adding lots of complexity (and thus failure points) and the cost of establishing the necessary infrastructure, for what can be accomplished by a few refueling trips by earthbound tankers, seems unnecessary, especially considering it's not even the right fuel. And if you're talking about expendable rockets, well, Robert Zubrin has done detailed analysis on why refueling on the moon is utterly counterproductive and Mars Direct is better. Delta-v saved ≠ money saved.

While Mars is obviously the more attractive target for colonization, we are a very long way from building colonies on other celestial bodies, no matter how good of an idea it is.

Very much disagree on space colonies as a hedge against human extinction. I could write a more detailed critique, but the bottom line is that there is no x-risk severe enough to wipe out all (not merely 99.999% of) humans on Earth yet not severe enough to also wipe out all moon/Mars colonies.

The reason is very simple: space colonization is an unspeakably expensive proposition.

Not necessarily. A Senate-run space program is definitely an unspeakably expensive proposition, though.

I have yet to think of an economic need which a self-sustaining population on Mars would fulfill, that innovative strategies could not fulfill on Earth. Farming food on Mars? We can do hydroponics here. Running out of room to house people? We’re nowhere near that kind of population density. New legal environments to test out social engineering concepts? Seasteads and charter cities are way safer and less expensive. Climate change? Just tax carbon and build nuclear power plants, sheesh.

Agreed, except for the part about seasteading. Staying home is even safer and less expensive. Put in a less tongue-in-cheek way: the difficulty of reaching Mars is why a Mars colony has a chance of becoming an independent civilization in the first place. Sending supplies to Mars is so difficult that the colonists would be better off building up their own supply chains in the long term for anything but the most value-dense equipment like microprocessors. The same isn't true for a seastead; sure, you could in theory build your own economy, but realistically you'll just end up importing everything because it's easy, becoming heavily reliant on the outside world and independent in name only. You're also within reach of any tax-collecting naval power in the world.

No one will front the money to build Mars colonies until there’s an economic incentive to do so. I see no such economic incentive. I would love to be wrong about this, because Mars is the best colonization target by far. But I don’t think I am.

Depends on what amount you're talking about. If it's <$100 billion, mere prestige would be enough of an incentive.

As the people of Earth demand an increasingly high standard of living and simultaneously a cleaner environment, I suspect that this may prove to be the ultimate driver of off-world industrialization. Again, though, speculation.

Very far-fetched argument. To relocate the vast amount of industry required to make a significant positive impact on the environment, you'd need to lower launch costs close to maritime shipping costs today. And at that point, supplying off-world colonies would be just as easy.

Critically, space industrialization is different from space colonization. Developing an off-world economy is a pre-requisite for seeing a large, permanent population above the atmosphere.

A dubious conclusion. Do you propose relocating entire supply chains off-world, or just small bits? If it's the former, it's no easier than founding a self-sufficient colony. If it's the latter, it's not worth it due to exorbitant transportation costs back and forth from Earth.

Governments may choose to pay for scientific missions to other planets; they will not front the costs of developing entire planets quite literally from the ground up. Whatever outputs space agencies may build, they will not be colonies.

colony ≠ terraforming

People won’t live there, the way that human populations have whenever establishing themselves in a new locality. There won’t be families and new businesses and the like, not for a long time.
Instead, we’re probably going to see many largely-automated operations, with minimal and possibly intermittent human presence.

I think you're seriously overestimating the capability of robots. Compare what the Apollo astronauts were able to do on the moon with what Mars rovers have done.

As we push towards human settlement in space, our focus should therefore be the development of new industries and new technologies to enable and motivate working above the atmosphere.

This sounds like a call to action, but if human settlement in space were profitable, it would happen anyway. Also, who is "we"?

One day, our species will span three worlds. That day remains very far away. Rather than fixate on terraforming dreams, we should chart a course carried by the currents of economic necessity. With the correct regulatory environment and technological investments, we can begin building sustainable off-world industries in a realistic timescale. Such industries will carry us to the planets in the pursuit of profit—a far more reliable motivator than any humanitarian spirit from politicians.
That, I suspect, is what the future of space travel is going to come down to. Do we pursue an incremental strategy that eventually carries us to the ends of the Solar System, or do we wallow on this one planet, fantasizing of an amazing future no one has any incentive to hand us? Are we going to fixate on self-sustained colonies and settle for nothing less, or shall we go to Luna first, but not to live there?

Again, colony ≠ terraforming, and again, I'm curious to hear your thoughts on why Mars Direct/Elon Musk's plan won't pan out. In any case, whatever your vision for future human space exploration is exactly, the only thing that matters right now is lowering launch costs.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T21:12:02.345Z · score: 6 (2 votes) · LW · GW

I would like to take the complete opposite position and argue that self-sufficient space colonies will happen long before Earth reaps any benefits from industrial activities in outer space, if ever. The reason I believe this is that outer space activities won't be economically driven for a long time, because there is no profit to be made.

It is important to clarify what sort of propulsion technologies you are basing your analysis on. I subdivide propulsion technologies into 3 categories:

  • Level 1: current non-reusable rockets; ~$10,000 per kg to LEO
  • Level 2: reusable rockets, space planes; ~$100 per kg to LEO
  • Level 3: space-elevators, orbital rings, fusion drive; <$1 per kg to LEO and beyond

With level 1 tech, delta-v is absolutely crucial for any activity in outer space because every kg of fuel in LEO is worth its weight in gold. With level 2 tech, delta-v becomes a much less important consideration; there is plenty of equipment whose value exceeds $100 per kg, and the cost can be further reduced with dedicated tanker spacecraft highly optimized for reusability. Fuel itself is cheap, after all. And finally, with level 3 tech, delta-v becomes utterly trivial; simply climbing higher on a space-elevator can eject you straight out of the solar system.
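
As a rough illustration of why per-kilogram launch cost dominates at level 1 (the delta-v budget and Isp below are illustrative assumptions, not figures from the thread):

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def propellant_per_kg_payload(delta_v_ms: float, isp_s: float) -> float:
    """Tsiolkovsky rocket equation: kg of propellant needed in LEO
    per kg of dry mass to achieve a given delta-v."""
    mass_ratio = math.exp(delta_v_ms / (isp_s * G0))
    return mass_ratio - 1

# Illustrative assumptions: ~6 km/s from LEO to the Martian surface
# (with aerobraking) and a methalox engine with Isp ~ 360 s.
prop = propellant_per_kg_payload(6000, 360)
for label, cost_per_kg in [("level 1", 10_000), ("level 2", 100)]:
    print(f"{label}: ~{prop:.1f} kg propellant per kg payload, "
          f"~${prop * cost_per_kg:,.0f} just to lift that propellant")
```

At roughly $10,000 per kg to LEO, every kilogram of propellant parked in orbit costs as much as the payload it pushes; at $100 per kg, the propellant bill becomes an afterthought.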

We have been stuck on level 1 for half a century now, and there has been no outer space activity beyond a few space probes here and there. If we stay on this level, there is no reason to expect more activity in the future, either. If we ever reach level 3, setting up shop anywhere in the solar system will be just as easy as on Earth. So level 2 is the only scenario worth discussing with regard to colonization vs industrialization, in my mind.

Currently, by far our best bet to reach level 2 is SpaceX, which was founded with the explicit goal of colonizing Mars. It is way ahead of the competition, both in terms of currently operational reusable rockets (having sucked up most of the global non-governmental launch market) and in pushing reusability tech further. It is in the process of becoming a space-based internet broadband provider, with revenue potential dwarfing the satellite launch market and no competition in sight. It is also a privately held company, intentionally so, so that profit incentives don't get in the way of its mission.

Their roadmap is to lower launch costs enough that private citizens, the government, or Elon Musk himself (most likely a combination of the above) could fund a Mars homesteading campaign; low enough that economic drivers are unnecessary. I would like to hear why this plan is unrealistic.

One major advantage Mars has over the moon (regardless of colonization or industrialization) is the availability of carbon. The only fuel that can be produced in situ on the moon is hydrogen, which is not ideal for reusability due to its low boiling point and hydrogen embrittlement. There is a good reason all next-gen reusable rockets use methane as fuel.

But far more importantly, you have not argued why industrializing the moon is a good idea in the first place. I wholeheartedly agree with the idea that operations on Mars will never turn a profit for Earth, but that hardly supports your point. Putting factories on the moon might make (marginally) smaller losses than putting factories on Mars, but so what; there is always the option of staying home and making no loss.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T19:41:21.016Z · score: 1 (1 votes) · LW · GW
Traffic between the moon and a space station, or space station construction site, is much cheaper than between the earth and those sites.

Other than delta-v, I don't see any reason to think that. However, to exploit even that advantage, you'd have to build the space station on the moon using local materials in bulk. This is at least as hard as colonizing the moon since space stations require lots of high-tech manufacturing to produce, whereas colonization just requires air, water, food and construction materials in bulk which are much lower-tech.

Helium-3 for a fuel and/or energy source

He-3 fusion is way harder (higher Coulomb barrier) than D-T fusion, which itself hasn't been cracked. The only advantage Helium-3 provides over Deuterium-Tritium is aneutronicity, which doesn't matter if you're just building a power plant. Aneutronicity is only important if you want to use direct thrust from the fusion products to propel your spacecraft, and at that sort of tech level it's better to mine He-3 from the gas giants, which have a far greater supply.

Depot for storing fuel, equipment, and supplies in bulk

Where do those fuel, equipment and supplies come from? If they come from Earth, there are no delta-v savings. If they come from the moon itself, the argument becomes circular.

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-03T02:41:32.785Z · score: 1 (1 votes) · LW · GW

Thanks, that cleared up a lot.

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T14:13:56.662Z · score: 1 (1 votes) · LW · GW

Would you agree that, given that the multiverse exists (verified by independent evidence), the WAP is sufficient to explain the fundamental parameters?

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T00:27:41.682Z · score: 4 (2 votes) · LW · GW
The idea of the fine-tuned universe is invalid.

Could you elaborate? What's the paradox that's being dissolved here? As far as I know SSA does not indicate a fine-tuned universe, just that our existence doesn't give us a clue about how likely life is to arise in any universe/planet.

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T18:33:20.319Z · score: 3 (2 votes) · LW · GW

What would inner alignment failures even look like? Overdosing on meth sure makes the dopamine system happy. Perhaps human values reside in the prefrontal cortex, and all of humanity is a catastrophic alignment failure of the dopamine system (except a small minority of drug addicts), on top of being a catastrophic alignment failure of natural selection.

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T18:19:04.639Z · score: 1 (1 votes) · LW · GW

Isn't evolution a better analogy for deep learning anyway? All natural selection does is gradient descent (hill climbing technically), with no capacity for lookahead. And we've known this one for 150 years!

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T00:14:03.545Z · score: 13 (6 votes) · LW · GW

What you're describing is humans being mesa-optimizers inside the natural selection algorithm. The phenomenon this post talks about is one level deeper.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-08-01T01:39:06.097Z · score: 2 (2 votes) · LW · GW

Fighting the Taliban also fulfills the purpose of funneling money to friends and supporters.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:25:27.920Z · score: 5 (3 votes) · LW · GW
One of the major problems that Western nations have run into in the past half century is that we're in wars where (a) we don't just want to kill everyone, and (b) there is no strong central control of the opposition (or at least none we want to preserve), so we're effectively forced into the last scenario above.

This argument is only supportive of your main point "command and control by far most important" insofar as future wars will also be exclusively asymmetric. That assumption, though, is problematic even today. The US isn't spending billions of dollars on stealth fighters and bombers to fight the Taliban.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:16:11.442Z · score: 1 (1 votes) · LW · GW

How can an AI that is 10 times as smart and innovative as Elon Musk not be godlike? xD

But seriously, if an AI is really capable of making such great headway in weapons technology, it is then surely capable of bootstrapping itself to superintelligence.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T21:07:09.067Z · score: 1 (1 votes) · LW · GW

In the limit of large swarms of cheap, small drones, the attacker always has an intrinsic advantage. The attacking drones are trying to hit large, relatively slow-moving targets while the defender is trying to "hit a bullet with another bullet". The only scalable countermeasure I can think of is directed energy weapons; you can't get faster or smaller than elementary particles. If a laser is fast and accurate enough to shoot down mosquitoes out of the air, it can probably shoot down drones, too.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:50:44.457Z · score: 1 (1 votes) · LW · GW

The US has gained a lot of experience in asymmetric warfare in the last few decades, but due to the Long Peace no one can be sure of which military technologies actually work well in the context of a symmetric war between major powers; none of it has really been validated. So the "lead" the US has over the rest is somewhat theoretical.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:45:12.088Z · score: 2 (2 votes) · LW · GW

If you buy into the Great Stagnation theory, then a 20-year lead today should matter less than it would have in 1900.

Comment by maximkazhenkov on What a 20-year-lead in military tech might look like · 2020-07-30T20:42:10.750Z · score: 1 (1 votes) · LW · GW

Drones, yes; Terminators, less so. It depends on whether AI technology can thread the needle of being powerful enough to navigate a very complex environment but not general enough to be a superintelligence. I kinda doubt that such a gap even exists.

Comment by maximkazhenkov on Are we in an AI overhang? · 2020-07-27T19:00:22.827Z · score: 4 (3 votes) · LW · GW

If you extrapolated those straight lines further, doesn't it mean that even small businesses will be able to afford training their own quadrillion-parameter models just a few years after Google?

Comment by maximkazhenkov on Are we in an AI overhang? · 2020-07-27T18:56:28.010Z · score: 3 (2 votes) · LW · GW

Is density even relevant when your computations can be run in parallel? I feel like price-performance will be the only relevant measure, even if that means slower clock cycles.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-22T10:32:09.439Z · score: 3 (2 votes) · LW · GW

You can listen to his thoughts on AGI in this video

I find that he has an exceptionally sharp intuition about why deep learning works, from the original AlexNet to Deep Double Descent. You can see him predicting the progress in NLP here

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-22T08:44:23.591Z · score: 1 (1 votes) · LW · GW
"Why isn't it an AGI?" here can be read as "why hasn't it done the things I'd expect from an AGI?" or "why doesn't it have the characteristics of general intelligence?", and there's a subtle shade of difference here that requires two different answers.
For the first, GPT-3 isn't capable of goal-driven behaviour.

Why would goal-driven behavior be necessary for passing a Turing test? It just needs to predict human behavior in a limited context, which was what GPT-3 was trained to do. It's not an RL setting.

and by saying that GPT-3 definitely isn't a general intelligence (for whatever reason), you're assuming what you set out to prove.

I would like to dispute that by drawing an analogy to the definition of fire before modern chemistry. We didn't know exactly what fire was, but it was a "you know it when you see it" kind of deal. It's not helpful to pre-commit to a certain benchmark, like we did with chess: at one point we were sure beating the world champion in chess would be a definitive sign of intelligence, but Deep Blue came and went and we now agree that chess AIs aren't general intelligence. I know this sounds like moving the goalposts, but then again, the point of contention here isn't whether OpenAI deserves some brownie points or not.

"Passing the Turing test with competent judges" is an evasion, not an answer to the question – a very sensible one, though.

It seems like you think I made that suggestion in bad faith, but I was being genuine with that idea. The "competent judges" part was so that the judges, you know, actually ask adversarial questions, which is the point of the test. Cases like Eugene Goostman should get filtered out. I would grant that the AI be allowed to train on a corpus of adversarial queries from past Turing tests (though I don't expect this to help), but the judges should also have access to this corpus so they can try to come up with questions orthogonal to it.

I think the point at which our intuitions depart is: I expect there to be a sharp distinction between general and narrow intelligence, and I expect the difference to resolve very unambiguously in any reasonably well designed test, which is why I don't care too much about precise benchmarks. Since you don't share this intuition, I can see why you feel so strongly about precisely defining these benchmarks.

I could offer some alternative ideas in an RL setting though:

  • An AI that solves Snake perfectly on any map (maps should be randomly generated and separated between training and test set), or
  • An AI that solves unseen Chronotron levels at test time within a reasonable amount of game time (say <10x human average) while being trained on a separate set of levels

I hope you find these tests fair and precise enough, or at least get a sense of what I'm trying to see in an agent with "reasoning ability"? To me these tasks demonstrate why reasoning is powerful and why we should care about it in the first place. Feel free to disagree though.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T20:08:54.442Z · score: 4 (3 votes) · LW · GW

Yeah, the terms are always a bit vague; as far as existence proofs for AGI go, there are already humans and evolution, so my definition of a harbinger would be something like "a prototype that clearly shows no more conceptual breakthroughs are needed for AGI".

I still think we're at least one breakthrough away from that point; however, that belief is dampened by the position of Ilya Sutskever, whose opinion I greatly respect. But either way, GPT-3 in particular just doesn't stand out to me from the rest of DL achievements over the years, from AlexNet to AlphaGo to OpenAI Five.

And yes, I believe there will be fast takeoff.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T19:17:59.780Z · score: 5 (4 votes) · LW · GW

I don't think GPT-3 is a harbinger. I'm not sure if there ever will be a harbinger (at least to the public); I'm leaning towards no. An AI system that passes the Turing test wouldn't be a harbinger; it would be the real deal.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T16:08:46.802Z · score: 2 (4 votes) · LW · GW
See, it does break down in that it thinks moving >5 degrees to the right is also bad. What's going on with the "car locks", or the "algorithm"? I agree that's weird. But the concept is still understood, and, AFAICT, is not "just associating" (in the way you mean it).

That's the exact opposite impression I got from this new segment. In what world is confusing "right" and "left" a demonstration of reasoning over mere association? How much more wrong could GPT-3 have gotten the answer? "Turning forward"? No, that wouldn't appear in the corpus. What's the concept that's being understood here?

And why wouldn't it be amazing for some (if not all) of its rolls to exhibit impressive-for-an-AI reasoning?

Because GPT-3 isn't using reasoning to arrive at those answers? Associating gravity with falling doesn't require reasoning; determining whether something would fall in a specific circumstance does. But that leaves only a small space of answers, so guessing right a few times and wrong at other times (like GPT-3 is doing) isn't evidence of reasoning. The reasoning doesn't have to do any work of locating the hypothesis, because you're accepting vague answers and frequent wrong answers.

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T15:50:31.537Z · score: 3 (3 votes) · LW · GW

I didn't mean to imply we should wait for AI to pass the Turing test before doing alignment work. Perhaps the disagreement comes down to you thinking "We should take GPT-3 as a fire alarm for AGI and must push forward AI alignment work", whereas I'm thinking "There is and will be no fire alarm, and we must push forward AI alignment work".

Comment by maximkazhenkov on To what extent is GPT-3 capable of reasoning? · 2020-07-21T14:20:38.259Z · score: 3 (3 votes) · LW · GW
So a good exercise becomes: what minimally-complex problem could you give to GPT-3 that would differentiate between pattern-matching and predicting?

Passing the Turing test with competent judges. If you feel like that's too harsh yet insist on GPT-3 being capable of reasoning, then ask yourself: what's still missing? It's capable of both pattern recognition and reasoning, so why isn't it an AGI yet?