Posts

AI Winter Is Coming - How to profit from it? 2020-12-05T20:23:51.309Z
Implications of the Doomsday Argument for x-risk reduction 2020-04-02T21:42:42.810Z
How to Write a News article on the Dangers of Artificial General Intelligence 2020-02-28T02:14:48.419Z
What will quantum computers be used for? 2020-01-01T19:33:16.838Z
Anti-counterfeiting Ink - an alternative way of combating oil theft? 2019-10-19T23:04:59.069Z
If you had to pick one thing you've read that changed the course of your life, what would it be? 2019-09-14T17:50:45.292Z
Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? 2019-08-22T09:07:07.533Z

Comments

Comment by maximkazhenkov on Matt Levine on "Fraud is no fun without friends." · 2021-01-20T00:20:53.828Z · LW · GW

And that might be great for society. We don't want people working a job primarily because it's fun and they like their coworkers. We want them working a job because they're providing valuable goods and services that meet pre-existing demand.

Who's "we" and who's "society"?

Comment by maximkazhenkov on AR Glasses: Much more than you wanted to know · 2021-01-16T06:49:04.511Z · LW · GW

I’d be very interested in hearing arguments why this actually wouldn’t be that big of a deal.

It won't be a big deal because smartphones were not a big deal. People still wake up, go to work, eat, sleep, and sit for hours in front of a screen - TV, smartphone, AR, who cares. No offense to Steve Jobs, but if the greatest technological achievement of your age is the popularization of the smartphone, "exciting" is about the last adjective I'd describe it with.

Speaking of Black Mirror, I find the show to be a pretty accurate representation of the current intellectual Zeitgeist (not the future it depicts; I mean the show itself) - pretentious, hollow, lame, desperate to signal profundity through social commentary.

Comment by maximkazhenkov on How long till Inverse AlphaFold? · 2020-12-22T15:21:55.851Z · LW · GW

By "bias" I didn't mean biases in the learned model, I meant "the class of proteins whose structures can be predicted by ML algorithms at all is biased towards biomolecules". What you're suggesting is still within the local search paradigm, which might not be sufficient for the protein folding problem in general, any more than it is sufficient for 3-SAT in general. No sampling is dense enough if large swaths of the problem space are discontinuous.

Comment by maximkazhenkov on Ideal Chess - drop chess perfected · 2020-12-19T11:29:15.004Z · LW · GW

Thank you for the response, I will definitely check out these variants. I'm trying to understand what sort of simple rules let good, deeply strategic games emerge out of them, and how inventors of such games come up with these ideas.

Comment by maximkazhenkov on The next AI winter will be due to energy costs · 2020-12-19T08:09:21.233Z · LW · GW

6 orders of magnitude from FLOPs to bit erasure conversion

Does it take a million bit erasures to conduct a single floating point operation? That seems a bit excessive to me.
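As a sanity check on that conversion (a minimal sketch; room temperature is assumed, and the post's 10^6 factor is taken at face value rather than endorsed), the Landauer limit sets the theoretical minimum energy per irreversible bit erasure at kT·ln 2:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # room temperature, K

# Landauer limit: minimum energy dissipated per irreversible bit erasure
landauer = k_B * T * math.log(2)
print(f"energy per bit erasure: {landauer:.2e} J")   # ~2.87e-21 J

# Taking the post's 6-orders-of-magnitude conversion at face value:
# 10^6 bit erasures per floating point operation
energy_per_flop = 1e6 * landauer
print(f"implied minimum energy per FLOP: {energy_per_flop:.2e} J")  # ~2.87e-15 J
```

At ~2.9e-15 J per FLOP, that conversion would cap hardware at roughly 3.5e14 FLOP/s per watt; whether a floating point operation really requires a million erasures is exactly the question raised above.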

Comment by maximkazhenkov on Ideal Chess - drop chess perfected · 2020-12-19T06:17:29.380Z · LW · GW

Excellent post!

I would greatly appreciate a follow-up, perhaps a compilation of game variants for other popular games such as Go or Poker?

Comment by maximkazhenkov on How long till Inverse AlphaFold? · 2020-12-19T04:10:39.426Z · LW · GW

But is AlphaFold2(x) a (mostly) convex function? More importantly, is the real structure(x) convex?

I can see a potential bias here, in that AlphaFold and inverse AlphaFold might work well for biomolecules because evolution is also a kind of local search, so if it can find a certain solution, AlphaFold will find it, too. But both processes will be blind to a vast design space that might contain extremely useful designs.

Then again, we are biology, so maybe we only care about biomolecules and adjacent synthetic molecules anyway.

Comment by maximkazhenkov on The next AI winter will be due to energy costs · 2020-12-17T10:55:22.297Z · LW · GW

Wait a minute - does this mean that microprocessors have already far surpassed the switching energy efficiency of the human brain? That came as a surprise to me.

Comment by maximkazhenkov on AI Winter Is Coming - How to profit from it? · 2020-12-06T18:07:03.982Z · LW · GW

Hmm, that's betting on the market overreacting to an AI winter in addition to betting on the AI winter itself occurring. I guess it's only applicable to scenarios where there's a sudden crash instead of a slow, steady decline in investment, but still, thank you for the idea!

Comment by maximkazhenkov on AI Winter Is Coming - How to profit from it? · 2020-12-06T17:59:17.871Z · LW · GW

These are some valuable ideas, thanks! Do you also see any opportunity for long positions? I.e. are there companies/industries that will actually benefit from AI failing?

Comment by maximkazhenkov on Notes on Humility · 2020-11-30T19:50:07.316Z · LW · GW

Is this what humble people tell themselves?

Comment by maximkazhenkov on Pain is the unit of Effort · 2020-11-25T22:01:47.071Z · LW · GW

Note 3: Just because you achieved your goal through hard work and dedication doesn't mean it was worth it in the end. This is what they don't tell you in self-help books.

Comment by maximkazhenkov on Are we in an AI overhang? · 2020-11-23T18:45:17.035Z · LW · GW

So the God of Straight Lines dissolves into a puff of smoke at just the right time to bring about AI doom? Seems awfully convenient.

Comment by maximkazhenkov on Nuclear war is unlikely to cause human extinction · 2020-11-20T05:42:15.865Z · LW · GW

Not even the right order of magnitude. Yellowstone magma chamber is 5km beneath the surface. If you had a nuke large enough to set off a supervolcano, you wouldn't need to set off a supervolcano. Not to mention Yellowstone isn't ready to blow anyway.

Comment by maximkazhenkov on Nuclear war is unlikely to cause human extinction · 2020-11-08T07:19:49.790Z · LW · GW

There is no scientific basis for 2 and 3

Comment by maximkazhenkov on Nuclear war is unlikely to cause human extinction · 2020-11-08T07:11:41.768Z · LW · GW

My mainline expectation is that in a nuclear war scenario, chemical, biological, and conventional weapon effects would be dwarfed by the effects from nuclear weapons.

I would classify biological weapons as more dangerous than nuclear ones, but that's a different topic. Besides, biological and nuclear warfare don't mix well - without commercial air travel and trade, biological agents don't spread well.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-10T15:40:19.790Z · LW · GW

There is no perfect match with Bostrom's vulnerabilities, because the book assumed there was a relatively safe strategy: hide. If no one knows you are there, no one will attack you, because although the "nukes" are cheap, they would be destroying potentially useful resources.

Not relevant, if you succeed in hiding you simply fall off the vulnerability landscape. We only need to consider what happens when you've been exposed. Also, whose resources? It's a cosmic commons, so who cares if it gets destroyed.

The point of the Dark Forest hypothesis was precisely that in a world with such asymmetric weapons, coordination is not necessary. If you naively make yourself visible to a thousand potential enemies, it is statistically almost certain that someone will pull the trigger, for whatever reason.

That's just Type-1 vulnerable world. No need for the contrived argumentation the author gave.

There is a selfish reason to pull the trigger; any alien civilization is a potential extinction threat.

Not really, cleaning up extinction threats is a public good that generally tends to fall prey to Tragedy of the Commons. Even if you made the numbers work out somehow - which is very difficult and requires certain conditions that the author has explicitly refuted (like the impossibility of colonizing other stars or of sending out spam messages) - it would still not be an example of Moloch. It would be an example of pan-galactic coordination, albeit a perverted one.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-10T15:18:36.136Z · LW · GW

Very much disagree. My sense is that the book series is pretty meagre on presenting "thoughtful hard science" as well as game theory and human sociology.

To pick the most obvious example - the title of the trilogy* - the three-body problem is misrepresented in the books as "it's hard to find the general analytic solution" instead of "the end state is extremely sensitive to changes in the initial conditions", and the characters in the book (both humans and Trisolarians) spend eons trying to solve the problem mathematically.

But even if an exact solution were found - one does exist for some chaotic systems, like the logistic map - it would have been useless, since the initial conditions cannot be known perfectly. This isn't a minor nitpick like the myriad other scientific problems with the Trisolarian system that can be more easily forgiven as artistic license; this is missing what chaotic systems are about. Why even invoke the three-body problem other than as attire?

*not technically the title of the book series, but frequently referred to as such

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-10-02T12:08:45.865Z · LW · GW

Where exactly do you see Moloch in the books? It's quite the opposite if anything; the mature civilizations of the universe have coordinated around cleaning up the cosmos of nascent civilizations, somehow, without a clear coordination mechanism. Or perhaps it's a Type-1 vulnerable world, but it doesn't fit well with the author's argumentation. I'm not sure, and I'm not sure the author knows either.

I'm still a little puzzled by all the praise for the deep game theoretic insights the book series supposedly contains, though. Maybe game theory as attire?

Comment by maximkazhenkov on Covid 10/1: The Long Haul · 2020-10-02T11:47:47.911Z · LW · GW

You're also exposed to all sorts of risks if you're "below some wealth threshold where they cannot act as they would reasonably like to because their alternative is homelessness, malnourishment, etc." even before Corona came around. The situation hasn't changed all that much. 

But, as Elon Musk famously said: "If you don't make stuff, there is no stuff".

Comment by maximkazhenkov on The Goddess of Everything Else · 2020-10-01T12:28:51.260Z · LW · GW

Spreading across the stars without number sounds more like a "KILL CONSUME MULTIPLY CONQUER" thing than it sounds like an "Everything Else" thing. I'm missing something of the point here.

ETA: Is the point that over time Man evolved to be what he is today, we have a conception of right and wrong, and we're the first link in the chain that actually cares about making sure our morals propagate forward as we evolve? So now the force of evolution has been co-opted into spreading human morality?

No. I recommend reading Meditations on Moloch first, then everything becomes clear.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-30T18:22:44.093Z · LW · GW

That's a pretty extreme over-dramatization. Corona isn't even 1% as bad.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-30T18:17:46.075Z · LW · GW

The Mote in God's Eye is a pretty good example of social science fiction in addition to being a great science fiction novel in general.

Comment by maximkazhenkov on What hard science fiction stories also got the social sciences right? · 2020-09-29T11:22:16.904Z · LW · GW

If the Coronavirus had a 30% fatality rate people would care a lot about not getting infected in the real world, too.

Comment by maximkazhenkov on About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic · 2020-09-27T22:41:42.373Z · LW · GW

You mean the Nash equilibrium strategy? Rock-Paper-Scissors is a zero-sum game, so Pareto optimality is a trivial notion here.
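For standard Rock-Paper-Scissors (the local variant's exact payoffs aren't specified, so this sketch uses the textbook game), a quick check confirms that the uniform mixed strategy is a Nash equilibrium: every pure response earns the same expected payoff, so no player can profit by deviating:

```python
# Row player's payoffs in standard Rock-Paper-Scissors (zero-sum):
# entry [i][j] is the row player's payoff when row plays i, column plays j.
# Order: 0 = rock, 1 = paper, 2 = scissors.
payoff = [
    [0, -1, 1],   # rock loses to paper, beats scissors
    [1, 0, -1],   # paper beats rock, loses to scissors
    [-1, 1, 0],   # scissors loses to rock, beats paper
]

uniform = [1/3, 1/3, 1/3]

# Expected payoff of each pure strategy against the uniform mixed strategy.
expected = [
    sum(p * payoff[i][j] for j, p in enumerate(uniform))
    for i in range(3)
]

print(expected)  # every pure response yields 0, so uniform is a best reply
```

Because the game is zero-sum, this equilibrium value of 0 for both players is also the only outcome on the table; nothing is left for Pareto improvement.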

Comment by maximkazhenkov on About a local variation of Rock-Paper-Scissors and how it self-negated its own problematic dynamic · 2020-09-27T22:39:02.793Z · LW · GW

Regardless of what the new player does, there is no reason to ever play scissors. I don't see any interesting "4-choice dynamic" here. Perhaps you should pick a different example with multiple Nash equilibria.

Comment by maximkazhenkov on Needed: AI infohazard policy · 2020-09-22T01:31:25.291Z · LW · GW

Another advantage AI secrecy has over nuclear secrecy is that there's a lot of noise and hype these days around ML both within and outside the community, making hiding in plain sight much easier.

Comment by maximkazhenkov on Needed: AI infohazard policy · 2020-09-22T01:27:47.194Z · LW · GW
In the midgame, it is unlikely for any given group to make it all the way to safe AGI by itself. Therefore, safe AGI is a broad collective effort and we should expect most results to be published. In the endgame, it might become likely for a given group to make it all the way to safe AGI. In this case, incentives for secrecy become stronger.

I'm not sure what "safe" means in this context, but it seems to me that publishing safe AGI is not a threat and it's the unsafe but potentially very capable AGI research we should worry about?

And the statement "In the midgame, it is unlikely for any given group to make it all the way to s̶a̶f̶e̶ AGI by itself" seems a lot more dubious given recent developments with GPT-3, at least according to most of the LessWrong community.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-22T00:22:52.016Z · LW · GW
Industrializing space is desirable because industrializing Earth has had a number of negative side effects on the biosphere, so moving production outside the biosphere would be a positive development. My argument is that the option of staying home is clearly economically preferable for now, and will be unless we see major cost reductions in space technology.

I thought your argument was that we should industrialize space because it's economically viable?

Putting that aside, environmentalism is just about the last reason for space activities. Space travel has had a negligible impact on the environment thus far only because there has been so little space travel. But on a per-kilogram payload basis, even assuming the cleanest methalox/hydrolox fuel composition produced purely from solar power, the NO2 from hot exhaust plumes and the ozone-eating free radicals from reentry heat alone are enough to make any environmentalist screech in horror. You'd have to go to the far end of level 3 tech to begin making this argument, and even then it still isn't an economic incentive. You can't seriously dismiss space tourism as a driver for space travel and then propose environmentalism as an alternative.

Whether SpaceX and other launch vehicle organizations can reach the Level 2 threshold you describe remains to be seen, and LVs are only part of the pricetag. Materials, equipment, and labor represent a large segment of space mission cost, and not until we can also drive those down by similar degrees do the economics of colonization start making sense.

Space is hard, sure, but how does that help your point exactly? Colonization doesn't have to (and won't) make economic sense. Industrialization does.

Note, too, that ΔV is non-trivial, even when we start getting to high specific-impulse technologies.

Not really. This isn't relevant for the Moon vs Mars debate, but even for the outer planets I would argue:

  • Short travel time isn't necessary for colonizing or industrializing outer planets
  • Nuclear fusion can realistically go up to an Isp of 500,000 s, dwarfing any reasonable requirement for travel inside the solar system

Also, all the analysis with hyperbolic orbits is kind of unnecessary, as the solar gravity well becomes trivial for short transfers. You could just as well treat the target planets as fixed points and get the Δv requirement from distance divided by desired travel time (×2 for deceleration).
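As a back-of-envelope sketch of that approximation (the Earth-Jupiter distance and one-year travel time below are purely illustrative numbers), combined with the rocket equation to show why a 500,000 s Isp drive would make such transfers trivial:

```python
import math

AU = 1.496e11            # astronomical unit, m
g0 = 9.80665             # standard gravity, m/s^2

# Crude approximation from the comment: ignore gravity, treat the target
# as a fixed point; accelerate to cruise speed, then decelerate (x2).
distance = 4.2 * AU      # illustrative Earth-Jupiter distance near opposition
travel_time = 3.156e7    # one year, in seconds

delta_v = 2 * distance / travel_time
print(f"required delta-v: {delta_v/1000:.0f} km/s")   # ~40 km/s

# Tsiolkovsky rocket equation: mass ratio needed for that delta-v
# with a hypothetical fusion drive at Isp = 500,000 s.
isp = 500_000
v_exhaust = isp * g0
mass_ratio = math.exp(delta_v / v_exhaust)
print(f"propellant mass ratio: {mass_ratio:.3f}")     # ~1.008
```

Even a brute-force 40 km/s transfer costs under 1% of the ship's mass in propellant at that exhaust velocity, which is the sense in which the solar gravity well becomes trivial.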

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-21T23:34:02.200Z · LW · GW

Government: Ask Kennedy

Private sector: Ask Musk

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-18T05:37:24.796Z · LW · GW

I'm still confused about your critique, so let me ask you directly: In the scenario outlined by the OP, do you expect humans to eventually evolve to stop feeling pain from electrical shocks?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-17T04:00:25.886Z · LW · GW

Evolution can't dictate what's harmful and what's not; bigger peacock tails can be sexually selected for until they become too costly for survival, and an equilibrium sets in. In our scenario, since pain-inducing stimuli are generally bad for survival, there is no selection pressure to increase the pain threshold for electrical shocks after a certain equilibrium point. Because we start out with a nervous system that associates electrical shocks with pain, this pain becomes a pessimistic error after the equilibrium point and never gets fixed, i.e. humans still suffer under electrical shocks, just not so badly that they'd rather kill themselves.

Suffering is not rare in nature because actually harmful things are common and suffering is an adequate response to them.

Why then is it possible to suffer pain worse than death? Why do people and animals suffer just as intensely beyond their reproductive age?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-17T00:53:05.117Z · LW · GW

Yes, that's what pessimistic errors are about. I'm not sure what exactly you're critiquing though?

Comment by maximkazhenkov on The Axiological Treadmill · 2020-09-16T00:50:39.902Z · LW · GW
The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival. But this is missing the bigger picture.

No, it isn't. What do I care which values evolution originally intended to align us with? What do I care which direction dysgenic pressure will push our values in the future? Those aren't my values, and that's all I need to know.

After all, if you forget to shock yourself, or choose not to, then you are immediately killed. So the people in this country will slowly evolve reward and motivational systems such that, from the inside, it feels like they want to shock themselves, in the same way (though maybe not to the same degree) that they want to eat.

No, there is no selection pressure to shock yourself more than the required amount; anything beyond that is still detrimental to your reproductive fitness. Once we've evolved to barely tolerate the pain of electric shocks so as to not kill ourselves, the selection towards more pain tolerance stops, and people will still suffer a great deal because there is no incentive for evolution to fix pessimistic errors. You could perhaps engineer scenarios where humans would genuinely evolve to like a dystopia, but it certainly doesn't apply to most cases; otherwise suffering would already be a rare occurrence in nature.

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T10:35:28.656Z · LW · GW

Well this requirement doesn't appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures that would be necessary for such a global social system. It would have to be totalitarian anyway (though not necessarily centralized).

It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn't turn out to be a thing. For me that's the most intriguing aspect of the FAI problem; there are plenty of existential risks to go around but FAI as an existential opportunity is unique.

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T07:22:53.684Z · LW · GW
  • Maybe one-shot Prisoner's Dilemma is rare and Moloch doesn't turn out to be a big issue after all
  • On the other hand, perhaps the FAI solution is just sweeping all the hard problems under the AI-alignment rug and isn't any more viable than engineering a global social system that is stable over millions of years (possibly using human genetic engineering)

Comment by maximkazhenkov on The Case for Human Genetic Engineering · 2020-09-15T07:13:55.977Z · LW · GW

That's just the label for the process of how eukaryotes came about and makes no statement about its likelihood, or am I missing something?

Comment by maximkazhenkov on Are there non-AI projects focused on defeating Moloch globally? · 2020-09-15T06:44:07.567Z · LW · GW
Singleton solutions -- there will be no coordination problems if everything is ruled by one royal dynasty / one political party / one recursively self-improving artificial intelligence.

Royal dynasties and political parties are not Singletons by any stretch of the imagination. Infighting is Moloch. But even if we assumed an immortal benevolent human dictator, a dictator only exercises power through keys to power and still has to constantly fight off competition for that power. Stalin didn't start the Great Purge for shits and giggles; rather, it is a pattern that keeps repeating with rulers throughout history.

The hope with artificial superintelligence is that, due to the wide design space of possible AIs, we can perhaps pick one that is sub-agent stable and free of mesa-optimization, and also more powerful than all other agents in the universe combined by a huge margin. If no AI can satisfy these conditions, we are just as doomed.

Primitivism solutions -- all problems will be simple if we make our lifestyle simple.

That's not defeating Moloch, that's surrendering completely and unconditionally to Moloch in its original form of natural selection.

Comment by maximkazhenkov on If Starship works, how much would it cost to create a system of rotable space mirrors that reduces temperatures on earth by 1° C? · 2020-09-14T21:47:42.622Z · LW · GW

Relevant:

Feasibility of cooling the Earth with a cloud of small spacecraft near the inner Lagrange point

Comment by maximkazhenkov on [deleted post] 2020-09-10T17:56:55.749Z

Reported for GPT-spamming

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-10T04:18:38.739Z · LW · GW
But are there no risks that could wipe out humanity on Earth that wouldn't also kill a Mars colony? A comet impacting the Earth might be at the right scale for that. Or maybe a runaway greenhouse effect triggered by our carbon emissions.

I have thought about both scenarios and, no, I don't think either is plausible. I find natural x-risks not worth defending against in general due to their unlikelihood and lack of severity. If a planet allows complex but non-technological life to exist for hundreds of millions of years, it has nothing to throw at us in the next few hundred years.

Regarding meteor impact specifically, I think a comet would have to be significantly bigger than the one that caused the Chicxulub crater and failed to wipe out the dinosaurs. Birds are not close cousins of dinosaurs; they are direct descendants. And had that meteor missed the Earth, dinosaurs would likely have evolved into something that looks very different from what walked the Earth 65 million years ago, just like how we look very different from early mammals.

We, like the dinosaurs, are spread all over the Earth across every climate zone. Unlike the dinosaurs, we have technology at our disposal from stone tools to computers. Even the ruins of our civilization will provide many useful tools to ensure the survival of at least the tiniest fraction of humanity. I believe we are far more resilient than dinosaurs.

Since the distribution of meteor sizes follows a power law, it's unlikely for Earth to encounter a comet/asteroid large enough to wipe out humanity outright until the end of the biosphere's lifespan, let alone the next few centuries.

But if we were to hedge against such an impact, the most cost-effective way would be to create large underground bunkers with infrastructure and industry to keep a small isolated civilization running indefinitely. If we can build a self-sufficient colony on Mars, we sure as hell could do it on Earth.

Regarding a runaway greenhouse effect, we have geological records testifying to CO2 concentrations above 1000 ppm in the Cretaceous period, which didn't cause a runaway greenhouse, and I expect climate catastrophes to limit our ability to pump more CO2 into the atmosphere well before then through regional economic collapse. Since it's a gradual process, there is also time for negative feedback like plant growth to kick in, and for drastic geoengineering efforts such as deliberately setting off a nuclear winter.

My favorite example of an x-risk-for-Earth-but-not-Mars at first glance is a microscopic black hole swallowing the Earth. Since a black hole with Earth's mass would follow the same orbit, you'd think it won't have any effect on the rest of the solar system. Unfortunately, 1) there is no physical grounding for the thesis that such a black hole would be stable, and 2) the energy released in such an event would be akin to setting off a supernova inside the solar system, nuking everything from here to Pluto.

Finally, I'm not sure what you mean by "hedging against the collapse of civilization". A Mars colony doesn't stop civilization from collapsing on Earth. It would help avoid a delay in technological progress, but in the long run a delay of a few centuries is of no particular importance.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T22:30:50.920Z · LW · GW
There’s also disagreement about the efficacy of using Luna as a refueling stop, so to speak, en route to the Red Planet. From an orbital mechanics standpoint, it’s not a slam-dunk idea, but the argument in practice depends heavily on the specific logistics. In-situ fuel production might just make such a configuration worth it.

I think it's pretty much a slam dunk that refueling on the moon is a bad idea. Adding lots of complexity (thus failure points) and the cost of establishing the necessary infrastructure, for what can be accomplished by a few refueling trips by earthbound tankers, seems unnecessary - especially considering it's not even the right fuel. And if you're talking about expendable rockets, well, Robert Zubrin has done detailed analysis on why refueling on the moon is utterly counterproductive and Mars Direct is better. delta-v ≠ money saved.

While Mars is obviously the more attractive target for colonization, we are a very long way from building colonies on other celestial bodies, no matter how good of an idea it is.

Very much disagree on space colonies as hedge against human extinction. I could write a more detailed critique, but the bottom line is there is no x-risk severe enough to wipe out all (not merely 99.999%) humans on Earth but at the same time not severe enough to also wipe out all moon/Mars colonies.

The reason is very simple: space colonization is an unspeakably expensive proposition.

Not necessarily. A Senate-run space program is definitely an unspeakably expensive proposition, though.

I have yet to think of an economic need which a self-sustaining population on Mars would fulfill, that innovative strategies could not fulfill on Earth. Farming food on Mars? We can do hydroponics here. Running out of room to house people? We’re nowhere near that kind of population density. New legal environments to test out social engineering concepts? Seasteads and charter cities are way safer and less expensive. Climate change? Just tax carbon and build nuclear power plants, sheesh.

Agreed, except the part about seasteading. Staying home is even safer and less expensive. Put in a less tongue-in-cheek way: the difficulty of reaching Mars is why a Mars colony has a chance to become an independent civilization in the first place. Sending supplies to Mars is so difficult that the colonists would be better off building up their own supply chains in the long term for anything but the most value-dense equipment like microprocessors. The same isn't true for a seastead; sure, you could in theory build your own economy, but realistically you'll just end up importing everything because it's easy, become heavily reliant on the outside world, and be independent in name only. You're also within reach of any tax-collecting naval power of the world.

No one will front the money to build Mars colonies until there’s an economic incentive to do so. I see no such economic incentive. I would love to be wrong about this, because Mars is the best colonization target by far. But I don’t think I am.

Depends on what amount you're talking about. If it's <$100 billion, mere prestige would be enough incentive.

As the people of Earth demand an increasingly high standard of living and simultaneously a cleaner environment, I suspect that this may prove to be the ultimate driver of off-world industrialization. Again, though, speculation.

Very far-fetched argument. To relocate the vast amount of industry required to make a significant positive impact on the environment, you'd need to lower launch costs close to maritime shipping costs today. And at that point, supplying off-world colonies would be just as easy.

Critically, space industrialization is different from space colonization. Developing an off-world economy is a pre-requisite for seeing a large, permanent population above the atmosphere.

A dubious conclusion. Do you propose relocating entire supply chains off-world, or just small bits? If it's the former, it's no easier than founding a self-sufficient colony. If it's the latter, it's not worth it due to exorbitant transportation costs back and forth from Earth.

Governments may choose to pay for scientific missions to other planets; they will not front the costs of developing entire planets quite literally from the ground up. Whatever outputs space agencies may build, they will not be colonies.

colony ≠ terraforming

People won’t live there, the way that human populations have whenever establishing themselves in a new locality. There won’t be families and new businesses and the like, not for a long time.
Instead, we’re probably going to see many largely-automated operations, with minimal and possibly intermittent human presence.

I think you're seriously overestimating the capability of robots. Compare what the Apollo astronauts were able to do on the moon and what Mars rovers have done.

As we push towards human settlement in space, our focus should therefore be the development of new industries and new technologies to enable and motivate working above the atmosphere.

This sounds like a call to action, but if human settlement in space were profitable, it would happen anyway? Also, who's "we"?

One day, our species will span three worlds. That day remains very far away. Rather than fixate on terraforming dreams, we should chart a course carried by the currents of economic necessity. With the correct regulatory environment and technological investments, we can begin building sustainable off-world industries in a realistic timescale. Such industries will carry us to the planets in the pursuit of profit—a far more reliable motivator than any humanitarian spirit from politicians.
That, I suspect, is what the future of space travel is going to come down to. Do we pursue an incremental strategy that eventually carries us to the ends of the Solar System, or do we wallow on this one planet, fantasizing of an amazing future no one has any incentive to hand us? Are we going to fixate on self-sustained colonies and settle for nothing less, or shall we go to Luna first, but not to live there?

Again, colony ≠ terraforming. And again, I'm curious to hear your thoughts on why Mars Direct/Elon Musk's plan won't pan out. In any case, whatever your vision for future human space exploration is, the only thing that matters right now is lowering launch costs.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T21:12:02.345Z · LW · GW

I would like to take the complete opposite position and argue that self-sufficient space colonies will happen long before Earth reaps any benefits from industrial activities in outer space, if ever. The reason I believe this is that outer space activities won't be economically driven for a long time, because there is no profit to be made.

It is important to clarify what sort of propulsion technologies you are basing your analysis on. I subdivide propulsion technologies into 3 categories:

  • Level 1: current non-reusable rockets; ~$10,000 per kg to LEO
  • Level 2: reusable rockets, space planes; ~$100 per kg to LEO
  • Level 3: space-elevators, orbital rings, fusion drive; <$1 per kg to LEO and beyond

With level 1 tech, delta-v is absolutely crucial for any activity in outer space because every kg of fuel in LEO is worth its weight in gold. With level 2 tech, delta-v becomes a much less important consideration; there is plenty of equipment whose value exceeds $100 per kg, and the cost can be further reduced with dedicated tanker spacecraft highly optimized for re-usability. Fuel itself is cheap, after all. And finally, with level 3 tech, delta-v becomes utterly trivial; simply climbing higher on a space-elevator can eject you straight out of the solar system.
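To make the "fuel itself is cheap" point concrete, here is a back-of-the-envelope sketch using the Tsiolkovsky rocket equation. The specific numbers (~9.4 km/s delta-v to LEO, methalox exhaust velocity of ~3.6 km/s, propellant at roughly $0.2/kg) are my own illustrative assumptions, not figures from the comment:

```python
import math

def propellant_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: kg of propellant burned
    per kg of final (dry) mass to achieve the given delta_v."""
    return math.exp(delta_v / exhaust_velocity) - 1.0

# Illustrative assumptions: ~9.4 km/s to LEO, methalox Isp ~3.6 km/s
ratio = propellant_ratio(9400, 3600)    # ≈ 12.6 kg propellant per kg delivered
fuel_cost_per_kg_payload = ratio * 0.2  # methalox at roughly $0.2/kg

print(f"propellant per kg to LEO: {ratio:.1f} kg")
print(f"fuel cost per kg to LEO:  ${fuel_cost_per_kg_payload:.2f}")
```

Even a dozen kilograms of propellant per delivered kilogram costs only a few dollars, which is why delta-v stops being the binding constraint once the hardware itself is reusable: at level 2, the launch price is dominated by vehicles and operations, not fuel.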

We have been stuck on level 1 for half a century now, and there have been no outer space activities beyond a few space probes here and there. If we stay on this level, there is no reason to expect more activity in the future, either. If we ever reach level 3, setting up shop anywhere in the solar system will be just as easy as on Earth. So level 2 is the only scenario worth discussing with regard to colonization vs. industrialization, in my mind.

Currently, our best bet to reach level 2 by far is SpaceX, which was founded with the explicit goal of colonizing Mars. It is way ahead of the competition, both in terms of currently operational re-usable rockets (having sucked up most of the global non-governmental launch market) and in pushing re-usability tech further. It is in the process of becoming a space-based internet broadband provider, with revenue potential dwarfing the satellite launch market and no competition in sight. It is also privately held, intentionally so, so that profit incentives don't get in the way of its mission.

Their road map is to lower launch costs enough that private citizens, the government, or Elon Musk himself (most likely some combination of the above) could fund a Mars homesteading campaign; low enough that economic drivers become unnecessary. I would like to hear why this plan is unrealistic.

One major advantage Mars has over the moon (regardless of colonization or industrialization) is the availability of carbon. The only fuel that can be produced in-situ on the moon is hydrogen, which is not ideal for re-usability due to its low boiling point and hydrogen embrittlement. There is a good reason all next-gen re-usable rockets use methane as fuel.

But far more importantly, you have not argued why industrializing the moon is a good idea in the first place. I wholeheartedly agree that operations on Mars will never turn a profit for Earth, but that hardly supports your point. Putting factories on the moon might incur (marginally) smaller losses than putting factories on Mars, but so what; there is always the option to stay home and incur no losses at all.

Comment by maximkazhenkov on Luna First, But Not To Live There · 2020-09-09T19:41:21.016Z · LW · GW
Traffic between the moon and a space station, or space station construction site, is much cheaper than between the earth and those sites.

Other than delta-v, I don't see any reason to think that. However, to exploit even that advantage, you'd have to build the space station on the moon using local materials in bulk. That is at least as hard as colonizing the moon, since space stations require lots of high-tech manufacturing to produce, whereas colonization just requires air, water, food and construction materials in bulk, which are much lower-tech.

Helium-3 for a fuel and/or energy source

He-3 fusion is way harder (higher Coulomb barrier) than D-T fusion, which itself hasn't been cracked. The only advantage helium-3 offers over deuterium-tritium is aneutronicity, which doesn't matter if you're just building a power plant. Aneutronicity only matters if you want to use direct thrust from the fusion products to propel your spacecraft, and at that tech level it's better to mine He-3 from the gas giants, which have a far larger supply.

Depot for storing fuel, equipment, and supplies in bulk

Where do those fuel, equipment and supplies come from? If they come from Earth, there are no delta-v savings. If they come from the moon itself, the argument becomes circular.

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-03T02:41:32.785Z · LW · GW

Thanks, that cleared up a lot.

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T14:13:56.662Z · LW · GW

Would you agree that, given that the multiverse exists (verified by independent evidence), the WAP is sufficient to explain the fundamental parameters?

Comment by maximkazhenkov on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T00:27:41.682Z · LW · GW
The idea of the fine-tuned universe is invalid.

Could you elaborate? What's the paradox that's being dissolved here? As far as I know SSA does not indicate a fine-tuned universe, just that our existence doesn't give us a clue about how likely life is to arise in any universe/planet.

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T18:33:20.319Z · LW · GW

What would inner alignment failures even look like? Overdosing on meth sure makes the dopamine system happy. Perhaps human values reside in the prefrontal cortex, and all of humanity is a catastrophic alignment failure of the dopamine system (except for a small minority of drug addicts), on top of being a catastrophic alignment failure of natural selection.

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T18:19:04.639Z · LW · GW

Isn't evolution a better analogy for deep learning anyway? All natural selection does is gradient descent (hill climbing technically), with no capacity for lookahead. And we've known this one for 150 years!
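The no-lookahead point can be illustrated with a minimal greedy hill-climbing loop (purely an illustrative sketch; the fitness landscape and parameters are made up). Started in the basin of a lesser peak, it settles there and never crosses the valley to the global optimum, the same failure mode that local search shares with natural selection:

```python
import random

def hill_climb(fitness, x0, step=0.1, iters=1000, seed=0):
    """Greedy hill climbing: accept a random perturbation only if it
    immediately improves fitness -- no lookahead, no planning."""
    rng = random.Random(seed)
    x, best = x0, fitness(x0)
    for _ in range(iters):
        candidate = x + rng.uniform(-step, step)
        f = fitness(candidate)
        if f > best:  # purely local, greedy acceptance
            x, best = candidate, f
    return x, best

# A two-peak landscape: global optimum near x=3 (fitness 1),
# a lesser local peak at x=0 (fitness 0), with a valley in between.
fitness = lambda x: -min(x ** 2, (x - 3) ** 2 - 1)

# Started at x=-0.5, the climber gets trapped on the lesser peak.
x, best = hill_climb(fitness, x0=-0.5)
```

After the run, `x` sits near 0 rather than near the global optimum at 3; every single step toward the higher peak would first pass through lower fitness, so a greedy rule never takes it.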

Comment by maximkazhenkov on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T00:14:03.545Z · LW · GW

What you're describing is humans being mesa-optimizers inside the natural selection algorithm. The phenomenon this post talks about is one level deeper.