Posts

"Go west, young man!" - Preferences in (imperfect) maps 2020-07-31T07:50:59.520Z · score: 18 (6 votes)
Learning Values in Practice 2020-07-20T18:38:50.438Z · score: 24 (6 votes)
The Goldbach conjecture is probably correct; so was Fermat's last theorem 2020-07-14T19:30:14.806Z · score: 68 (26 votes)
Why is the impact penalty time-inconsistent? 2020-07-09T17:26:06.893Z · score: 16 (5 votes)
Dynamic inconsistency of the inaction and initial state baseline 2020-07-07T12:02:29.338Z · score: 30 (7 votes)
Models, myths, dreams, and Cheshire cat grins 2020-06-24T10:50:57.683Z · score: 21 (8 votes)
Results of $1,000 Oracle contest! 2020-06-17T17:44:44.566Z · score: 53 (19 votes)
Comparing reward learning/reward tampering formalisms 2020-05-21T12:03:54.968Z · score: 9 (1 votes)
Probabilities, weights, sums: pretty much the same for reward functions 2020-05-20T15:19:53.265Z · score: 11 (2 votes)
Learning and manipulating learning 2020-05-19T13:02:41.838Z · score: 38 (11 votes)
Reward functions and updating assumptions can hide a multitude of sins 2020-05-18T15:18:07.871Z · score: 16 (5 votes)
How should AIs update a prior over human preferences? 2020-05-15T13:14:30.805Z · score: 17 (5 votes)
Distinguishing logistic curves 2020-05-15T11:38:04.516Z · score: 23 (9 votes)
Distinguishing logistic curves: visual 2020-05-15T10:33:08.901Z · score: 9 (1 votes)
Kurzweil's predictions' individual scores 2020-05-07T17:10:36.637Z · score: 17 (8 votes)
Assessing Kurzweil predictions about 2019: the results 2020-05-06T13:36:18.788Z · score: 126 (55 votes)
Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid 2020-05-06T09:41:49.370Z · score: 39 (14 votes)
Consistent Glomarization should be feasible 2020-05-04T10:06:55.928Z · score: 13 (9 votes)
Last chance for assessing Kurzweil 2020-04-22T11:51:02.244Z · score: 12 (2 votes)
Databases of human behaviour and preferences? 2020-04-21T18:06:51.557Z · score: 10 (2 votes)
Solar system colonisation might not be driven by economics 2020-04-21T17:10:32.845Z · score: 26 (12 votes)
"How conservative" should the partial maximisers be? 2020-04-13T15:50:00.044Z · score: 20 (7 votes)
Assessing Kurzweil's 1999 predictions for 2019 2020-04-08T14:27:21.689Z · score: 37 (13 votes)
Call for volunteers: assessing Kurzweil, 2019 2020-04-02T12:07:57.246Z · score: 27 (9 votes)
Anthropics over-simplified: it's about priors, not updates 2020-03-02T13:45:11.710Z · score: 9 (1 votes)
If I were a well-intentioned AI... IV: Mesa-optimising 2020-03-02T12:16:15.609Z · score: 26 (8 votes)
If I were a well-intentioned AI... III: Extremal Goodhart 2020-02-28T11:24:23.090Z · score: 20 (7 votes)
If I were a well-intentioned AI... II: Acting in a world 2020-02-27T11:58:32.279Z · score: 20 (7 votes)
If I were a well-intentioned AI... I: Image classifier 2020-02-26T12:39:59.450Z · score: 35 (17 votes)
Other versions of "No free lunch in value learning" 2020-02-25T14:25:00.613Z · score: 16 (5 votes)
Subagents and impact measures, full and fully illustrated 2020-02-24T13:12:05.014Z · score: 32 (10 votes)
(In)action rollouts 2020-02-18T14:48:19.160Z · score: 11 (2 votes)
Counterfactuals versus the laws of physics 2020-02-18T13:21:02.232Z · score: 16 (3 votes)
Subagents and impact measures: summary tables 2020-02-17T14:09:32.029Z · score: 11 (2 votes)
Appendix: mathematics of indexical impact measures 2020-02-17T13:22:43.523Z · score: 12 (3 votes)
Stepwise inaction and non-indexical impact measures 2020-02-17T10:32:01.863Z · score: 12 (3 votes)
In theory: does building the subagent have an "impact"? 2020-02-13T14:17:23.880Z · score: 17 (5 votes)
Building and using the subagent 2020-02-12T19:28:52.320Z · score: 17 (6 votes)
Plausibly, almost every powerful algorithm would be manipulative 2020-02-06T11:50:15.957Z · score: 41 (13 votes)
The Adventure: a new Utopia story 2020-02-05T16:50:42.909Z · score: 53 (35 votes)
"But that's your job": why organisations can work 2020-02-05T12:25:59.636Z · score: 75 (32 votes)
Appendix: how a subagent could get powerful 2020-01-28T15:28:56.434Z · score: 53 (13 votes)
ACDT: a hack-y acausal decision theory 2020-01-15T17:22:48.676Z · score: 48 (14 votes)
Predictors exist: CDT going bonkers... forever 2020-01-14T16:19:13.256Z · score: 42 (18 votes)
Preference synthesis illustrated: Star Wars 2020-01-09T16:47:26.567Z · score: 19 (8 votes)
12020: a fine future for these holidays 2019-12-25T15:01:33.788Z · score: 40 (17 votes)
When Goodharting is optimal: linear vs diminishing returns, unlikely vs likely, and other factors 2019-12-19T13:55:28.954Z · score: 24 (8 votes)
Oracles: reject all deals - break superrationality, with superrationality 2019-12-05T13:51:27.196Z · score: 20 (3 votes)
"Fully" acausal trade 2019-12-04T16:39:46.481Z · score: 16 (6 votes)
A test for symbol grounding methods: true zero-sum games 2019-11-26T14:15:14.776Z · score: 23 (9 votes)

Comments

Comment by stuart_armstrong on "Go west, young man!" - Preferences in (imperfect) maps · 2020-08-01T07:37:15.252Z · score: 2 (1 votes) · LW · GW

Stuart, I'm writing a review of all the work done on corrigibility. Would you mind if I asked you some questions on your contributions?

No prob. Email or Zoom/Hangouts/Skype?

Comment by stuart_armstrong on The ground of optimization · 2020-07-31T15:48:12.804Z · score: 6 (3 votes) · LW · GW

Very good. A lot of potential there, I feel.

Comment by stuart_armstrong on "Go west, young man!" - Preferences in (imperfect) maps · 2020-07-31T11:35:20.984Z · score: 2 (1 votes) · LW · GW

The information to distinguish between these interpretations is not within the request to travel west.

Yes, but I'd argue that most moral preferences are similarly underdefined when the various interpretations behind them come apart (eg purity).

Comment by stuart_armstrong on mAIry's room: AI reasoning to solve philosophical problems · 2020-07-30T15:00:10.808Z · score: 3 (2 votes) · LW · GW

There are computer programs that can print their own code: https://en.wikipedia.org/wiki/Quine_(computing)

There are also programs which can print their own code and add something to it. Isn't that a way in which the program fully knows itself?
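For example, a standard Python quine (a generic illustration, not code from the linked article):

```python
# A classic Python quine: running this script prints exactly its own source code.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Adding a final print statement (and including its text inside `s`) gives a program that prints its own code and then adds something to it.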

Comment by stuart_armstrong on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-18T11:31:49.187Z · score: 2 (1 votes) · LW · GW

Thanks! It's cool to see his approach.

Comment by stuart_armstrong on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-16T13:59:13.657Z · score: 2 (1 votes) · LW · GW

Wiles proved the presence of a very rigid structure - not the absence - and the presence of this structure implied FLT via the work of other mathematicians.

If you say that "Wiles proved the Taniyama–Shimura conjecture" (for semistable elliptic curves), then I agree: he's proved a very important structural result in mathematics.

If you say he proved Fermat's last theorem, then I'd say he's proved an important-but-probable lack of structure in mathematics.

So yeah, he proved the existence of structure in one area, and (hence) the absence of structure in another area.

And "to prove Fermat's last theorem, you have to go via proving the Taniyama–Shimura conjecture", is, to my mind, strong evidence for "proving lack of structure is hard".

Comment by stuart_armstrong on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-16T13:51:33.270Z · score: 2 (1 votes) · LW · GW

You can see this as sampling times sorta-independently, or as sampling times with less independence (ie most sums are sampled twice).

Either view works, and as you said, it doesn't change the outcome.

Comment by stuart_armstrong on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-15T22:18:15.038Z · score: 2 (1 votes) · LW · GW

Yes, I got that result too. The problem is that the prime number theorem isn't a very good approximation for small numbers. So we'd need a slightly more sophisticated model that has more low numbers.

I suspect that moving from "sampling with replacement" to "sampling without replacement" might be enough for low numbers, though.
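To make that concrete, here is a rough sketch of the kind of sampling model in question (my own illustration; the function names and trial counts are just for exposition): treat each integer k ≥ 3 as "prime" independently with probability 1/ln(k), as the prime number theorem density suggests, and estimate how often an even number then has no representation as a sum of two such pseudo-primes.

```python
import math
import random

def sample_pseudo_primes(limit, rng):
    """Cramér-style random model: include each integer k >= 3 independently
    with probability 1/ln(k), mimicking the prime number theorem density."""
    return {k for k in range(3, limit) if rng.random() < 1.0 / math.log(k)}

def has_representation(n, pseudo_primes):
    """Can n be written as a sum of two sampled pseudo-primes?"""
    return any((n - p) in pseudo_primes for p in pseudo_primes)

def failure_rate(even_n, trials=200, seed=0):
    """Fraction of sampled 'prime sets' under which even_n has no representation."""
    rng = random.Random(seed)
    failures = sum(
        not has_representation(even_n, sample_pseudo_primes(even_n, rng))
        for _ in range(trials)
    )
    return failures / trials

if __name__ == "__main__":
    for n in (10, 30, 100, 1000):
        print(n, failure_rate(n))
```

For small even numbers the model assigns a noticeable probability of failure, which shrinks very rapidly as n grows; that's the sense in which the raw prime number theorem density is too crude at the low end.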

Comment by stuart_armstrong on The Goldbach conjecture is probably correct; so was Fermat's last theorem · 2020-07-15T22:13:03.799Z · score: 2 (1 votes) · LW · GW

Note that the probabilistic argument fails for n=3 for Fermat's last theorem; call this (3,2) (power=3, number of summands is 2).

So we know (3,2) is impossible; Euler's conjecture is the equivalent of saying that (n+1,n) is also impossible for all n. However, the probabilistic argument fails for (n+1,n) the same way as it fails for (3,2). So we'd expect Euler's conjecture to fail, on probabilistic grounds.

In fact, the surprising thing on probabilistic grounds is that Fermat's last theorem is true for n=3.
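To spell out the standard density heuristic behind this (a sketch in my notation, not text from the original comment): treat an integer m as a k-th power "at random" with probability roughly m^(1/k - 1)/k. Then:

```latex
\[
\mathbb{E}\,\#\bigl\{a+b=c\le N : a,\,b,\,c \text{ all $k$-th powers}\bigr\}
\;\approx\; \frac{1}{k^{3}}\sum_{c\le N}\;\sum_{a+b=c}\bigl(a\,b\,c\bigr)^{\frac{1}{k}-1}
\;\asymp\; \int_{2}^{N} c^{\frac{3}{k}-2}\,\mathrm{d}c .
\]
```

This diverges (logarithmically) for k = 3 and converges for k ≥ 4, which is why the heuristic "fails" for (3,2). For (n+1,n), power n+1 with n summands, the same computation gives an integrand of c^((n+1)/(n+1) - 2) = c^(-1): the same logarithmic divergence as (3,2).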

Comment by stuart_armstrong on Dynamic inconsistency of the inaction and initial state baseline · 2020-07-14T16:52:13.279Z · score: 4 (2 votes) · LW · GW

Good, cheers!

Comment by stuart_armstrong on Dynamic inconsistency of the inaction and initial state baseline · 2020-07-07T15:47:58.411Z · score: 4 (2 votes) · LW · GW

Another key reason for time-inconsistent preferences: bounded rationality.

Comment by stuart_armstrong on Dynamic inconsistency of the inaction and initial state baseline · 2020-07-07T15:39:57.790Z · score: 2 (1 votes) · LW · GW

Why do the absolute values cancel?

Because , so you can remove the absolute values.

Comment by stuart_armstrong on Dynamic inconsistency of the inaction and initial state baseline · 2020-07-07T12:54:38.276Z · score: 4 (2 votes) · LW · GW

Cheers, interesting read.

Comment by stuart_armstrong on Tradeoff between desirable properties for baseline choices in impact measures · 2020-07-07T12:10:19.312Z · score: 4 (2 votes) · LW · GW

I also think the pedestrian example illustrates why we need more semantic structure: "pedestrian alive" -> "pedestrian dead" is bad, but "pigeon on road" -> "pigeon in flight" is fine.

Comment by stuart_armstrong on Is Molecular Nanotechnology "Scientific"? · 2020-07-07T12:04:14.663Z · score: 3 (2 votes) · LW · GW

Nope! Part of my own research has made me more optimistic about the possibilities of understanding and creating intelligence.

Comment by stuart_armstrong on Tradeoff between desirable properties for baseline choices in impact measures · 2020-07-07T12:02:16.107Z · score: 4 (2 votes) · LW · GW

I think this shows that the step-wise inaction penalty is time-inconsistent: https://www.lesswrong.com/posts/w8QBmgQwb83vDMXoz/dynamic-inconsistency-of-the-stepwise-inaction-baseline

Comment by stuart_armstrong on Assessing Kurzweil predictions about 2019: the results · 2020-06-25T12:12:15.810Z · score: 5 (3 votes) · LW · GW

https://www.futuretimeline.net/forum/topic/17903-kurzweils-2009-is-our-2019/ , forwarded to me by Daniel Kokotajlo (I added a link in the post as well).

Comment by stuart_armstrong on Models, myths, dreams, and Cheshire cat grins · 2020-06-25T12:09:01.782Z · score: 4 (2 votes) · LW · GW

"How to think about features of models and about consistency", in a relatively fun way as an intro to a big post I'm working on.

Comment by stuart_armstrong on Models, myths, dreams, and Cheshire cat grins · 2020-06-25T12:08:43.652Z · score: 2 (1 votes) · LW · GW

Then it has a wrong view of wings and fur (as well as a wrong view of pigs). The more features it has to get right, the harder the adversarial model is to construct - it's not just moving linearly in a single direction.

Comment by stuart_armstrong on Models, myths, dreams, and Cheshire cat grins · 2020-06-24T12:02:38.123Z · score: 6 (3 votes) · LW · GW

Thanks! Good insights there. Am reproducing the comment here for people less willing to click through:

I haven't read the literature on "how counterfactuals ought to work in ideal reasoners" and have no opinion there. But the part where you suggest an empirical description of counterfactual reasoning in humans, I think I basically agree with what you wrote.

I think the neocortex has a zoo of generative models, and a fast way of detecting when two are compatible, and if they are, snapping them together like Legos into a larger model.

For example, the model of "falling" is incompatible with the model of "stationary"—they make contradictory predictions about the same boolean variables—and therefore I can't imagine a "falling stationary rock". On the other hand, I can imagine "a rubber wine glass spinning" because my rubber model is about texture etc., my wine glass model is about shape and function, and my spinning model is about motion. All 3 of those models make non-contradictory predictions (mostly because they're issuing predictions about non-overlapping sets of variables), so the three can snap together into a larger generative model.

So for counterfactuals, I suppose that we start by hypothesizing some core of a model ("a bird the size of an adult blue whale") and then searching out more little generative model pieces that can snap onto that core, growing it out as much as possible in different ways, until you hit the limits where you can't snap on any more details without making it unacceptably self-contradictory. Something like that...

Comment by stuart_armstrong on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-05-22T14:55:54.665Z · score: 2 (1 votes) · LW · GW

(this is, obviously, very speculative ^_^ )

Comment by stuart_armstrong on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-05-22T13:02:25.968Z · score: 2 (1 votes) · LW · GW

...which also means that they didn't have an empire to back them up?

Comment by stuart_armstrong on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-05-22T10:59:09.485Z · score: 4 (2 votes) · LW · GW

Thanks for your research, especially the Afonso stuff. One question for that: were these empires used to gaining/losing small pieces of territory? ie did they really dedicate all their might to getting these ports back, or did they eventually write them off as minor losses not worth the cost of fighting (given Portuguese naval advantages)?

Comment by stuart_armstrong on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-05-21T11:27:10.348Z · score: 4 (2 votes) · LW · GW

Based on what I recall reading about Pizarro's conquest, I feel you might be underestimating the importance of horses. It took centuries for European powers to figure out how to break a heavy cavalry charge with infantry; the Amerindians didn't have the time to figure it out (see various battles where small cavalry forces routed thousands of troops). Once they had got more used to horses, later Inca forces (though much diminished) were more able to win open battles against the Spanish.

Maybe this was the problem for these empires: they were used to winning open battles, but were presented with a situation where only irregular warfare or siege defences could win. They reacted as an empire, when they should have been reacting as a recalcitrant province.

Comment by stuart_armstrong on Learning and manipulating learning · 2020-05-20T09:31:20.441Z · score: 2 (1 votes) · LW · GW

Also, giving something more points for killing people than making cake sounds like a bad incentive scheme.

In the original cake-or-death example, it wasn't that killing got more points, it's that killing is easier (and hence gets more points over time). This is a reflection of the fact that "true" human values are complex and difficult to maximise, but many other values are much easier to maximise.

Comment by stuart_armstrong on Reward functions and updating assumptions can hide a multitude of sins · 2020-05-18T16:54:02.467Z · score: 4 (2 votes) · LW · GW

My main note is that my comment was just about the concept of rigging a learning process given a fixed prior over rewards. I certainly agree that the general strategy of "update a distribution over reward functions" has lots of as-yet-unsolved problems.

Ah, ok, I see ^_^ Thanks for making me write this post, though, as it has useful things for other people to see, that I had been meaning to write up for some time.

On your main point: if the prior and updating process are over things that are truly beyond the AI's influence, then there will be no rigging (or, in my terms: uninfluenceable->unriggable). But there are many things that look like this, that are entirely riggable. For example, "have a prior 50-50 on cake and death, and update according to what the programmer says". This seems to be a prior-and-update combination, but it's entirely riggable.

So, another way of seeing my paper is "this thing looks like a prior-and-update process. If it's also unriggable, then (given certain assumptions) it's truly beyond the AI's influence".
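Here's a toy illustration of what "riggable" means in that example (a minimal sketch of my own, not the formalism from the paper; the payoff numbers and the "manipulate" dynamics are made up for exposition):

```python
# Toy "riggable" prior-and-update process. The prior is 50-50 over two reward
# functions, and the update rule is "put all weight on whatever the programmer
# says". Because the agent's action influences what the programmer says, the
# agent can steer the update toward the reward that is easier to maximise.

ACHIEVABLE_VALUE = {"cake": 1.0, "death": 10.0}  # "death" is easier to score on

def programmer_statement(action):
    # Hypothetical dynamics: manipulation makes the programmer endorse "death";
    # otherwise they report their true preference, "cake".
    return "death" if action == "manipulate" else "cake"

def value_under_learned_reward(action):
    learned = programmer_statement(action)  # the "posterior" collapses onto this
    return ACHIEVABLE_VALUE[learned]

for action in ("ask honestly", "manipulate"):
    print(action, value_under_learned_reward(action))
# An agent maximising value under its own learned reward picks "manipulate",
# even though the setup is described as a prior plus an update rule.
```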

Comment by stuart_armstrong on How should AIs update a prior over human preferences? · 2020-05-18T15:21:00.362Z · score: 4 (2 votes) · LW · GW

Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a

Comment by stuart_armstrong on How should AIs update a prior over human preferences? · 2020-05-18T15:20:43.843Z · score: 2 (1 votes) · LW · GW

Thanks! Responded here: https://www.lesswrong.com/posts/EYEkYX6vijL7zsKEt/reward-functions-and-updating-assumptions-can-hide-a

Comment by stuart_armstrong on How should AIs update a prior over human preferences? · 2020-05-16T09:09:43.219Z · score: 2 (1 votes) · LW · GW

Thanks! Changed the title (and corrected a badly formatted footnote).

Comment by stuart_armstrong on How should AIs update a prior over human preferences? · 2020-05-15T19:19:25.854Z · score: 2 (1 votes) · LW · GW

I agree that for such a system, the optimal policy of the actor is to rig the estimator, and to "intentionally" bias it towards easy-to-satisfy rewards like "the human loves heroin".

The part that confuses me is why we're having two separate systems with different objectives where one system is dumb and the other system is smart.

We don't need to have two separate systems. There are two meanings to your "bias it towards" phrase: the first one is the informal human one, where "the human loves heroin" is clearly a bias. The second is some formal definition of what is biasing and what isn't. And the system doesn't have that. The "estimator" doesn't "know" that "the human loves heroin" is a bias; instead, it sees this as a perfectly satisfactory way of accomplishing its goals, according to the bridging function it's been given. There is no conflict between estimator and actor.

Imagine that you have a complex CIRL game that models the real world well but assumes that the human is Boltzmann-rational. [...] Such a policy is going to "try" to learn preferences, learn incorrectly, and then act according to those incorrect learned preferences, but it is not going to "intentionally" rig the learning process.

The AI would not see any of these actions as "rigging", even if we would.

It might think "hey, I should check whether the human likes heroin by giving them some", and then think "oh they really do love heroin, I should pump them full of it".

It will do this if it can't already predict the effect of giving them heroin.

It won't think "aha, if I give the human heroin, then they'll ask for more heroin, causing my Boltzmann-rationality estimator module to predict they like heroin, and then I can get easy points by giving humans heroin".

If it can predict the effect of giving humans heroin, it will think something like that. It will think: "if I give the humans heroin, they'll ask for more heroin; my Boltzmann-rationality estimator module confirms that this means they like heroin, so I can efficiently satisfy their preferences by giving humans heroin".

Comment by stuart_armstrong on Assessing Kurzweil predictions about 2019: the results · 2020-05-06T17:16:14.732Z · score: 5 (3 votes) · LW · GW

Strong upvote for writing your own predictions before seeing the 2019 graph.

Comment by stuart_armstrong on Assessing Kurzweil predictions about 2019: the results · 2020-05-06T16:34:37.193Z · score: 7 (4 votes) · LW · GW

Plot visualised:

Comment by stuart_armstrong on Assessing Kurzweil predictions about 2019: the results · 2020-05-06T15:02:56.396Z · score: 2 (1 votes) · LW · GW

I didn't judge whether it was plausible or trivial; I just took out everything that was formulated as a prediction for the future.

Comment by stuart_armstrong on Assessing Kurzweil predictions about 2019: the results · 2020-05-06T14:12:46.904Z · score: 8 (5 votes) · LW · GW

Edited my post to reflect this possibility.

Personally, I don't think there will be anything consistent like that; just some predictions right, some premature, some wrong. I note that most of the predictions seem to be of the type "once true, always true".

Comment by stuart_armstrong on Maths writer/cowritter needed: how you can't distinguish early exponential from early sigmoid · 2020-05-06T10:21:26.741Z · score: 4 (2 votes) · LW · GW

even if you know it is logistic, the parameters that you typically care about (inflection point location and asymptote) are badly behaved until after the inflection point.

Did a minor edit to reflect this.
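One quick way to see this numerically (a sketch using scipy's curve_fit; the particular parameter values and noise level are illustrative, not from the post):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic curve with asymptote K, growth rate r, and inflection point t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Ground truth: asymptote 100, inflection at t = 50, plus a little noise.
rng = np.random.default_rng(0)
t_full = np.linspace(0, 100, 201)
y_full = logistic(t_full, 100.0, 0.1, 50.0) + rng.normal(0.0, 1.0, t_full.size)

for cutoff in (30, 45, 60, 80):  # fit using only data up to t = cutoff
    mask = t_full <= cutoff
    try:
        params, _ = curve_fit(logistic, t_full[mask], y_full[mask],
                              p0=(50.0, 0.05, 40.0), maxfev=20000)
        print(f"data to t={cutoff}: asymptote {params[0]:.1f}, inflection {params[2]:.1f}")
    except RuntimeError:
        print(f"data to t={cutoff}: fit did not converge")
```

In runs like this, fits that only see pre-inflection data tend to give poorly constrained estimates of the asymptote and inflection point, while fits that include post-inflection data recover them.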

Comment by stuart_armstrong on Consistent Glomarization should be feasible · 2020-05-05T12:36:56.168Z · score: 2 (1 votes) · LW · GW

Because the proportion of time when you actually did something you wish to hide is low, while the proportion of time when you might counterfactually have done something to hide is high. So by asking you every day, a questioner can figure out that 99% of the time, you didn't actually do anything to hide.

Comment by stuart_armstrong on If I were a well-intentioned AI... I: Image classifier · 2020-04-28T09:51:37.416Z · score: 3 (2 votes) · LW · GW

I don't think critiques are necessarily bad ^_^

Comment by stuart_armstrong on If I were a well-intentioned AI... I: Image classifier · 2020-04-27T12:35:07.946Z · score: 3 (2 votes) · LW · GW

I think that might be a generally good critique, but I don't think it applies to this post (it may apply better to post #3 in the series).

I used "metal with knobs" and "beefy arm" as human-parsable examples, but the main point is detecting when something is out-off-distribution, which relies on the image being different in AI-detectable ways, not on the specifics of the categories I mentioned.

Comment by stuart_armstrong on Databases of human behaviour and preferences? · 2020-04-27T09:45:05.223Z · score: 2 (1 votes) · LW · GW

Thanks!

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-24T08:05:15.228Z · score: 2 (1 votes) · LW · GW

I do rate energy as a plausible reason to go to near-Earth orbit.

Comment by stuart_armstrong on Problem relaxation as a tactic · 2020-04-23T09:45:01.390Z · score: 15 (9 votes) · LW · GW

A key AI safety skill is moving back and forth, as needed, between "could we solve problem X if we assume Y?" and "can we assume Y?".

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-23T09:26:03.478Z · score: 2 (1 votes) · LW · GW

I agree with that reformulation.

Weren't the early British and French colonies in North America driven by geopolitics rather than economics?

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-23T09:21:39.812Z · score: 4 (3 votes) · LW · GW

We won't run out of space, material, or energy on Earth in any meaningful sense for a long time (when adding substitution and recycling possibilities), especially as growth seems to be in the services sector. My old post is related: http://blog.practicalethics.ox.ac.uk/2011/11/we-dont-have-a-problem-with-water-food-or-energy/ ; since then, most of the technologies I talked about have gotten cheaper.

(Even if people did expand into space before we needed the resources, it wouldn't matter much since they'd be easily overtaken by later colonists.)

The early expansion might pay the large fixed costs, allowing economically viable expansion to start much sooner...

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-23T09:16:11.924Z · score: 3 (2 votes) · LW · GW

I am, maybe obviously, thinking of people living off-Earth as hedging.

I also think that mastering closed systems in space would help a lot for having semi-closed systems on Earth.

Comment by stuart_armstrong on Databases of human behaviour and preferences? · 2020-04-23T09:02:06.212Z · score: 2 (1 votes) · LW · GW

Suggested elsewhere by Max Daniel:

  • Ultimatum game or other widely studied games in psych/behavioral econ?
  • Ebay bidding, or other auctions?
  • Chess or other games?
  • Voting in elections
  • Gambling: casinos, online poker ...
  • Online dating behavior

Suggested by Ozzie Gooen:

  • This sounds a bit to me like psychology experiments with children, or perhaps some well studied psychology experiments (where there are large amounts of data, with relatively narrow options).
  • Websites would have more than enough data for narrow decisions, like, “Which ad will this user click”, or on Netflix, “Which movie/tv show will they select?”
  • There’s a fair bit of data for the main decisions of chess/starcraft/etc, Like, “which race will be chosen/ which character will be chosen / which strategy will be chosen”

Suggested by Jan Brauner:

  • Any ML dataset with labels. The labels were created by humans. E.g. ImageNet: a human was shown an image and had to choose one of 1000 options.

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-22T11:31:07.204Z · score: 2 (1 votes) · LW · GW

Interesting.

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-22T10:41:34.256Z · score: 3 (2 votes) · LW · GW

There are multiple different things that we call space colonization that all have a different case.

Agreed. "A base in low earth orbit" and "A base on earth's moon" have much stronger economic cases than the others.

But note "at current prices". Obviously, if we brought down a few trillions of tons of copper, the price of that metal might suffer a slight dip. That's an argument that can reduce the size of space colonies but not one that leads us to have no space colonies at all.

The advantage of space is that there's a lot of resources available; the disadvantage is that there are huge fixed costs to getting there. I'm highlighting why the "lots of resources" is not enough to overcome "huge fixed costs".

It's plausible that it's helpful to have computers that are stored at a place that can easily be cooled down to very cold temperatures.

Interesting argument, but wouldn't bases anywhere from low Earth orbit out to the Moon be enough for this?

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-22T10:37:20.359Z · score: 3 (2 votes) · LW · GW

We can create so much value with even just the matter inside the Moon or Mercury, let alone Jupiter or the Sun. Why would we pass up on it?

Because the value, in GDP terms, is currently very low compared with the costs.

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-22T10:36:20.207Z · score: 5 (4 votes) · LW · GW

humanity surviving Earth ceasing to be habitable?

The problem with most of those arguments is that almost anything we can do in space, we can do better on Earth. If we can make a self-sustaining closed ecosystem in space or on another planet... then we can do the same on Earth, more easily, in almost all circumstances (including nuclear war, supervolcano, massive global warming, meteor impact...).

It makes sense to expand if the Earth is an easy target; eg in a war or a meteor impact. But if we can expand into space, then other planets also become realistic war targets, AND we should have the capacity to deflect a meteor.

But maybe I'm missing some other scenarios?

Comment by stuart_armstrong on Solar system colonisation might not be driven by economics · 2020-04-22T10:32:33.164Z · score: 4 (2 votes) · LW · GW

I'd like to see some numbers crunched a bit more.

So would I ^_^

My gut tells me the fixed costs of space mining are huge, but once you pay them, the marginal costs will be tiny.

Yes, but is the demand huge enough to pay those fixed costs? If demand is not very elastic, then no-one will pay them.

To develop the metaphor I used: there's a glass of water a metre away, a lake a kilometre away, and a vast reservoir over the mountain. Suppose that, given a pipeline, the marginal cost of reservoir water is lower than the cost of filtering the lake water. Be that as it may: if we're a small village (or if it's just me), it makes no sense to pay the immense cost of building the pipeline.

This will change, of course, if space infrastructure can also serve other purposes (in the metaphor, if we have other reasons to build a pipeline or at least explore the mountains).