## Comments

**ClimateDoc (OxDoc)** on How to (hopefully ethically) make money off of AGI · 2023-11-21T18:50:17.901Z · LW · GW

Why's that? They seem to be going for AGI, can afford to invest billions if Zuckerberg chooses, their effort is led by one of the top AI researchers and they have produced some systems that seem impressive (at least to me). If you wanted to cover your bases, wouldn't it make sense to include them? Though 3-5% may be a bit much (but I also think it's a bit much for the listed companies besides MS and Google). Or can a strong argument be made for why, if AGI were attained in the near term, they wouldn't be the ones to profit from it?

**ClimateDoc (OxDoc)** on How to (hopefully ethically) make money off of AGI · 2023-11-20T14:19:07.347Z · LW · GW

- Invest like 3-5% of my portfolio into each of Nvidia, TSMC, Microsoft, Google, ASML and Amazon

Should Meta be in the list? Are the big Chinese tech companies considered out of the race?

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-15T18:13:29.378Z · LW · GW

Do you mean you'd be adding the probability distribution with that covariance matrix on top of the mean prediction from f, to make it a probabilistic prediction? I was talking about deterministic predictions before, though my text doesn't make that clear. For probabilistic models, yes, adding an uncertainty distribution may result in non-zero likelihoods. But if we know the true dynamics are deterministic (pretend there are no quantum effects, which are largely irrelevant to our prediction errors for systems in the classical physics domain), then we still know the model is not true, and so it seems difficult to interpret p if we were to do Bayesian updating.
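To make the distinction concrete, here's a minimal sketch (my own toy example, with made-up numbers) of the difference: a deterministic prediction assigns zero likelihood to any observation it doesn't hit exactly, while wrapping the same prediction in an assumed Gaussian error covariance gives a non-zero likelihood:

```python
import numpy as np

# Hypothetical deterministic model prediction and an observation it misses slightly
prediction = np.array([1.0, 2.0])
observation = np.array([1.3, 1.8])

# Deterministic model: the observation either matches exactly or it doesn't,
# so the likelihood of real (mismatching) data is zero
deterministic_likelihood = 1.0 if np.array_equal(prediction, observation) else 0.0

# Same prediction with an assumed Gaussian error covariance added on top
cov = np.diag([0.25, 0.25])
resid = observation - prediction
quad = resid @ np.linalg.inv(cov) @ resid
norm = 1.0 / np.sqrt((2 * np.pi) ** 2 * np.linalg.det(cov))
probabilistic_likelihood = norm * np.exp(-0.5 * quad)

print(deterministic_likelihood)   # 0.0
print(probabilistic_likelihood)   # ≈ 0.49: small mismatch, non-zero likelihood
```

The covariance here is pure assumption; the interpretive question in the text is exactly what that added distribution means when we know the underlying dynamics are deterministic.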

Likelihoods are also not obviously (to me) very good measures of model quality for chaotic systems either - in these cases we know that even if we had the true model, its predictions would diverge from reality due to errors in the initial condition estimates, but it would trace out the correct attractor - and it's the attractor geometry (conditional on boundary conditions) that we'd really want to assess. Perhaps the true model would still have a higher likelihood than every other model, but that's not obvious to me, and it's not obvious that there isn't a better metric for leading to good inferences when we don't have the true model.
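A toy illustration of this point (my own sketch, not from the discussion): two Lorenz-'63 trajectories started from almost identical initial conditions soon diverge pointwise, so any pointwise comparison would judge one a terrible model of the other, yet their long-run statistics - the attractor geometry - agree closely:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz '63 system."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def trajectory(init, n_steps=50_000):
    states = np.empty((n_steps, 3))
    s = np.array(init, dtype=float)
    for i in range(n_steps):
        s = lorenz_step(s)
        states[i] = s
    return states

a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0, 1.0, 1.0 + 1e-6])  # tiny initial-condition error

pointwise_error = np.abs(a[:, 2] - b[:, 2]).max()     # large: trajectories decorrelate
climate_error = abs(a[:, 2].mean() - b[:, 2].mean())  # small: same attractor statistics
print(pointwise_error, climate_error)
```

Judged pointwise, trajectory `a` is a hopeless "model" of `b`; judged by attractor statistics, it's nearly perfect - which is the sense in which a likelihood on raw trajectories seems to miss what we care about for chaotic systems.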

Basically the logic that says to use Bayes for deducing the truth does not seem to carry over in an obvious way (to me) to the case when we want to predict but can't use the true model.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-14T20:56:48.522Z · LW · GW

Yes I'd selected that because I thought it might get it to work. And now I've unselected it, it seems to be working. It's possible this was a glitch somewhere or me just being dumb before I guess.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-14T20:50:11.592Z · LW · GW

> I wonder whether the models are so coarse that the cyclones that do emerge are in a sense the minimum size.

It's not my area, but I don't think that's the case. My impression is that part of what drives very high wind speeds in the strongest hurricanes is convection on the scale of a few km in the eyewall, so models with that sort of spatial resolution can generate realistically strong systems, but that's ~20x finer than typical climate model resolutions at the moment, so it will be a while before we can simulate those systems routinely (though, some argue we could do it if we had a computer costing a few billion dollars).

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-14T20:44:46.152Z · LW · GW

> do you know what discretization methods are typically used for the fluid dynamics?

There's a mixture - finite differencing used to be used a lot but seems to be less common now, semi-Lagrangian advection seems to have taken over in models that used it, and some models work by doing most of the computations in spectral space and neglecting the smallest spatial scales. Recently newer methods have been developed to work better on massively parallel computers. It's not my area, though, so I can't give a very expert answer - but I'm pretty sure the people working on it think hard about trying not to smooth out intense structures (though that has to be balanced against maintaining numerical stability).

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-14T20:38:37.903Z · LW · GW

I'm using Chrome 80.0.3987.163 in Mac OSX 10.14.6. But I also tried it in Firefox and didn't get formatting options. But maybe I'm just doing the wrong thing...

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T20:36:47.708Z · LW · GW

Thanks, yes this is very relevant to thinking about climate modelling, with the dominant paradigm being that we can separately model phenomena above and below the resolved scale - there's an ongoing debate, though, about whether a different approach would work better, and it gets tricky when the resolved scale gets close to the size of important types of weather system.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T20:29:09.796Z · LW · GW

> climate models are already "low-level physics" except that "low-level" means coarse aggregates of climate/weather measurements that are so big that they don't include tropical cyclones!

Just as an aside, a typical modern climate model will simulate tropical cyclones as emergent phenomena from the coarse-scale fluid dynamics, albeit not enough of the most intense ones. Though, much smaller tropical thunderstorm-like systems are much more crudely represented.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T20:25:58.830Z · LW · GW

Thanks again.

I think I need to think more about the likelihood issue. I still feel like we might be thinking about different things - when you say "a deterministic model which uses fundamental physics", this would not be in the set of models that we could afford to run to make predictions for complex systems. For the models we could afford to run, it seems to me that no choice of initial conditions would lead them to match the data we observe, except by extreme coincidence (analogous to a simple polynomial just happening to pass through all the datapoints produced by a much more complex function).

I've gone through Jaynes' paper now from the link you gave. His point about deciding what macroscopic variables matter is well-made. But you still need a model of how the macroscopic variables you observe relate to the ones you want to predict. In modelling atmospheric processes, simple spatial averaging of the fluid dynamics equations over resolved spatial scales gets you some way, but then changing the form of the function relating the future to present states ("adding representations of processes" as I put it before) adds additional skill. And Jaynes' paper doesn't seem to say how you should choose this function.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T20:04:38.413Z · LW · GW

Thanks again. OK I'll try using MarkDown...

> I think 'algorithm' is an imprecise term for this discussion.

Perhaps I used the term imprecisely - I basically meant it in a very general sense of being some process, set of rules etc. that a computer or other agent could follow to achieve the goal.

> We need good decision theories to know when to search for more or better bottom-up models. What are we missing? How should we search? (When should we give up?)

> The name for 'algorithms' (in the expansive sense) that can do what you're asking is 'general intelligence'. But we're still working on understanding them!

Yes, I see the relevance of decision theories there, and that solving this well would require a lot of what would be needed for AGI. I guess when I originally asked, I was wondering if there might be some insights people had worked out on the way to that - just any parts of such an algorithm that people have figured out, or that would at least reduce the error of a typical scientist. But maybe that will be a while yet...

I think you're right that such an algorithm would need to make measurements of the real system, or systems with properties matching component parts (e.g. a tank of air for climate), and have some way to identify the best measurements to make. I guess determining whether there is some important effect that's not been accounted for yet would require a certain amount of random experimentation to be done (e.g. for climate, heating up patches of land and tanks of ocean water by a few degrees and seeing what happens to the ecology, just as we might do).

This is not necessarily impractical for something like atmospheric or oceanic modelling, where we can run trustworthy high-resolution models over small spatial regions and get data on how things change with different boundary conditions, so we can tell how the coarse models should behave. So then criteria for deciding where and when to run these simulations would be needed. Regions where errors compared to Earth observations are large and regions that exhibit relatively large changes with global warming could be a high priority. I'd have to think if there could be a sensible systematic way of doing it - I guess it would require an estimate of how much the metric of future prediction skill would decrease with information gained from a particular experiment, which could perhaps be approximated using the sensitivity of the future prediction to the estimated error or uncertainty in predictions of a particular variable. I'd need to think about that more.

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T10:13:54.544Z · LW · GW

Thanks for your reply. (I repeat my apology from below for apparently not being able to use formatting options in my browser.)

> I think it's an open question whether we can generally model complex systems at all – at least in the sense of being able to make precise predictions about the detailed state of entire complex systems.

I agree modelling the detailed state is perhaps not possible. However, there are at least some complex systems we can model with substantial positive skill at predicting particular variables, without needing to model all the details - e.g. the weather, for particular variables up to a particular amount of time ahead, and predictions of global mean warming made from the 1980s seem to have validated quite well so far (for decadal averages). So human minds seem to succeed at least sometimes, but without seeming to follow a particular algorithm. Presumably it's possible to do better, so my question is essentially: what would an algorithm that could do better look like?

I agree that statistical mechanics is one useful set of methods. But, thinking of the area of climate model development that I know something about, statistical averaging of fluid mechanics does form the backbone to modelling the atmosphere and oceans, but adding representations of processes that are missed by that has added a lot of value (e.g. tropical thunderstorms that are well below the spacing of the numerical grid over which the fluid mechanics equations were averaged). So there seems to be something additional to averaging that can be used, to do with coming up with simplified models of processes you can see are missed out by the averaging. It would be nice to have an algorithm for that, but that's probably asking for a lot...
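As a toy illustration of why plain averaging misses something (my own example, not a real climate parameterization): the average of a nonlinear term is not the nonlinear term of the average, and the gap is exactly what a sub-grid closure has to supply. For a quadratic term the missing piece is the sub-grid variance:

```python
import numpy as np

rng = np.random.default_rng(42)

# Fine-grid field: 50 coarse cells of 100 points each, with sub-grid variability
n_cells, pts_per_cell = 50, 100
u_fine = rng.normal(10.0, 2.0, size=(n_cells, pts_per_cell))

# True coarse-grained flux: average the nonlinear term over each cell
true_flux = (u_fine ** 2).mean(axis=1)

# Naive coarse model: apply the nonlinear term to the averaged field
u_coarse = u_fine.mean(axis=1)
naive_flux = u_coarse ** 2

# "Parameterized" coarse model: add a closure for the sub-grid variance,
# using the identity mean(u^2) = mean(u)^2 + var(u)
closure_flux = naive_flux + u_fine.var(axis=1)

print(np.abs(true_flux - naive_flux).max())    # systematic error of order var(u)
print(np.abs(true_flux - closure_flux).max())  # near zero: closure is exact here
```

The catch, of course, is that in a real model the sub-grid variance isn't known and must itself be modelled from the resolved state - which is the "something additional to averaging" the paragraph above is pointing at.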

> I interpret "approximately model complex systems" as 'top-down' 'statistical' modeling

I didn't mean this to be top-down rather than bottom-up - it could follow whatever modelling strategy is determined to be optimal.

> answering this question demands a complete epistemology and decision theory!

That's what I was worried about... (though, is decision theory relevant when we just want to predict a given system and maximise a pre-specified skill metric?)

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T09:35:45.007Z · LW · GW

Thanks for your detailed reply. (And sorry I couldn't format the below well - I don't seem to get any formatting options in my browser.)

> It is rarely too difficult to specify the true model...this means that "every member of the set of models available to us is false" need not hold

I agree we could find a true model to explain the economy, climate etc. (presumably the theory of everything in physics). But we don't have the computational power to make predictions of such systems with that model - so my question is about how should we make predictions when the true model is not practically applicable? By "the set of models available to us", I meant the models we could actually afford to make predictions with. If the true model is not in that set, then it seems to be that all of these models must be false.

> "different processes may become important in future" is not actually a problem for Ockham's razor per se. That's a problem for causal models

To take the climate example, say scientists had figured out that there were a biological feedback that kicks in once global warming has gone past 2C (e.g. bacteria become more efficient at decomposing soil and releasing CO2). Suppose you have one model that includes a representation of that feedback (e.g. as a subprocess) and one that does not but is equivalent in every other way (e.g. is coded like the first model but lacks the subprocess). Then isn't the second model simpler according to metrics like the minimum description length, so that it would be weighted higher if we penalised models using such metrics? But this seems the wrong thing to do, if we think the first model is more likely to give a good prediction.

Now the thought that occurred to me when writing that is that the data the scientists used to deduce the existence of the feedback ought to be accounted for by the models that are used, and this would give low posterior weight to models that don't include the feedback. But doing this in practice seems hard. Also, it's not clear to me if there would be a way to tell between models that represent the process but don't connect it properly to predicting the climate e.g. they have a subprocess that says more CO2 is produced by bacteria at warming higher than 2C, but then don't actually add this CO2 to the atmosphere, or something.
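This tension can be illustrated with a toy calculation (my own sketch, using BIC as a stand-in for a description-length penalty, with entirely made-up numbers): fit a linear response model and a model with an extra "feedback" hinge term at 2C to simulated data. With observations only from below the threshold, the penalty favours the simpler model; once observations cross the threshold, the feedback model wins despite its extra parameter - i.e. the data that revealed the feedback is what rescues the more complex model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(warming):
    # Toy "truth": linear response plus a feedback that kicks in above 2 C
    response = 1.5 * warming + np.where(warming > 2.0, 3.0 * (warming - 2.0), 0.0)
    return response + rng.normal(0.0, 0.1, size=warming.shape)

def fit_rss(x, y, with_feedback):
    # Least-squares fit; the feedback model adds one hinge term at 2 C
    cols = [x, np.ones_like(x)]
    if with_feedback:
        cols.insert(1, np.maximum(x - 2.0, 0.0))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ coef) ** 2), A.shape[1]

def bic(rss, k, n):
    # Gaussian-error BIC as a rough MDL stand-in; lower is better
    return n * np.log(rss / n) + k * np.log(n)

x_low = np.linspace(0.0, 1.8, 50)   # observations from below the threshold only
x_full = np.linspace(0.0, 4.0, 50)  # observations spanning the threshold

for x in (x_low, x_full):
    y = simulate(x)
    n = len(x)
    simple = bic(*fit_rss(x, y, with_feedback=False), n)
    feedback = bic(*fit_rss(x, y, with_feedback=True), n)
    print(simple < feedback)  # True below threshold, False once it's crossed
```

This doesn't resolve the harder problem raised above - a model containing a disconnected subprocess (the hinge term present but not wired into the prediction) would score like the simple model here, and nothing in the penalty distinguishes "absent" from "present but causally disconnected".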

> likelihoods are never actually zero, they're just very small

If our models were deterministic, then if they were not true, wouldn't it be impossible for them to produce the observed data exactly, so that the likelihood of the data given any of those models would be zero? (Unless there was more than one process that could give rise to the same data, which seems unlikely in practice.) Now if we make the models probabilistic and try to design them such that there is a non-zero chance that the data would be a possible sample from the model, then the likelihood can be non-zero. But it doesn't seem necessary to do this - models that are false can still give predictions that are useful for decision-making. Also, it's not clear if we could make a probabilistic model that would have non-zero likelihoods for something as complex as the climate that we could run on our available computers (and that isn't something obviously of low value for prediction like just giving probability 1/N to each of N days of observed data). So it still seems like it would be valuable to have a principled way of predicting using models that give a zero likelihood of the data.

> the central challenge is to find rigorous approximations of the true underlying models. The main field I know of which studies this sort of problem directly is statistical mechanics, and a number of reasonably-general-purpose tools exist in that field which could potentially be applied in other areas (e.g. this).

Yes I agree. Thanks for the link - it looks very relevant and I'll check it out. Edit - I'll just add, echoing part of my reply to Kenny's answer, that whilst statistical averaging has got human modellers a certain distance, adding representations of processes whose effects get missed by the averaging seems to add a lot of value (e.g. tropical thunderstorms in the case of climate). So there seems to be something additional to averaging that can be used, to do with coming up with simplified models of processes you can see are missed out by the averaging.

On causality, whilst of course correcting this is desirable, if the models we can afford to compute with can't reproduce the data, then presumably they are also not reproducing the correct causal graph exactly? And any causal graph we could compute with will not be able to reproduce the data? (Else it would seem that a causal graph could somehow hugely compress the true equations without information loss - great if so!)

**ClimateDoc (OxDoc)** on How should we model complex systems? · 2020-04-13T08:40:27.584Z · LW · GW

OK, I made some edits. I left the "rational" in the last paragraph because it seemed to me to be the best word to use there.