Posts

What does GPT-3 understand? Symbol grounding and Chinese rooms 2021-08-03T13:14:42.106Z
Reward splintering for AI design 2021-07-21T16:13:17.917Z
Bayesianism versus conservatism versus Goodhart 2021-07-16T23:39:18.059Z
Underlying model of an imperfect morphism 2021-07-16T13:13:10.483Z
Anthropic decision theory for self-locating beliefs 2021-07-12T14:11:40.715Z
Generalised models: imperfect morphisms and informational entropy 2021-07-09T17:35:21.039Z
Practical anthropics summary 2021-07-08T15:10:44.805Z
Anthropics and Fermi: grabby, visible, zoo-keeping, and early aliens 2021-07-08T15:07:30.891Z
The SIA population update can be surprisingly small 2021-07-08T10:45:02.803Z
Anthropics in infinite universes 2021-07-08T06:56:05.666Z
Non-poisonous cake: anthropic updates are normal 2021-06-18T14:51:43.143Z
The reverse Goodhart problem 2021-06-08T15:48:03.041Z
Dangerous optimisation includes variance minimisation 2021-06-08T11:34:04.621Z
The underlying model of a morphism 2021-06-04T22:29:49.635Z
SIA is basically just Bayesian updating on existence 2021-06-04T13:17:20.590Z
The blue-minimising robot and model splintering 2021-05-28T15:09:54.516Z
Human priors, features and models, languages, and Solomonoff induction 2021-05-10T10:55:12.078Z
Anthropics: different probabilities, different questions 2021-05-06T13:14:06.827Z
Consistencies as (meta-)preferences 2021-05-03T15:10:50.841Z
Why unriggable *almost* implies uninfluenceable 2021-04-09T17:07:07.016Z
A possible preference algorithm 2021-04-08T18:25:25.855Z
If you don't design for extrapolation, you'll extrapolate poorly - possibly fatally 2021-04-08T18:10:52.420Z
Which counterfactuals should an AI follow? 2021-04-07T16:47:42.505Z
Toy model of preference, bias, and extra information 2021-03-24T10:14:34.629Z
Preferences and biases, the information argument 2021-03-23T12:44:46.965Z
Why sigmoids are so hard to predict 2021-03-18T18:21:51.203Z
Connecting the good regulator theorem with semantics and symbol grounding 2021-03-04T14:35:40.214Z
Cartesian frames as generalised models 2021-02-16T16:09:20.496Z
Generalised models as a category 2021-02-16T16:08:27.774Z
Counterfactual control incentives 2021-01-21T16:54:59.309Z
Short summary of mAIry's room 2021-01-18T18:11:36.035Z
Syntax, semantics, and symbol grounding, simplified 2020-11-23T16:12:11.678Z
The ethics of AI for the Routledge Encyclopedia of Philosophy 2020-11-18T17:55:49.952Z
Extortion beats brinksmanship, but the audience matters 2020-11-16T21:13:18.822Z
Humans are stunningly rational and stunningly irrational 2020-10-23T14:13:59.956Z
Knowledge, manipulation, and free will 2020-10-13T17:47:12.547Z
Dehumanisation *errors* 2020-09-23T09:51:53.091Z
Anthropomorphisation vs value learning: type 1 vs type 2 errors 2020-09-22T10:46:48.807Z
Technical model refinement formalism 2020-08-27T11:54:22.534Z
Model splintering: moving from one imperfect model to another 2020-08-27T11:53:58.784Z
Learning human preferences: black-box, white-box, and structured white-box access 2020-08-24T11:42:34.734Z
AI safety as featherless bipeds *with broad flat nails* 2020-08-19T10:22:14.987Z
Learning human preferences: optimistic and pessimistic scenarios 2020-08-18T13:05:23.697Z
Strong implication of preference uncertainty 2020-08-12T19:02:50.115Z
"Go west, young man!" - Preferences in (imperfect) maps 2020-07-31T07:50:59.520Z
Learning Values in Practice 2020-07-20T18:38:50.438Z
The Goldbach conjecture is probably correct; so was Fermat's last theorem 2020-07-14T19:30:14.806Z
Why is the impact penalty time-inconsistent? 2020-07-09T17:26:06.893Z
Dynamic inconsistency of the inaction and initial state baseline 2020-07-07T12:02:29.338Z
Models, myths, dreams, and Cheshire cat grins 2020-06-24T10:50:57.683Z

Comments

Comment by Stuart_Armstrong on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-04T19:51:51.635Z · LW · GW

The multiplication example is good, and I should have thought about it and worked it into the post.

Comment by Stuart_Armstrong on What does GPT-3 understand? Symbol grounding and Chinese rooms · 2021-08-04T07:21:27.479Z · LW · GW

I have only very limited access to GPT-3; it would be interesting if others played around with my instructions, making them easier for humans to follow, while still checking that GPT-3 failed.

Comment by Stuart_Armstrong on Stuart_Armstrong's Shortform · 2021-07-21T10:44:00.569Z · LW · GW

Here are a few examples of model splintering in the past:

  1. The concept of honour, which includes concepts such as "nobility of soul, magnanimity, and a scorn of meanness [...] personal integrity [...] reputation [...] fame [...] privileges of rank or birth [...] respect [...] consequence of power [...] chastity". That is a grab-bag of different concepts, but at various times and in various social situations, "honour" was seen as a single, clear concept.
  2. Gender. We're now in a period where people are questioning and redefining gender, but gender has been splintering for a long time. In middle class Victorian England, gender would define so much about a person (dress style, acceptable public attitudes, genitals, right to vote, right to own property if married, whether they would work or not, etc...). In other times (and in other classes of society, and other locations), gender is far less informative.
  3. Consider a Croat, communist, Yugoslav nationalist in the 1980s. They would be clear in their identity, which would be just one thing. Then the 1990s come along, and all these aspects come into conflict with each other.

Here are a few that might happen in the future; the first two could result from technological change, while the last could come from social change:

  1. A human subspecies is created that wants to be left alone, without interactions with others, but that is lonely and unhappy when solitary. This splinters preferences and happiness (more than they are split today), and changes the standard assumptions about personal freedom and
  2. A brain, or parts of a human brain, that loop forever through feelings of "I am happy" and "I want this moment to repeat forever". This splinters happiness-and-preferences from identity.
  3. We have various ages of consent and responsibility; but, by age 21, most people are taken to be free to make decisions, are held responsible for their actions, and are seen to have a certain level of understanding about their world. With personalised education, varying subcultures, and more precise psychological measurements, we might end up in a world where "maturity" splinters into lots of pieces, with people having different levels of autonomy, responsibility, and freedom in different domains - and these might not be particularly connected with their age.

Comment by Stuart_Armstrong on The topic is not the content · 2021-07-20T08:06:59.141Z · LW · GW

A very good point.

I'd add the caveat that a key issue in a job is not just the content, but who you interact with. Eg a graduate-student job in a lab can be very interesting even if the work is mindless, because of the people you get to interact with.

Comment by Stuart_Armstrong on Dangerous optimisation includes variance minimisation · 2021-07-16T18:27:19.372Z · LW · GW

This is a variant of my old question:

  • There is a button at your table. If you press it, it will give you absolute power. Do you press it?

Most people answer no. Followed by:

  • Hitler is sitting at the same table, and is looking at the button. Now do you press it?

Comment by Stuart_Armstrong on The SIA population update can be surprisingly small · 2021-07-15T09:01:02.573Z · LW · GW

Nope, that's not the model. Your initial expected population is . After the anthropic update, your probabilities of being in the boxes are , , and (roughly , , and ). The expected population, however, is . That's an expected population update of 3.27 times.

Note that, in this instance, the expected population update and the probability update are roughly equivalent, but that need not be the case. Eg if your prior odds are about the population being , , or , then the expected population is roughly , the anthropic-updated odds are , and the updated expected population is roughly . So the probability boost to the larger population is roughly (, but the boost to the expected population is roughly .
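
To see how these two boosts can come apart, here's a minimal sketch with made-up numbers (not the ones from the post): an SIA-style update multiplies each hypothesis's prior by its population and renormalises, and the boost to the probability of the largest population need not match the boost to the expected population.

```python
# Hypothetical populations and priors, purely for illustration.
populations = [1, 10, 100]
priors = [0.1, 0.8, 0.1]

def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

# SIA: weight each hypothesis by the number of observers it contains.
posteriors = normalise([p * n for p, n in zip(priors, populations)])

prior_expected = sum(p * n for p, n in zip(priors, populations))
posterior_expected = sum(p * n for p, n in zip(posteriors, populations))

print(posteriors[-1] / priors[-1])           # probability boost to the largest population, ~5.5
print(posterior_expected / prior_expected)   # boost to the expected population, ~3.3
```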

Comment by Stuart_Armstrong on The SIA population update can be surprisingly small · 2021-07-13T16:36:25.156Z · LW · GW

Anthropic updates do not increase the probability of life in general; they increase the probability of you existing specifically (which, since you've observed many other humans and heard about a lot more, is roughly the same as the probability of any current human existing), and this might have indirect effects on life in general.

So they do not distinguish between "simple life is very hard, but getting from that to human-level life is very easy" and "simple life is very easy, but getting from that to human-level life is very hard". So panspermia remains at its prior, relative to other theories of the same type (see here).

However, panspermia gets a boost from the universe seeming empty, as some versions of panspermia would make humans unexpectedly early (since panspermia needs more time to get going); this means that these theories avoid the penalty from the universe seeming empty, a much larger effect than the anthropic update (see here).

Comment by Stuart_Armstrong on Anthropic decision theory for self-locating beliefs · 2021-07-13T10:40:53.046Z · LW · GW

Yep.

Comment by Stuart_Armstrong on The SIA population update can be surprisingly small · 2021-07-13T10:37:04.585Z · LW · GW

Yep. Though I've found that, in most situations, the observation "we don't see anyone" has a much stronger effect than the anthropic update. It's not always exactly comparable, as anthropic updates are "multiply by and renormalise", while observing no-one is "multiply by and renormalise" - but generally I find the second effect to be much stronger.
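
Here's a rough sketch of that comparison, with made-up numbers; I'm reading the anthropic update as weighting each hypothesis by how likely it makes our own existence, and the "we don't see anyone" update as an ordinary likelihood update, so both the reading and the figures are just illustrative assumptions.

```python
def normalise(weights):
    total = sum(weights)
    return [w / total for w in weights]

# Two toy hypotheses about how common civilisations are.
names = ["life is rare", "life is common"]
priors = [0.5, 0.5]
p_we_exist = [1e-3, 1e-1]    # made-up chance each hypothesis gives to observers like us arising
p_empty_sky = [0.9, 1e-4]    # made-up chance we'd see no-one at all

# Anthropic update: weight by how likely each hypothesis makes our existence.
anthropic = normalise([p * q for p, q in zip(priors, p_we_exist)])

# Then update on the observation that we see no-one.
both = normalise([p * q for p, q in zip(anthropic, p_empty_sky)])

print(dict(zip(names, anthropic)))   # "life is common" dominates after the anthropic update
print(dict(zip(names, both)))        # the silent sky swings it strongly back to "life is rare"
```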

Comment by Stuart_Armstrong on The SIA population update can be surprisingly small · 2021-07-09T16:30:09.595Z · LW · GW

I adapted the presumptuous philosopher for densities, because we'd been using densities in the rest of the post. The argument works for total population as well, moving from an average population of (for some ) to an average population of roughly .

Comment by Stuart_Armstrong on Practical anthropics summary · 2021-07-09T16:26:31.427Z · LW · GW

If there are no exact duplicates, FNC=SIA whatever the reference class is.

Comment by Stuart_Armstrong on The SIA population update can be surprisingly small · 2021-07-08T11:23:08.930Z · LW · GW

Thanks!

Comment by Stuart_Armstrong on Non-poisonous cake: anthropic updates are normal · 2021-06-19T21:43:39.326Z · LW · GW

More SIAish for conventional anthropic problems. Other theories are more applicable for more specific situations, specific questions, and for duplicate issues.

Comment by Stuart_Armstrong on Non-poisonous cake: anthropic updates are normal · 2021-06-19T14:16:56.705Z · LW · GW

Thanks! The typo is now corrected.

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-14T12:23:14.181Z · LW · GW

Cheers, these are useful classifications.

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-10T11:31:57.229Z · LW · GW

Almost equally hard to define. You just need to define , which, by assumption, is easy.

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-09T10:25:51.288Z · LW · GW

By Goodhart's law, this set has the property that any will with probability 1 be uncorrelated with outside the observed domain.

If we have a collection of variables , and , then is positively correlated in practice with most expressed simply in terms of the variables.

I've seen Goodhart's law as an observation or a fact of human society - you seem to have a mathematical version of it in mind. Is there a reference for that?

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-09T10:17:11.832Z · LW · GW

It seems I didn't articulate my point clearly. What I was saying is that V and V' are equally hard to define, yet we all assume that true human values have a Goodhart problem (rather than a reverse Goodhart problem). This can't be because of the complexity (since the complexity is equal) nor because we are maximising a proxy (because both have the same proxy).

So there is something specific about (our knowledge of) human values which causes us to expect Goodhart problems rather than reverse Goodhart problems. It's not too hard to think of plausible explanations (fragility of value can be re-expressed in terms of simple underlying variables to get results like this), but it does need explaining. And it might not always be valid (eg if we used different underlying variables, such as the smooth-mins of the ones we previously used, then fragility of value and Goodhart effects are much weaker), so we may need to worry about them less in some circumstances.

Comment by Stuart_Armstrong on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-09T10:03:08.212Z · LW · GW

Thanks for writing this.

For myself, I know that power dynamics are important, but I've chosen to specialise in the "solve the technical alignment problem towards a single entity" part, and to leave those multi-agent concerns to others (eg the GovAI part of the FHI), except when they ask for advice.

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-08T19:32:52.454Z · LW · GW

V and V' are symmetric; indeed, you can define V as 2U-V'. Given U, they are as well defined as each other.

Comment by Stuart_Armstrong on The reverse Goodhart problem · 2021-06-08T17:21:12.142Z · LW · GW

The idea that maximising the proxy will inevitably end up reducing the true utility seems a strong implicit part of Goodharting the way it's used in practice.

After all, if the deviation is upwards, Goodharting is far less of a problem. It's "suboptimal improvement" rather than "inevitable disaster".

Comment by Stuart_Armstrong on SIA is basically just Bayesian updating on existence · 2021-06-07T09:48:19.692Z · LW · GW

Ah, understood. And I think I agree.

Comment by Stuart_Armstrong on SIA is basically just Bayesian updating on existence · 2021-06-06T14:07:12.581Z · LW · GW

SIA is the Bayesian update on knowing your existence (ie if they were always going to ask if dadadarren existed, and get a yes or no answer). The other effects come from issues like "how did they learn of your existence, and what else could they have learnt instead?" This often does change the impact of learning facts, but that's not a specifically anthropics problem.

Comment by Stuart_Armstrong on SIA is basically just Bayesian updating on existence · 2021-06-04T15:35:54.773Z · LW · GW

Depends; when you constructed your priors, did you already take that fact into explicit account? You can "know" things, but not have taken them into account.

Comment by Stuart_Armstrong on Anthropics: different probabilities, different questions · 2021-06-04T11:29:33.735Z · LW · GW

So regardless of how we describe the difference between T1 and T2, SIA will definitely think that T1 is a lot more likely once we start colonising space, if we ever do that.

SIA isn't needed for that; standard probability theory will be enough (as our becoming grabby is evidence that grabbiness is easier than expected, and vice-versa).

I think there's a confusion with SIA and reference classes and so on. If there are no other exact copies of me, then SIA is just standard Bayesian update on the fact that I exist. If theory T_i has prior probability p_i and gives a probability q_i of me existing, then SIA changes its probability to q_i*p_i (and renormalises).

Effects that increase the expected number of other humans, other observers, etc... are indirect consequences of this update. So a theory that says life in general is easy also says that me existing is easy, so gets boosted. But "Earth is special" theories get boosted as well: if a theory claims life is very easy, but only on Earth-like planets, then it too gets boosted.
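
As a minimal sketch of that update (with placeholder theories and numbers):

```python
def sia_update(priors, prob_i_exist):
    # Theory T_i has prior p_i and gives probability q_i of me existing;
    # SIA multiplies the two and renormalises.
    weights = [p * q for p, q in zip(priors, prob_i_exist)]
    total = sum(weights)
    return [w / total for w in weights]

# Placeholder theories: "life is easy everywhere", "life is easy, but only
# on Earth-like planets", and "life is hard". The first two both make my
# existence likely, so both get boosted relative to the third.
priors = [0.2, 0.3, 0.5]
prob_i_exist = [0.9, 0.9, 0.01]

print(sia_update(priors, prob_i_exist))   # roughly [0.40, 0.59, 0.01]
```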

Comment by Stuart_Armstrong on Anthropics: different probabilities, different questions · 2021-06-04T11:20:27.874Z · LW · GW

In this process, I never have to consider "this awakening" as a member of any reference class. Do you think "keeping the score" this way invalid?

Different ways of keeping the score give different answers. So, no, I don't think that's invalid.

Comment by Stuart_Armstrong on Anthropics: different probabilities, different questions · 2021-06-02T12:41:29.420Z · LW · GW

In the classical sleeping beauty problem, if I guess the coin was tails, I will be correct in 50% of the experiments, and in 67% of my guesses. Whether you score by "experiments" or by "guesses" gives a different optimal performance.
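
A quick simulation of that scoring difference, assuming the standard setup (one awakening on heads, two on tails, and always guessing tails):

```python
import random

experiments = 100_000
correct_experiments = 0
correct_guesses = 0
total_guesses = 0

for _ in range(experiments):
    tails = random.random() < 0.5
    awakenings = 2 if tails else 1     # woken twice on tails, once on heads
    total_guesses += awakenings
    if tails:                          # Beauty guesses "tails" at every awakening
        correct_experiments += 1
        correct_guesses += awakenings

print(correct_experiments / experiments)   # ~0.5: correct in 50% of experiments
print(correct_guesses / total_guesses)     # ~0.67: correct in 67% of guesses
```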

Comment by Stuart_Armstrong on Anthropics: different probabilities, different questions · 2021-06-02T12:36:23.365Z · LW · GW

I didn't fully define those theories, and, indeed, if they depended on commonness of life, then SIA would prefer .

But if I posited instead that and differ only in the propensity for aliens to become grabby or not, then SIA would indeed be indifferent between them.

Comment by Stuart_Armstrong on The blue-minimising robot and model splintering · 2021-05-28T19:27:13.277Z · LW · GW

That is the aim. It's easy to program an AI that doesn't care too much about the reward signal - the trick is to make it not care in a specific way, one that aligns it with our preferences.

Eg, what would you do if you had been told to maximise some goal, but also told that your reward signal would be corrupted and over-simplified? You can start doing some things in that situation to maximise your chance of not wireheading; I want to program the AI to do something similar.

Comment by Stuart_Armstrong on Covid 5/27: The Final Countdown · 2021-05-28T08:54:48.342Z · LW · GW

Yep, that seems to be right. One minor caveat; instead of

it is often reasonable to expect that part of what future evidence can tell us is already included in these updates.

I'd say something like:

"Past evidence affects how we interpret future evidence, sometimes weakening its impact."

Thinking of the untrustworthy witness example, I wouldn't say that "the witness's testimony is already included in the fact that they are untrustworthy" (="part of B' already included in B"), but I would say "the fact they are untrustworthy affects how we interpret their testimony" (="B affects how we interpret B' ").

But that's a minor caveat.

Comment by Stuart_Armstrong on Covid 5/27: The Final Countdown · 2021-05-27T21:53:03.983Z · LW · GW

Imagine you have a coin of unknown bias (taken to be uniform on [0,1]).

If you flip this coin and get a heads (an event of initial probability 1/2), you update the prior strongly and your probability of heads on the next flip is 2/3.

Now suppose instead you have already flipped the coin two million times, and got a million heads and a million tails. The probability of heads on the next flip is still 1/2; however, you will barely update on that, and the probability of another heads after that is barely above 1/2[1].

In the first case you have no evidence either way, in the second case you have strong evidence either way, and so things update less.
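
As a minimal sketch of those two cases (the uniform prior is a Beta(1,1), so the posterior predictive is just Laplace's rule of succession):

```python
def p_next_heads(heads, flips):
    # Posterior predictive for a coin with a uniform Beta(1,1) prior on its bias.
    return (heads + 1) / (flips + 2)

print(p_next_heads(0, 0))   # 0.5   - no evidence yet
print(p_next_heads(1, 1))   # 0.667 - a single heads moves the estimate a lot

n = 1_000_000
print(p_next_heads(n, 2 * n))          # 0.5 - a million heads and a million tails
print(p_next_heads(n + 1, 2 * n + 1))  # barely above 0.5 - one more heads barely moves it
```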

In terms of odds ratios, let H be your hypothesis (with negation ¬H), B your past observation, and B' your future observation.

Then O(H|B',B) = P(B'|H,B) / P(B'|¬H,B) * O(H|B).

The Bayes factor is P(B'|H,B) / P(B'|¬H,B). If you've made a lot of observations in B, then this odds ratio might be close to 1. It's not the same thing as P(B'|H) / P(B'|¬H), which might be very different from 1. Why? Because P(B'|H,B) / P(B'|¬H,B) measures how likely B' is, given H and B versus how likely it is, given ¬H and B. The B might completely screen off the effect of H versus ¬H.

In a court case, for example, if you've already established a witness is untrustworthy (B), then their claims (B') have little weight, and are pretty independent of guilt or not (H vs ¬H) - even if the claims would have weight if you didn't know their trustworthiness.

Note you can still get massive updates if B' is pretty independent of B. So if someone brings in camera footage of the crime, that has no connection with the previous witness's trustworthiness, and can throw the odds strongly in one direction or another (in equation, independence means that P(B'|H,B) / P(B'|¬H,B) = P(B'|H) / P(B'|¬H)).
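
Here's a made-up numerical version of the witness example, with probabilities chosen so that B (the witness being untrustworthy) screens off most of the effect of H on B' (their claim):

```python
# Probability that the witness claims "I saw the defendant do it" (B'),
# under each combination of guilt (H vs ¬H) and trustworthiness.
# All numbers are purely illustrative.
p_claim = {
    ("guilty", "trustworthy"): 0.80,
    ("innocent", "trustworthy"): 0.10,
    ("guilty", "untrustworthy"): 0.50,   # an untrustworthy witness says whatever
    ("innocent", "untrustworthy"): 0.48,
}
p_trustworthy = 0.5   # before we learn anything about the witness

# Not yet knowing trustworthiness, the claim carries real weight:
p_claim_guilty = (p_trustworthy * p_claim[("guilty", "trustworthy")]
                  + (1 - p_trustworthy) * p_claim[("guilty", "untrustworthy")])
p_claim_innocent = (p_trustworthy * p_claim[("innocent", "trustworthy")]
                    + (1 - p_trustworthy) * p_claim[("innocent", "untrustworthy")])
print(p_claim_guilty / p_claim_innocent)   # Bayes factor ~2.2

# Having already established untrustworthiness (B), the same claim is nearly worthless:
print(p_claim[("guilty", "untrustworthy")] / p_claim[("innocent", "untrustworthy")])   # Bayes factor ~1.04
```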

So:

At this point, I think I am somewhat below Nate Silver’s 60% odds that the virus escaped from the lab, and put myself at about 40%, but I haven’t looked carefully and this probability is weakly held.

This means that they expect it's quite likely that there is evidence out there that could change their mind (which makes sense, as they haven't looked carefully). They would have a strongly held probability if they had looked at all the available evidence and converged on 40% after weighing it all up; in that case it would be unlikely that there was anything major they had missed, so they wouldn't expect anything new to change their estimate much.


  1. It's , I believe. ↩︎

Comment by Stuart_Armstrong on Introduction To The Infra-Bayesianism Sequence · 2021-05-24T21:27:29.691Z · LW · GW

I want a formalism capable of modelling and imitating how humans handle these situations, and we don't usually have dynamic consistency (nor do boundedly rational agents).

Now, I don't want to weaken requirements "just because", but it may be that dynamic consistency is too strong a requirement to properly model what's going on. It's also useful to have AIs model human changes of morality, to figure out what humans count as values, so getting closer to human reasoning would be necessary.

Comment by Stuart_Armstrong on Introduction To The Infra-Bayesianism Sequence · 2021-05-13T10:48:01.804Z · LW · GW

Hum... how about seeing enforcement of dynamic consistency as having a complexity/computation cost, and Dutch books (by other agents or by the environment) providing incentives to pay the cost? And hence the absence of these Dutch books meaning there is little incentive to pay that cost?

Comment by Stuart_Armstrong on Introduction To The Infra-Bayesianism Sequence · 2021-05-12T22:38:09.237Z · LW · GW

Desideratum 1: There should be a sensible notion of what it means to update a set of environments or a set of distributions, which should also give us dynamic consistency.

I'm not sure how important dynamic consistency should be. When I talk about model splintering, I'm thinking of a bounded agent making fundamental changes to their model (though possibly gradually), a process that is essentially irreversible and contingent on the circumstances of discovering new scenarios. The strongest arguments for dynamic consistency are the Dutch-book type arguments, which depend on returning to a scenario very similar to the starting scenario, and these seem absent from model splintering as I'm imagining it.

Now, adding dynamic inconsistency is not useful; it just seems that removing all of it (especially for a bounded agent) isn't worth the effort.

Is there some form of "not lose too much utility to dynamic inconsistency" requirement that could be formalised?

Comment by Stuart_Armstrong on Human priors, features and models, languages, and Solomonoff induction · 2021-05-11T08:17:07.686Z · LW · GW

For real humans, I think this is a more gradual process - they learn and use some distinctions, and forget others, until their mental models are quite different a few years down the line.

The splintering can happen when a single feature splinters; it doesn't have to be dramatic.

Comment by Stuart_Armstrong on MIRI location optimization (and related topics) discussion · 2021-05-10T12:20:45.334Z · LW · GW

If you're willing to explore beyond the US, there are things like this: https://search.savills.com/property-detail/gbedruedr190005

Beautiful (both for living in and for the website photos), with lots of space for offices and accommodation.

Possibly high upkeep costs, though.

Comment by Stuart_Armstrong on Which counterfactuals should an AI follow? · 2021-04-08T10:18:17.143Z · LW · GW

I like the subagent approach there.

Comment by Stuart_Armstrong on Counterfactual control incentives · 2021-04-05T18:05:13.218Z · LW · GW

Thanks. I think we mainly agree here.

Comment by Stuart_Armstrong on Preferences and biases, the information argument · 2021-03-24T07:31:41.903Z · LW · GW

No. But I expect that it would be much more in the right ballpark than other approaches, and I think it might be refined to be correct.

Comment by Stuart_Armstrong on Preferences and biases, the information argument · 2021-03-24T07:30:52.389Z · LW · GW

Look at the paper linked for more details ( https://arxiv.org/abs/1712.05812 ).

Basically "humans are always fully rational and always take the action they want to" is a full explanation of all of human behaviour, that is strictly simpler than any explanation which includes human biases and bounded rationality.

Comment by Stuart_Armstrong on Why sigmoids are so hard to predict · 2021-03-19T15:55:12.821Z · LW · GW

"Aha! We seem to be past the inflection point!"

It's generally possible to see where the inflection point is, when we're past it.
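
As a small illustration with a hypothetical logistic curve: the inflection point is where the increments peak, so before we've passed it the fastest observed growth is always just the most recent data point, and only afterwards does the estimate settle on the true location.

```python
import math

def logistic(t, ceiling=1000.0, rate=0.1, t_mid=50.0):
    return ceiling / (1.0 + math.exp(-rate * (t - t_mid)))

def apparent_inflection(values):
    # Index of the largest single-step increment seen so far.
    increments = [b - a for a, b in zip(values, values[1:])]
    return increments.index(max(increments))

full_curve = [logistic(t) for t in range(100)]

for cutoff in (30, 45, 60, 90):   # how much of the curve we have observed
    seen = full_curve[:cutoff]
    print(cutoff, apparent_inflection(seen))
# Before t_mid=50 the apparent inflection is just the latest increment
# (index 28, then 43); once we are past it, it settles near the true
# inflection at t=50.
```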

Comment by Stuart_Armstrong on Why sigmoids are so hard to predict · 2021-03-19T12:00:22.603Z · LW · GW

Possibly. What would be the equivalent of a dampening term for a superexponential? A further growth term?

Comment by Stuart_Armstrong on Model splintering: moving from one imperfect model to another · 2021-03-03T13:16:19.175Z · LW · GW

But if you are expecting a 100% guarantee that the uncertainty metrics will detect every possible bad situation

I'm more thinking of how we could automate the navigating of these situations. The detection will be part of this process, and it's not a Boolean yes/no, but a matter of degree.

Comment by Stuart_Armstrong on Model splintering: moving from one imperfect model to another · 2021-02-24T22:28:38.186Z · LW · GW

I agree that once you have landed in the bad situation, mitigation options might be much the same, e.g. switch off the agent.

I'm most interested in mitigation options the agent can take itself, when it suspects it's out-of-distribution (and without being turned off, ideally).

Comment by Stuart_Armstrong on Model splintering: moving from one imperfect model to another · 2021-02-24T08:00:43.566Z · LW · GW

Thanks! Lots of useful insights in there.

So I might classify moving out-of-distribution as something that happens to a classifier or agent, and model splintering as something that the machine learning system does to itself.

Why do you think it's important to distinguish these two situations? It seems that the insights for dealing with one situation may apply to the other, and vice versa.

Comment by Stuart_Armstrong on Generalised models as a category · 2021-02-19T15:30:07.142Z · LW · GW

Cheers! My opinion on category theory has changed a bit, because of this post; by making things fit into the category formulation, I developed insights into how general relations could be used to connect different generalised models.

Comment by Stuart_Armstrong on Generalised models as a category · 2021-02-17T10:25:41.430Z · LW · GW

Thanks! Corrected both of those; is a subset of .

Comment by Stuart_Armstrong on Stuart_Armstrong's Shortform · 2021-02-17T09:31:40.649Z · LW · GW

Thanks! That's useful to know.

Comment by Stuart_Armstrong on Introduction to Cartesian Frames · 2021-02-16T16:24:59.921Z · LW · GW

I did posts on generalised models as a category and on how one can see Cartesian frames as generalised models.