Posts

Theory and Data as Constraints 2020-02-21T22:00:00.783Z · score: 28 (6 votes)
Exercises in Comprehensive Information Gathering 2020-02-15T17:27:19.753Z · score: 69 (27 votes)
Demons in Imperfect Search 2020-02-11T20:25:19.655Z · score: 63 (18 votes)
Category Theory Without The Baggage 2020-02-03T20:03:13.586Z · score: 95 (32 votes)
What Money Cannot Buy 2020-02-01T20:11:05.090Z · score: 170 (73 votes)
Algorithms vs Compute 2020-01-28T17:34:31.795Z · score: 28 (6 votes)
Coordination as a Scarce Resource 2020-01-25T23:32:36.309Z · score: 94 (29 votes)
Material Goods as an Abundant Resource 2020-01-25T23:23:14.489Z · score: 57 (21 votes)
Constraints & Slackness as a Worldview Generator 2020-01-25T23:18:54.562Z · score: 28 (11 votes)
Technology Changes Constraints 2020-01-25T23:13:17.428Z · score: 70 (26 votes)
Theory of Causal Models with Dynamic Structure? 2020-01-23T19:47:22.825Z · score: 25 (5 votes)
Formulating Reductive Agency in Causal Models 2020-01-23T17:03:44.758Z · score: 28 (6 votes)
(A -> B) -> A in Causal DAGs 2020-01-22T18:22:28.791Z · score: 33 (8 votes)
Logical Representation of Causal Models 2020-01-21T20:04:54.218Z · score: 30 (7 votes)
Use-cases for computations, other than running them? 2020-01-19T20:52:01.756Z · score: 29 (9 votes)
Example: Markov Chain 2020-01-10T20:19:31.309Z · score: 15 (4 votes)
How to Throw Away Information in Causal DAGs 2020-01-08T02:40:05.489Z · score: 15 (2 votes)
Definitions of Causal Abstraction: Reviewing Beckers & Halpern 2020-01-07T00:03:42.902Z · score: 20 (5 votes)
Homeostasis and “Root Causes” in Aging 2020-01-05T18:43:33.038Z · score: 43 (19 votes)
Humans Are Embedded Agents Too 2019-12-23T19:21:15.663Z · score: 74 (20 votes)
Causal Abstraction Intro 2019-12-19T22:01:46.140Z · score: 23 (6 votes)
Abstraction, Causality, and Embedded Maps: Here Be Monsters 2019-12-18T20:25:04.584Z · score: 25 (7 votes)
Is Causality in the Map or the Territory? 2019-12-17T23:19:24.301Z · score: 23 (11 votes)
Examples of Causal Abstraction 2019-12-12T22:54:43.565Z · score: 21 (5 votes)
Causal Abstraction Toy Model: Medical Sensor 2019-12-11T21:12:50.845Z · score: 30 (10 votes)
Applications of Economic Models to Physiology? 2019-12-10T18:09:43.494Z · score: 38 (8 votes)
What is Abstraction? 2019-12-06T20:30:03.849Z · score: 26 (7 votes)
Paper-Reading for Gears 2019-12-04T21:02:56.316Z · score: 116 (38 votes)
Gears-Level Models are Capital Investments 2019-11-22T22:41:52.943Z · score: 89 (33 votes)
Wrinkles 2019-11-19T22:59:30.989Z · score: 64 (24 votes)
Evolution of Modularity 2019-11-14T06:49:04.112Z · score: 83 (29 votes)
Book Review: Design Principles of Biological Circuits 2019-11-05T06:49:58.329Z · score: 126 (52 votes)
Characterizing Real-World Agents as a Research Meta-Strategy 2019-10-08T15:32:27.896Z · score: 27 (10 votes)
What funding sources exist for technical AI safety research? 2019-10-01T15:30:08.149Z · score: 24 (8 votes)
Gears vs Behavior 2019-09-19T06:50:42.379Z · score: 53 (19 votes)
Theory of Ideal Agents, or of Existing Agents? 2019-09-13T17:38:27.187Z · score: 16 (8 votes)
How to Throw Away Information 2019-09-05T21:10:06.609Z · score: 20 (7 votes)
Probability as Minimal Map 2019-09-01T19:19:56.696Z · score: 45 (14 votes)
The Missing Math of Map-Making 2019-08-28T21:18:25.298Z · score: 33 (16 votes)
Don't Pull a Broken Chain 2019-08-28T01:21:37.622Z · score: 27 (13 votes)
Cartographic Processes 2019-08-27T20:02:45.263Z · score: 23 (8 votes)
Embedded Agency via Abstraction 2019-08-26T23:03:49.989Z · score: 34 (13 votes)
Time Travel, AI and Transparent Newcomb 2019-08-22T22:04:55.908Z · score: 12 (7 votes)
Embedded Naive Bayes 2019-08-22T21:40:05.972Z · score: 15 (6 votes)
Computational Model: Causal Diagrams with Symmetry 2019-08-22T17:54:11.274Z · score: 42 (16 votes)
Markets are Universal for Logical Induction 2019-08-22T06:44:56.532Z · score: 67 (27 votes)
Why Subagents? 2019-08-01T22:17:26.415Z · score: 111 (36 votes)
Compilers/PLs book recommendation? 2019-07-28T15:49:17.570Z · score: 10 (4 votes)
Results of LW Technical Background Survey 2019-07-26T17:33:01.999Z · score: 43 (15 votes)
Cross-Validation vs Bayesian Model Comparison 2019-07-21T18:14:34.207Z · score: 21 (7 votes)

Comments

Comment by johnswentworth on Tessellating Hills: a toy model for demons in imperfect search · 2020-02-21T00:37:29.187Z · score: 11 (3 votes) · LW · GW
I don't see why the gradient with respect to x0 ever changes, and so am confused about why it would ever stop increasing in the x0 direction.

Looks like the splotch functions are each a random mixture of sinusoids in each direction - so each will have some variation along x0. The argument of each splotch function is all of x, not just the other coordinates.
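
For concreteness, here's a minimal sketch in the same spirit (a toy objective of my own, not the exact function from the post; the dimensions, frequencies, and amplitudes are made up): each splotch is a random mixture of sinusoids whose argument is the whole vector x, so the splotches add a position-dependent term to the x0-component of the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_splotch = 10, 20

# Each splotch is a random mixture of sinusoids over *all* coordinates,
# so its gradient with respect to x0 shifts as the other coordinates move.
freqs = rng.normal(size=(n_splotch, dim))
phases = rng.uniform(0, 2 * np.pi, size=n_splotch)
amps = rng.normal(scale=0.1, size=n_splotch)

def loss(x):
    # Linear "progress" term in x0, plus the splotch terms.
    return -x[0] + np.sum(amps * np.sin(freqs @ x + phases))

def grad(x):
    splotch_term = amps * np.cos(freqs @ x + phases)
    return np.concatenate(([-1.0], np.zeros(dim - 1))) + freqs.T @ splotch_term

# Plain gradient descent: the x0-component of the gradient is -1 plus a
# contribution from the splotches that depends on where we currently are.
x = np.zeros(dim)
for _ in range(1000):
    x -= 0.01 * grad(x)
```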

Comment by johnswentworth on Tessellating Hills: a toy model for demons in imperfect search · 2020-02-20T17:55:20.756Z · score: 8 (4 votes) · LW · GW

Very nice work. The graphs in particular are quite striking.

I sat down and thought for a bit about whether that objective function is actually a good model for the behavior we're interested in. Twice I thought I saw an issue, then looked back at the definition and realized you'd set up the function to avoid that issue. Solid execution; I think you have actually constructed a demonic environment.

Comment by johnswentworth on A 'Practice of Rationality' Sequence? · 2020-02-18T07:42:34.736Z · score: 8 (4 votes) · LW · GW

Re: winning, I was recently thinking about how to explain what my own goals are for which rationality is a key tool. One catch phrase I like is: source code access.

Here’s the idea: imagine that our whole world is a video game, and we’re all characters in it. This can mean the physical world, the economic world, the social world, all of the above, etc. My goal is to be able to read and modify the source code of the game.

That formulation makes the role of epistemic rationality quite central: we're all agents embedded in this universe, we already have access to the source code of economic/social/other systems, the problem is that we don't understand the code well enough to know what changes will have what effects.

Comment by johnswentworth on [deleted post] 2020-02-18T07:34:09.634Z

Imagine a search algorithm that finds local minima, similar to gradient descent, but has faster big-O performance than gradient descent. (For instance, an efficient roughly-n^2 matrix multiplication algorithm would likely yield such a thing, by making true Newton steps tractable on large systems - assuming it played well with sparsity.) That would be a general efficiency gain, and would likely stem from some sudden theoretical breakthrough (e.g. on fast matrix multiplication). And it is exactly the sort of thing which tends to come from a single person/team - the gradual theoretical progress we've seen on matrix multiplication is not the kind of breakthrough which makes the whole thing practical; people generally think we're missing some key idea which will make the problem tractable.
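
To make the Newton-step point concrete, here's a minimal sketch (a toy quadratic of my own, not tied to any particular proposal): the per-step cost of a true Newton step is dominated by a dense linear solve, which is exactly the piece that asymptotically faster matrix multiplication (or good use of sparsity) would attack.

```python
import numpy as np

def gradient_step(x, grad, lr=0.1):
    # O(n) per step once the gradient is in hand.
    return x - lr * grad(x)

def newton_step(x, grad, hess):
    # Dominated by solving H d = g: O(n^3) with standard dense linear algebra.
    # This is the piece a roughly-n^2 matrix multiplication would speed up.
    d = np.linalg.solve(hess(x), grad(x))
    return x - d

# Toy quadratic f(x) = 0.5 x'Ax - b'x, purely for illustration.
n = 50
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = np.random.rand(n)
grad = lambda x: A @ x - b
hess = lambda x: A

x = newton_step(np.zeros(n), grad, hess)   # exact minimizer in one step for a quadratic
```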

Comment by johnswentworth on [deleted post] 2020-02-18T06:54:47.908Z
Suppose someone told you that they had an ingenious idea for a new algorithm that would classify images with identical performance to CNNs, but with 1% the overhead memory costs. They explain that CNNs are using memory extremely inefficiently; image classification has a simple core, and when you discover this core, you can radically increase the efficiency of your system. If someone said this, what would your reaction be?

My reaction would be "sure, that sounds like exactly the sort of thing that happens from time to time". In fact, if you replace the word "memory" with either "data" or "compute", then this has already happened with the advent of transformer architectures just within the past few years, on the training side of things.

Reducing costs for some use-case (compute, data, memory, whatever) by multiple orders of magnitude is the default thing I expect to happen when someone comes up with an interesting new algorithm. One such algorithm was backpropagation. CNNs themselves were another. It shouldn't be surprising at this point.

And search? You really want to tell me that there aren't faster reasonably-general-purpose search algorithms (i.e. about as general as backprop + gradient descent) awaiting discovery? Or that faster reasonably-general-purpose search algorithms wouldn't lead to a rapid jump in AI/ML capabilities?

Comment by johnswentworth on Exercises in Comprehensive Information Gathering · 2020-02-16T18:04:34.071Z · score: 13 (4 votes) · LW · GW
Given how inexpensive and useful it is to do this, why do so few people do it?

I actually considered putting a paragraph on this in the OP. I think we're currently in a transitional state - prior to the internet, it would have been far more expensive to conduct this sort of exercise. People haven't had much time to figure out how to get lots of value out of the internet, and this is one example which I expect will become more popular over time.

Comment by johnswentworth on A 'Practice of Rationality' Sequence? · 2020-02-15T23:45:20.169Z · score: 42 (9 votes) · LW · GW

I think we're overdue for a general overhaul of "applied epistemic rationality".

Superforecasting and adjacent skills were, in retrospect, the wrong places to put the bulk of the focus. General epistemic hygiene is a necessary foundational element, but predictive power is only one piece of what makes a model useful. It's a necessary condition, not a sufficient one.

Personally, I expect/hope that the next generation of applied rationality will be more explicitly centered around gears-level models. The goal of epistemic rationality 2.0 will be, not just a predictively-accurate model, but an accurate gears-level understanding.

I've been trying to push in this direction for a few months now. Gears vs Behavior talked about why we want gears-level models rather than generic predictively-powerful models. Gears-Level Models are Capital Investments talked more about the tradeoffs involved. And a bunch of posts showed how to build gears-level models in various contexts.

Some differences I expect compared to prediction-focused epistemic rationality:

  • Much more focus on the object level. A lot of predictive power comes from general outside-view knowledge about biases and uncertainty; gears-level model-building benefits much more from knowing a whole lot about the gears of a very wide variety of systems in the world.
  • Much more focus on causality, rather than just correlations and extrapolations.
  • Less outsourcing of knowledge/thinking to experts, but much more effort trying to extract experts' models, and to figure out where the models came from and how reliable the model-sources are.
Comment by johnswentworth on Simulation of technological progress (work in progress) · 2020-02-12T18:38:06.255Z · score: 3 (2 votes) · LW · GW

This was an interesting post - it got me thinking a bit about the right way to represent "technology" in a mathematical model.

I think I have a pretty solid qualitative understanding of how technology changes impact economic production - constraints are the right representation for that. But it's not clear how that feeds back into further technological development. What qualitative model structure captures the key aspects of recursive technological progress?

A few possible threads to pull on:

  • Throwing economic resources at research often yields technological progress, but what's the distribution of progress yielded by this?
  • Some targeted, incremental research is aimed at small changes to parameters of production constraints - e.g. cutting the amount of some input required for some product by 10%. That sort of thing slots nicely into the constraints framework (see the toy sketch after this list), and presumably throwing more resources at research will result in more incremental progress (though it's not clear how quickly marginal returns decrease/increase with research investments).
  • There are often underlying constraints to technologies themselves - i.e. physical constraints. It feels like there should be an elegant way to represent these in production-space, via duality (i.e. constraints on production are dual to production, so constraints on the constraints should be in production space).
  • Related: in cases of "discrete" technological progress, it feels like there's usually an underlying constraint on a broad class of technologies. So representing constraints-on-constraints is important to capturing jumps in progress.
  • If there are production constraints and constraints on the constraints, presumably we could go even more meta, but at the moment I can't think of any useful meaning to higher meta-levels.
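
Here's a toy sketch of the incremental-research bullet above (made-up numbers, and it assumes scipy is available): production as a linear program, where cutting an input requirement by 10% is just a changed coefficient in one constraint, and it raises output exactly when that constraint is the taut one.

```python
# Toy production model: maximize widgets subject to steel and labor constraints.
# "Incremental research" = reducing the steel required per widget by 10%.
from scipy.optimize import linprog

def max_widgets(steel_per_widget):
    c = [-1.0]                        # linprog minimizes, so minimize -widgets
    A_ub = [[steel_per_widget],       # steel used  <= 100 units available
            [2.0]]                    # labor used  <= 300 units available
    b_ub = [100.0, 300.0]
    return -linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)]).fun

before = max_widgets(steel_per_widget=1.0)   # steel is taut: 100 widgets
after = max_widgets(steel_per_widget=0.9)    # ~111 widgets; labor still slack
```
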
Comment by johnswentworth on Demons in Imperfect Search · 2020-02-12T18:05:09.595Z · score: 3 (2 votes) · LW · GW

In the ball example, it's the selection process that's interesting - the ball ending up rolling alongside one bump or another, and bumps "competing" in the sense that the ball will eventually end up rolling along at most one of them (assuming they run in different directions).

Couldn't you say a local minimum involves a secondary optimizing search process that has that minimum as its objective?

Only if such a search process is actually taking place. That's why it's key to look at the process, rather than the bumps and valleys themselves.

To use your ball analogy, what exactly is the difference between these twisty demon hills and a simple crater-shaped pit?

There isn't inherently any important difference between those two. That said, there are some environments in which "bumps" which effectively steer a ball will tend to continue to do so in the future, and other environments in which the whole surface is just noise with low spatial correlation. The latter would not give rise to demons (I think), while the former would. This is part of what I'm still confused about - what, quantitatively, are the properties of the environment necessary for demons to show up?

Does that help clarify, or should I take another stab at it?

Comment by johnswentworth on Demons in Imperfect Search · 2020-02-12T02:58:52.754Z · score: 2 (1 votes) · LW · GW

I expect this problem would show up in any less-than-perfect optimizer, including SA variants. Heck, the metabolic example is basically the physical system which SA was based on in the first place. But it would look different with different optimizers, mainly depending on what the optimizer "sees" and what's needed to "hide" information from it.

Comment by johnswentworth on Demons in Imperfect Search · 2020-02-12T02:54:06.312Z · score: 4 (2 votes) · LW · GW

I love the example, I'd never heard of that project before.

I'm agnostic on demonic intelligence. I think the key point is not the demons themselves but the process which produces them. Somehow, an imperfect optimizing search process induces a secondary optimizer, and it's that secondary optimizer which produces the demons. For instance, in the metabolism example, evolution is the secondary optimizer, and its goals are (often) directly opposed to the original optimizer - it wants to conserve free energy, in order to "trade" with the free energy optimizer later. The demons themselves (i.e. cells/enzymes in the metabolism example) are inner optimizers of the secondary optimizer; I expect that Risks From Learned Optimization already describes the secondary optimizer <-> demon relationship fairly well, including when the demons will be more/less intelligent.

The interesting/scary point is that the secondary optimizer is consistently opposed to the original optimizer; the two are basically playing a game where the secondary tries to hide information from the original.

Comment by johnswentworth on Demons in Imperfect Search · 2020-02-11T22:50:21.352Z · score: 6 (4 votes) · LW · GW

Updated the long paragraph in the fable a bit, hopefully that will help somewhat. It's hard to make it really concrete when I don't have a good mathematical description of how these things pop up; I'm not sure which aspects of the environment make it happen, so I don't know what to emphasize.

Comment by johnswentworth on Why do we refuse to take action claiming our impact would be too small? · 2020-02-11T00:15:36.116Z · score: 16 (5 votes) · LW · GW

Everything has an opportunity cost. I'd claim that when impact is very small, the action is almost never worth its opportunity cost. In general, one can have far more impact by focusing on one or two high-impact actions rather than spending the same aggregate time/effort on lots of little things.

Much more detail is in The Epsilon Fallacy; also see the comments on that post for some significant counterarguments.

(I'm definitely not claiming that the psychological mechanism by which people ignore small-impact actions is to think through all of this rationally. But I do think that people have basically-correct instincts in this regard, at least when political signalling is not involved.)

Comment by johnswentworth on What Money Cannot Buy · 2020-02-09T17:28:46.374Z · score: 5 (3 votes) · LW · GW

That is an awesome example, thank you!

It does still require some manipulation ability - we have to be able to experimentally intervene (at reasonable expense). That doesn't open up all possibilities, but it's at least a very large space. I'll have to chew on it some more.

Comment by johnswentworth on What Money Cannot Buy · 2020-02-08T17:21:04.244Z · score: 3 (2 votes) · LW · GW
The existence of problems whose answers are hard to verify does not entail that this verification is harder than finding the answer itself.

That's not quite the relevant question. The point of hiring an expert is that it's easier to outsource the answer-finding to the expert than to do it oneself; the relevant question is whether there are problems for which verification is not any easier than finding the answer. That's what I mean by "hard to verify" - questions for which we can't verify the answer any faster than we can find the answer.

I thought some more about the IP analogy yesterday. In many cases, the analogy just doesn't work - verifying claims about the real world (i.e. "I've never heard of a milkmaid who had cowpox later getting smallpox") or about human aesthetic tastes (i.e. "this car is ugly") is fundamentally different from verifying a computation; we can verify a computation without needing to go look at anything in the physical world. It does seem like there are probably use-cases for which the analogy works well enough to plausibly adapt IP-reduction algorithms to real-world expert-verification, but I do not currently have a clear example of such a use-case.

Comment by johnswentworth on What Money Cannot Buy · 2020-02-07T16:45:40.263Z · score: 3 (2 votes) · LW · GW

In CS, there are some problems whose answer is easier to verify than to create. The same is certainly true in the world in general - there are many objectives whose completion we can easily verify, and those are well-suited to outsourcing. But even in CS, there are also (believed to be) problems whose answer is hard to verify.

But the answer being hard to verify is different from a proof being hard to verify - perhaps the right analogy is not NP, but IP or some variant thereof.

This line of reasoning does suggest some interesting real-world strategies - in particular, we know that MIP = NEXPTIME, so quizzing multiple alleged experts in parallel (without allowing them to coordinate answers) could be useful. Although that's still not quite analogous, since IP and MIP aren't about distinguishing real from fake experts - just true from false claims.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-06T18:25:45.213Z · score: 2 (1 votes) · LW · GW

Yup, that's basically the idea.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-05T23:58:35.245Z · score: 4 (2 votes) · LW · GW

I do like that Rosetta Stone paper you linked, thanks for that. And I also recently finished going through a set of applied category theory lectures based on that book you linked. That's exactly the sort of thing which informs my intuitions about where the field is headed, although it's also exactly the sort of thing which informs my intuition that some key foundational pieces are still missing. Problem is, these "applications" are mostly of the form "look we can formalize X in the language of category theory"... followed by not actually doing much with it. At this point, it's not yet clear what things will be done with it, which in turn means that it's not yet clear we're using the right formulations. (And even just looking at applied category theory as it exists today, the definitions are definitely too unwieldy, and will drive away anyone not determined to use category theory for some reason.)

I'm the wrong person to write about the differences in how mathematicians and physicists approach group theory, but I'll give a few general impressions. Mathematicians in group theory tend to think of groups abstractly, often only up to isomorphism. Physicists tend to think of groups as matrix groups; the representation of group elements as matrices is central. Physicists have famously little patience for the very abstract formulation of group theory often used in math; thus the appeal of more concrete matrix groups. Mathematicians often use group theory just as a language for various things, without even using any particular result - e.g. many things are defined as quotient groups. Again, physicists have no patience for this. Physicists' use of group theory tends to involve more concrete objectives - e.g. evaluating integrals over Lie groups. Finally, physicists almost always ascribe some physical symmetry to a group; it's not just symbols.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-05T18:37:51.115Z · score: 4 (2 votes) · LW · GW

The main reason I like the paths formulation is that I can look around at the real world and immediately pick out systems which might make sense to model as categories. "Paths in graphs" is something I can recognize at a glance. Thinking about links on the internet? The morphisms are paths of links. Friend connections on facebook? The morphisms are friend-of-friend paths. Plane flights? The morphisms are travel plans. Etc.
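
As a toy illustration of that "morphisms are paths" picture (my own sketch, with made-up vertices): composing compatible paths is just concatenation, the identity morphism is the empty path, and associativity comes for free.

```python
# Toy "category of paths in a graph": objects are vertices, morphisms are
# paths (lists of edges), composition of compatible paths is concatenation.

def compose(path1, path2):
    if not path1:
        return list(path2)
    if not path2:
        return list(path1)
    assert path1[-1][1] == path2[0][0], "paths not composable"
    return path1 + path2

identity = []                                            # empty path at a vertex

friend_of_friend = compose([("A", "B")], [("B", "C")])   # A -> B -> C
travel_plan = compose(friend_of_friend, [("C", "D")])    # A -> B -> C -> D

assert compose(friend_of_friend, identity) == friend_of_friend
# Associativity is automatic, since concatenation doesn't care about grouping:
assert compose(compose([("A", "B")], [("B", "C")]), [("C", "D")]) == travel_plan
```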

I expect that the big applications of category theory will eventually come from something besides sets and functions, and I want to be able to recognize such applications at a glance when opportunity comes knocking.

The cost is needing to build some intuition for path equivalence. I'm still building that intuition, and that is indeed what tripped me up. It will come with practice.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-05T18:29:38.770Z · score: 5 (5 votes) · LW · GW
My second reaction, when reading the introduction and the "Path in Graphs" section, was to feel like every useful part of category theory had been thrown away.

I was expecting/hoping someone would say this. I'll take the opportunity to clarify my goals here.

I expect "applied category theory" is going to be a field in its own right in the not-so-distant future. When I say e.g. "broader adoption of category theory is limited in large part by bad definitions", that's what I'm talking about. I expect practical applications of category theory will mostly not resemble the set-centered usage of today, and I think that getting rid of the set-centered viewpoint is one of the main bottlenecks to moving forward.

(A good analogy here might be group-theory-as-practiced-by-mathematicians vs group-theory-as-practiced-by-physicists - a.k.a. representation theory.)

I generally agree that the usage of category theory today benefits from not thinking of things as graphs, from using set as a primary example, etc. But today, all uses of category theory come after the fact. I want that to change, I've seen enough to think that will change, and that's why I'm experimenting with a presentation which throws out most of the existing value.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-05T02:43:41.942Z · score: 4 (2 votes) · LW · GW

Good question, I spent a while chewing on that myself.

Vertical composition is relatively simple - we're looking for two natural transformations, where pattern-target of one is pattern-source of the other. Equivalently, we're pattern-matching on a pattern which has three copies of the original category stacked one-on-top-the-other, rather than just two copies.

Horizontal composition is trickier. We're taking the pattern for a natural transformation (i.e. two copies of the original category) and using that as the original category for another natural transformation. So we end up with a pattern containing four copies of the original category, connected in a square shape, with arrows going from one corner (the pattern-source for the composite transformation) to the opposite corner (the pattern-target for the composite transformation).

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-04T18:37:12.748Z · score: 2 (1 votes) · LW · GW

Sorry, there's a few too many things here which need names. The pattern (original category + copy + etc) is the query; the target is whatever category we like. We're looking for the original category and a model of it, all embedded within some other category. I should probably clarify that in the OP; part of what makes it interesting is that we're looking for a map-territory pair all embedded in one big system.

UPDATE: changed the source-target language to system-model for natural transformations.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-04T18:30:23.233Z · score: 9 (3 votes) · LW · GW

The problem is that these are all just definitions; they don't actually say anything about the things they're defining, and the extent to which real-world systems fit these definitions is not obvious.

  • Ok, we can define invertibility in a way that depends on the context, but what does that actually buy us? What's a practical application where we say "oh, this is a special kind of inverse" and thereby prove something we wouldn't have proved otherwise?
  • "The properties of an object are encoded in the maps we allow to/from the object" is not a fact about the world, it is a fact about what sort of things category theory talks about. One of my big open questions is whether "maps to/from the object" are actually sufficient to describe all the properties I care about in real-world systems, like a pot of water.
  • Universal constructions are cute, but I have yet to see a new insight from them - i.e. a fact about a particular system which wasn't already fairly obvious. Heck, I have yet to even see a general theorem about a universal construction other than "if it exists, then it's unique up to isomorphism".

These aren't rhetorical questions, btw - I'd really appreciate meaty examples, it would make learning this stuff a lot less frustrating.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-04T18:17:40.391Z · score: 4 (2 votes) · LW · GW

Ooh that's a good comment you linked. You mention relating various applications of the poisson equation via natural transformation; could you unpack a bit what that would look like? One of the things I've had trouble with is how to represent the sorts of real-world abstractions I want to think about (e.g. poisson equation) in the language of category theory; it's still unclear to me whether morphism relationships are enough to represent all the semantics. If you know how to do it with that example, it would be really helpful.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-03T22:59:52.642Z · score: 9 (5 votes) · LW · GW

Thanks for pointing out the identified edges thing, I hadn't noticed it before. I'll update the examples once I've updated my intuition.

Also I'm glad you like it! :)

UPDATE: fixed it.

Comment by johnswentworth on Category Theory Without The Baggage · 2020-02-03T22:45:48.747Z · score: 4 (2 votes) · LW · GW

Great question. I don't think I answer it outright, but the section on natural transformations should at least offer some intuition for the kind of questions which category theory looks at, but which graph theory doesn't really look at. That doesn't answer the question of what you'd gain, and frankly, I have yet to see a really compelling answer to that question myself - category theory has an awful lot of definitions, but doesn't seem to actually do much with them. But I'm still pretty new to this, so I'm holding out hope.

Comment by johnswentworth on [Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting · 2020-02-03T00:48:40.742Z · score: 4 (2 votes) · LW · GW

It is really slick; I was mostly confused because the text itself talked about using the interface to make predictions. The only interface-specific annoyance was that there didn't seem to be a way to close the prediction sidebar once it was open.

Comment by johnswentworth on [Link] Beyond the hill: thoughts on ontologies for thinking, essay-completeness and forecasting · 2020-02-02T22:59:42.317Z · score: 4 (2 votes) · LW · GW

I was very confused by the notebook interface at first. I think you need to log in for it to work?

Comment by johnswentworth on Instrumental Occam? · 2020-02-01T05:13:49.783Z · score: 3 (2 votes) · LW · GW

Intuition: "It seems like a good idea to keep one's rules/policies simple."

Me: "Ok, why? What are some prototypical examples where keeping one's rules/policies simple would be useful?"

Intuition: "Well, consider legal codes - a complex criminal code turns into de-facto police discretion. A complex civil code turns into patent trolls and ambulance chasers, people who specialize in understanding the complexity and leveraging it against people who don't. On personal level, in order for others to take your precommitments seriously, those precommitments need to be both easily communicated and clearly delineate lines which must not be crossed - otherwise people will plead ignorance or abuse grey areas, respectively. On an internal level, your rules necessarily adjust as the world throws new situations at you - new situations just aren't covered by old rules, unless we keep those old rules very simple."

Me: "Ok, trying to factor out the common denominator there... complexity implies grey areas. And when we try to make precommitments (i.e. follow general policies), grey areas can be abused by bad actors."

Intuition: <mulls for a minute> "Yeah, that sounds about right. Complexity is bad because grey areas are bad, and complexity creates grey areas."

Me: "The problem with grey areas is simple enough; Thomas Schelling already offers good models of that. But why does complexity necessarily imply grey areas in the policy? Is that an inherent feature of complexity in general, or is it specific to the kinds of complexity we're imagining?"

Intuition: "A complex computer program might not have any grey areas, but as a policy that would have other problems..."

Me: "Like what?"

Intuition: "Well, my knee-jerk is to say it won't actually be a policy we want, but when I actually picture it... it's more like, the ontology won't match the real world."

Me: "And a simple policy would match real-world ontology better than a complex one? That not what I usually hear people say..."

Intuition: "Ok, imagine I draw a big circle on the ground, and say 'stay out of this circle'. But some places there's patches of grass or a puddle or whatever, so the boundary isn't quite clear - grey areas. Now instead of a circle, I make it some complicated fractal shape. Then there will be more grey areas."

Me: "Why would there be more - oh wait, I see. Surface area. More surface area means more grey areas. That makes sense."

Intuition: "Right, exactly. Grey areas, in practice, occur in proportion to surface area. More complexity means more surface means more grey areas means more abuse of the rules."

Comment by johnswentworth on Coordination as a Scarce Resource · 2020-01-31T18:27:39.297Z · score: 2 (1 votes) · LW · GW

Interesting point. Rings true, though I'll have to mull it over a bit. Certainly important-if-true.

Comment by johnswentworth on Create a Full Alternative Stack · 2020-01-31T18:01:05.525Z · score: 33 (12 votes) · LW · GW

My interpretation of the previous several posts is: alignment of organizations is hard, and if you're even a little bit misaligned, the mazeys will exploit that misalignment to the hilt. Allow any divergence between measures of performance and actual performance, and a whole bureaucracy will soon arise, living off of that divergence and expanding it whenever possible.

My interpretation of this post is: let's solve it by making a fund which pays people to always be aligned! The only hard part is figuring out how to verify that they are, in fact, aligned.

... Which was the whole problem to begin with.

The underlying problem is that alignment is hard. If we had a better way to align organizations, then organizations which use that method would already be outperforming everyone else. The technique would already be used. Invent that technology (possibly a social technology), and it will spread. The mazes will fight it, and the mazes will die. But absent some sort of alignment technology, there is not much else which will help.

This is a problem which fundamentally cannot be fixed by throwing money at it. Setting up a fund to pay people for being aligned will result in people trying to look aligned. Without some way of making one's measure of alignment match actual alignment, this will not do any good at all.

Comment by johnswentworth on If brains are computers, what kind of computers are they? (Dennett transcript) · 2020-01-30T20:07:32.594Z · score: 15 (5 votes) · LW · GW

Alright, first I'll summarize what I think are Dennett's core points, then give some reactions. Dennett's core points:

  • There's a general idea that intelligent systems can be built out of non-intelligent parts, by non-intelligent processes. "Competence without comprehension" - evolution is a non-intelligent process which built intelligent systems, Turing machines are intelligent systems built from non-intelligent parts.
  • Among these reductively-intelligent systems, there's a divide between things-like-evolution and things-like-Turing-machines. One operates by heterogeneous, bottom-up competitive pressure, the other operates by top-down direction and standardized parts.
  • People who instinctively react against reductive ideas about human minds (the first bullet) are often really pointing to the distinction in the second bullet - i.e. human minds are more like evolution than like Turing machines.
  • In particular, the things-which-evolve in the human mind are memes.

The first point is basically pointing at embedded agency. The last point I'll ignore; there are ways that human minds could be more like evolution than like Turing machines even without dragging memes into the picture, and I find the more general idea more interesting.

So, summarizing Dennett very roughly... we have two models of reductive intelligence: things-like-evolution and things-like-Turing-machines. Human minds are more like evolution.

My reaction:

Evolution and Turing machines are both low-level primitive computational models which tend to instrumentally converge on much more similar high-level models.

In particular, I'll talk about modularity, although I don't think that's the only dimension along which we see convergence - just a particularly intuitive and dramatic dimension.

Biological evolution (and many things-like-evolution) has a lot more structure to it than it first seems - it's surprisingly modular. On the other side of the equation, Turing machines are a particularly terrible way to represent computation - there's no modularity unless the programmer adds it. That's why we usually work with more modular, higher-level abstractions - and those high-level abstractions much more closely resemble the modularity of evolution. In practice, both evolution and Turing machines end up leveraging modularity a lot, and modular-systems-built-on-evolution actually look pretty similar to modular-systems-built-on-Turing-machines.

In more detail...

Here's how I think most people picture evolution: there's a random small change, changes are kept if-and-only-if they help, so over time things tend to drift up their incentive gradient. It's a biased random walk. I've heard this model credited to Fisher (the now-infamous frequentist statistician), and it was how most biologists pictured evolution for a very long time.

But with the benefit of modern sequencing and genetic engineering, that picture looks a lot less accurate. A particularly dramatic example: a single mutation can turn a fly's antennae into legs. Nothing about that suggests small changes or drift along a gradient. Rather, it suggests a very modular architecture, where entire chunks can be copy-pasted into different configurations. If you take a class on evolutionary developmental biology (evo-devo), you'll see all sorts of stuff like this. (Also, see evolution of modularity for more on why modularity evolves in the first place.)

This also sounds a lot more like how humans "intelligently" design things. We build abstract, modular subsystems which can be composed in a variety of configurations. Large chunks can be moved around or reused without worrying about their internal structure. That's arguably the single most universal idea in software engineering: modularity is good. That's why we build more modular, higher-level abstractions on top of our low-level random-access Turing machines.

So even though evolution and Turing machines are very different low-level computational models, we end up with surprisingly similar systems built on top of them.

Comment by johnswentworth on Potential Ways to Fight Mazes · 2020-01-29T20:12:01.554Z · score: 13 (5 votes) · LW · GW
I'm currently interpreting this to mean "talking about this through an economic lens, and maybe an abstracted political lens, but don't bring up any current politicians or parties or whatnot."

The main way I personally think about how-to-achieve-political-goals while avoiding mindkill is to always structure the question as "what could a small well-funded team do?".

Usually, political discussions of policy say something like "if we passed a law saying X, then Y would happen". The problem with that formulation is the word "we" - it immediately and automatically makes this a group-identity thing. If only "we" all behaved like <ingroup>, "we" would pass law X, and everything would be better!

Thinking about what a small well-funded team can do forces several better habits:

  • it forces thinking about the underlying gears of the whole system, in order to achieve maximum leverage
  • it forces thinking about realistic, minimal (i.e. "keyhole") interventions
  • it mostly eliminates excuses to say "yay/boo <group>"; particular political groups just become gears in the model
Comment by johnswentworth on Algorithms vs Compute · 2020-01-28T19:32:34.436Z · score: 3 (2 votes) · LW · GW

The underlying question I want to answer is: ML performance is limited by both available algorithms and available compute. Both of those have (presumably) improved over time. Relatively speaking, how taut are those two constraints? Has progress come primarily from better algorithms, or from more/cheaper compute?

Comment by johnswentworth on Embedded Agency via Abstraction · 2020-01-28T17:32:21.013Z · score: 2 (1 votes) · LW · GW

Thanks for the pointer, sounds both relevant and useful. I'll definitely look into it.

Comment by johnswentworth on On hiding the source of knowledge · 2020-01-26T17:50:15.541Z · score: 19 (6 votes) · LW · GW

Lately I've been explicitly trying to trace the origins of the intuitions I use for various theoretical work, and writing up various key sources of background intuition. That was my main reason for writing a review of Design Principles of Biological Circuits, for instance. I do expect this will make it much easier to transfer my models to other people.

It sounds like many of the sources of your intuition are way more spiritual/political than most of mine, though. I have to admit I'd expect intuition-sources like mystic philosophy and conflict-y politics to systematically produce not-very-useful ideas, even in cases where the ideas are true. Specifically, I'd expect such intuition-sources to produce models without correct gears in them.

Comment by johnswentworth on Coordination as a Scarce Resource · 2020-01-26T17:17:06.257Z · score: 3 (2 votes) · LW · GW

Good points. Personal to Prison Gangs makes a similar point about regulation, along with several other phenomena - litigation, credentialism, tribalism, etc. Based on the model in the OP, all of these are increasing over time because they solve coordination problems more scalably than old systems (e.g. personal reputation).

With regards to coordination vs object-level skills, I think a decent approximation is that object-level skills usually need to be satisficed - one needs to produce a good-enough product/service. After that, it's mainly about coordination: finding the people who already need the good-enough product you have. To put it differently, decreasing marginal returns usually seem to kick in much earlier in object-level investments than in coordination investments.

Comment by johnswentworth on Material Goods as an Abundant Resource · 2020-01-26T07:03:39.099Z · score: 2 (1 votes) · LW · GW

I think the usual formulation of homo economicus would agree with you on that one, actually.

Comment by johnswentworth on Constraints & Slackness as a Worldview Generator · 2020-01-26T06:08:10.119Z · score: 5 (3 votes) · LW · GW

Great question! The short answer, in the context of the China example, is that the capital bottleneck is the first gear in the model. Whether banking/lending would relax the constraint depends on the next gear up the chain - i.e. it depends why capital was scarce in the first place.

Here are a few possibilities:

  • malthusian poverty trap: all excess resources go to expanding the population, so there is little-to-no surplus to invest in capital.
  • institutions: weak property rights or poor contract enforcement mechanisms, making it difficult to invest.
  • coordination problem: there's plenty of people with surplus to invest, and plenty of people with profitable ways to invest it, but the coordination problem between them hasn't been solved.

Introducing banking/lending would potentially solve the last one, but not the first two. In the constraint language, banking technology introduces new constraints: it requires contract enforcement, and it requires people with excess resources to invest (among other things). Those new constraints need to be less taut than the old capital constraint in order for the technology to be adopted.

In the case of China, banking/lending technology was almost certainly available - it simply wasn't used to the same extent as in Europe. I have heard both the malthusian trap and the institutions explanations given as possible reasons, but I haven't personally studied it enough to know what was most relevant.

Comment by johnswentworth on (A -> B) -> A in Causal DAGs · 2020-01-24T20:10:49.977Z · score: 4 (2 votes) · LW · GW

Again, decision theory/game theory are not about "executing a knowable strategy" or "behavior selection according to legible reasoning". They're about what goal-directed behavior means, especially under partial information and in the presence of other goal-directed systems. The theory of decisions/games is the theory of how to achieve goals. Whether a legible strategy achieves a goal is mostly incidental to decision/game theory - there are some games where legibility/illegibility could convey an advantage, but that's not really something that most game theorists study.

Comment by johnswentworth on (A -> B) -> A in Causal DAGs · 2020-01-24T19:14:09.342Z · score: 7 (4 votes) · LW · GW

Very interesting, thank you for the link!

Main difference between what they're doing and what I'm doing: they're using explicit utility & maximization nodes; I'm not. It may be that this doesn't actually matter. The representation I'm using certainly allows for utility maximization - a node downstream of a cloud can just be a maximizer for some utility on the nodes of the cloud-model. The converse question is less obvious: can any node downstream of a cloud be represented by a utility maximizer (with a very artificial "utility")? I'll probably play around with that a bit; if it works, I'd be able to re-use the equivalence results in that paper. If it doesn't work, then that would demonstrate a clear qualitative difference between "goal-directed" behavior and arbitrary behavior in these sorts of systems, which would in turn be useful for alignment - it would show a broad class of problems where utility functions do constrain.
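
For reference, here's a minimal sketch of the "artificial utility" direction (my own toy illustration, which ignores the cloud structure that makes the real question non-trivial): any fixed node function can be recast as the argmax of a utility that simply rewards matching that function.

```python
# Any deterministic node function f(inputs) -> action can be rewritten as
# utility maximization with an "artificial" utility that rewards matching f.
# Whether this kind of construction survives the cloud/abstraction structure
# is the open question above; this sketch ignores that structure entirely.

def as_utility_maximizer(f, action_space):
    def utility(action, inputs):
        return 1.0 if action == f(inputs) else 0.0

    def node(inputs):
        return max(action_space, key=lambda a: utility(a, inputs))

    return utility, node

# Example: a node that outputs the parity of its two parent nodes.
f = lambda inputs: (inputs[0] + inputs[1]) % 2
utility, node = as_utility_maximizer(f, action_space=[0, 1])
assert node((1, 1)) == f((1, 1)) == 0
```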

Comment by johnswentworth on Theory of Causal Models with Dynamic Structure? · 2020-01-23T21:59:05.538Z · score: 2 (1 votes) · LW · GW

Indeed, that's exactly why I'm looking for it.

Comment by johnswentworth on (A -> B) -> A in Causal DAGs · 2020-01-23T17:32:30.656Z · score: 3 (2 votes) · LW · GW

On reflection, there's a better answer to this than I originally gave, so I'm trying again.

"What the agent believes the model to be" is whatever's inside the cloud in the high-level model. That's precisely what the clouds mean. But the clouds (and their contents) only exist in the high-level model; the low-level model contains no clouds. The "actual model" is the low-level model.

So, when we talk about the extent to which the high-level and low-level models match - i.e. what queries on the low-level model can be answered by queries on the high-level model - we're implicitly talking about the extent to which the agent's model matches the low-level model.

The high-level model (at least the part of it within the cloud) is "what the agent believes the model to be".

Comment by johnswentworth on (A -> B) -> A in Causal DAGs · 2020-01-23T17:21:41.086Z · score: 2 (1 votes) · LW · GW

First of all, these are definitely not opposites, and game-theoretic agency is about much more than just "executing a knowable strategy". The basic point of embedded agency and the like is that, because of reflectivity etc, idealized game-theoretic agent behavior can only exist (or even be approximated) in the real world at an abstract level which throws out some information about the underlying territory. Game theoretic agency is still the original goal of the exercise; reflectivity and whatnot enter the picture because they're a constraint, not because they're part of what we mean by "agentiness".

In terms of human rationality - of the sort in the sequences - the recurring theme is that we want to approximate idealized game-theoretic agency as best we can, despite complicated models, reflectivity, etc. Again, game-theoretic agency is the original goal; approximations enter the picture because complexity is a constraint. Nothing about that is contradictory.

Tying it back to the OP: we have a low-level model which may be too complex for "the agent" to represent/reason about directly. We abstract that into a high-level model. The agent is then an idealized game-theoretic agent within the high-level model, but the high-level model itself is lossy. The agent's own model coincides with the high-level model - that's the meaning of the clouds. But that still leaves the question of whether and to what extent the high-level model accurately reflects the low-level model - that's the abstraction part.

Comment by johnswentworth on (A -> B) -> A in Causal DAGs · 2020-01-22T23:24:26.363Z · score: -1 (2 votes) · LW · GW

EDIT: This answer isn't very good, see my other one.

Good question. We could easily draw a diagram in which the two are separate - we'd have the "agent" node reading from one cloud and then influencing things outside of that cloud. But that case isn't very interesting - most of what we call "agenty" behavior, and especially the diagonalization issues, are about the case where the actual model and the agent's beliefs coincide. In particular, if we're talking about ideal game-theoretic agents, we usually assume that both the rules of the game and each agent's strategy are common knowledge - including off-equilibrium behavior.

So, for idealized game-theoretic agents, there is no separation between the actual model and the agent's model - interventions on the actual model are reflected in the agent's model.

That said, in the low-level model, the map and the territory will presumably always be separate. "When do they coincide?" is implicitly wrapped up in the question "when do non-agenty models abstract into agenty models?". I view the potential mismatch between the two models as an abstraction failure - if they don't match, then the agency-abstraction is broken.

Comment by johnswentworth on Definitions of Causal Abstraction: Reviewing Beckers & Halpern · 2020-01-22T04:00:07.543Z · score: 3 (2 votes) · LW · GW

Turns out the particles -> fluid example doesn't work; it's not a τ-abstraction (which makes me think the range of applicability of τ-abstraction is considerably narrower than I first thought).

That said, here's a counterexample which I think works. Variables of the low-level model:

  • variables X, which follow an arbitrary structural model
  • a variable σ, which is a random permutation
  • variables Y, with each Y_i given by a fixed function of X_σ(i) and a noise term U_i

... where the U are iid noise terms. So we have some arbitrary structural model, we scramble the variables, and then we compute a function of each. For the high-level model:

  • variables X, which follow the same model as in the low-level model
  • variables Y, with each Y_i given by the same function of X_i (no scrambling)

... so it's the same as the low-level model, but with the variables unscrambled. The mapping between the two is what you'd expect: X maps across directly, and σ is used to unscramble Y (high-level Y_i is low-level Y_σ⁻¹(i)). The interventions map across similarly simply.

Note that we can pick anything we please for the last intervention, but we do need to pick one - we can't just leave it alone.

I'm pretty sure this checks all the boxes for strong τ-abstraction. But it isn't a constructive τ-abstraction, since all of the high-level Y's depend on the same low-level variable σ. In principle, there could still be some other τ which makes the high-level model a constructive abstraction (B&H's definition only requires that some τ exist between the two models), but I doubt it.
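
In case a concrete version helps with hole-spotting, here's one way to instantiate the construction above in code (the particular structural model and the function applied to the scrambled variables are stand-ins I made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

def low_level_sample():
    # Low-level model: X follows some structural model (a simple chain here,
    # as a stand-in for "arbitrary"), sigma is a random permutation, and each
    # Y_i is a function of X_sigma(i) plus iid noise.
    X = np.cumsum(rng.normal(size=n))
    sigma = rng.permutation(n)
    U = rng.normal(scale=0.01, size=n)
    Y = np.tanh(X[sigma]) + U
    return X, sigma, Y

def tau(X, sigma, Y):
    # High-level state: X maps across directly; sigma is used to unscramble Y.
    # Every high-level Y depends on the single low-level variable sigma, which
    # is what blocks a *constructive* abstraction.
    Y_unscrambled = np.empty(n)
    Y_unscrambled[sigma] = Y          # high-level Y_i = f(X_i) + noise
    return X, Y_unscrambled

X, sigma, Y = low_level_sample()
X_hl, Y_hl = tau(X, sigma, Y)
assert np.allclose(Y_hl, np.tanh(X), atol=0.1)   # matches the unscrambled model up to noise
```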

Let me know if you guys spot a hole in this setup, or see an elegant way to confirm that there isn't some other τ that magically makes it constructive.

Comment by johnswentworth on Embedded Agency via Abstraction · 2020-01-20T01:55:57.830Z · score: 4 (2 votes) · LW · GW

LGTM

Comment by johnswentworth on What is Life in an Immoral Maze? · 2020-01-10T02:42:09.662Z · score: 13 (4 votes) · LW · GW
But the distinction that feels important is something like: "if a system manipulates you in such a way that, initially, you thought you were getting a good deal, but upon reflection you got a bad deal and now it's hard to change your mind about that deal"

I totally agree. That's pointing to something very interesting. It has nothing whatsoever to do with competition, and I think trying to frame this whole thing in terms of competition and barriers to exit is making a complete mess of a potentially interesting idea.

Comment by johnswentworth on What is Life in an Immoral Maze? · 2020-01-10T00:31:22.017Z · score: 4 (2 votes) · LW · GW

Remember, the original issue here is superperfect competition. In order for the model to work, there has to be something forcing people to stay in the game when they would prefer to be out. E.g. I'm in the kitchen, I would prefer to be in the dining room, but something prevents me from leaving the kitchen. "All my relationships are in the kitchen" is not usually something which prevents me from leaving when I would already prefer to be out; it's something which makes me not want to be out of the kitchen in the first place. Even if I were already in the dining room, I'd want to go back to the kitchen if all my relationships were there.

That's the central confusion I see coming up here repeatedly: most of the reasons people are talking about for not leaving middle management are reasons to not want to be out; they are not reasons to stay in when someone does want to be out.

Even the typical usage of "invested in a job" suggests a reason that someone would not want to be out of the job, as opposed to forcing them to stay when they do want to be out.

That's the problem here: if people stick in middle management because they do not want to be out, then we do not have superperfect competition. Rather, we have ordinary competition over a not-very-legible-but-entirely-legitimate kind of value.

EDIT: an analogy. We have a mix of hydrogen and oxygen gasses in a container. If they react to form water, they will be at lower energy - they "prefer" to be in that lower-energy state. But the two can't react without a spark - there's an energy barrier (exactly analogous to an exit barrier), and the barrier prevents the system from moving to a preferred state. The key distinction is between the relative energy of the two states, vs the height of the barrier.

Comment by johnswentworth on What is Life in an Immoral Maze? · 2020-01-09T22:07:26.827Z · score: 2 (1 votes) · LW · GW

True, but that's a sunk cost, not a barrier to exit.