We need to revisit AI rewriting its source code 2019-12-27T18:27:55.315Z · score: 10 (7 votes)
Units of Action 2019-11-07T17:47:13.141Z · score: 7 (1 votes)
Natural laws should be explicit constraints on strategy space 2019-08-13T20:22:47.933Z · score: 10 (3 votes)
Offering public comment in the Federal rulemaking process 2019-07-15T20:31:39.182Z · score: 19 (4 votes)
Outline of NIST draft plan for AI standards 2019-07-09T17:30:45.721Z · score: 19 (5 votes)
NIST: draft plan for AI standards development 2019-07-08T14:13:09.314Z · score: 17 (5 votes)
Open Thread July 2019 2019-07-03T15:07:40.991Z · score: 15 (4 votes)
Systems Engineering Advancement Research Initiative 2019-06-28T17:57:54.606Z · score: 23 (7 votes)
Financial engineering for funding drug research 2019-05-10T18:46:03.029Z · score: 11 (5 votes)
Open Thread May 2019 2019-05-01T15:43:23.982Z · score: 11 (4 votes)
StrongerByScience: a rational strength training website 2019-04-17T18:12:47.481Z · score: 15 (7 votes)
Machine Pastoralism 2019-04-03T16:04:02.450Z · score: 12 (7 votes)
Open Thread March 2019 2019-03-07T18:26:02.976Z · score: 10 (4 votes)
Open Thread February 2019 2019-02-07T18:00:45.772Z · score: 20 (7 votes)
Towards equilibria-breaking methods 2019-01-29T16:19:57.564Z · score: 23 (7 votes)
How could shares in a megaproject return value to shareholders? 2019-01-18T18:36:34.916Z · score: 18 (4 votes)
Buy shares in a megaproject 2019-01-16T16:18:50.177Z · score: 15 (6 votes)
Megaproject management 2019-01-11T17:08:37.308Z · score: 57 (21 votes)
Towards no-math, graphical instructions for prediction markets 2019-01-04T16:39:58.479Z · score: 30 (13 votes)
Strategy is the Deconfusion of Action 2019-01-02T20:56:28.124Z · score: 75 (24 votes)
Systems Engineering and the META Program 2018-12-20T20:19:25.819Z · score: 31 (11 votes)
Is cognitive load a factor in community decline? 2018-12-07T15:45:20.605Z · score: 20 (7 votes)
Genetically Modified Humans Born (Allegedly) 2018-11-28T16:14:05.477Z · score: 30 (9 votes)
Real-time hiring with prediction markets 2018-11-09T22:10:18.576Z · score: 19 (5 votes)
Update the best textbooks on every subject list 2018-11-08T20:54:35.300Z · score: 79 (29 votes)
An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics 2018-10-30T18:36:14.159Z · score: 31 (7 votes)
Why don’t we treat geniuses like professional athletes? 2018-10-11T15:37:33.688Z · score: 27 (16 votes)
Thinkerly: Grammarly for writing good thoughts 2018-10-11T14:57:04.571Z · score: 6 (6 votes)
Simple Metaphor About Compressed Sensing 2018-07-17T15:47:17.909Z · score: 8 (7 votes)
Book Review: Why Honor Matters 2018-06-25T20:53:48.671Z · score: 31 (13 votes)
Does anyone use advanced media projects? 2018-06-20T23:33:45.405Z · score: 45 (14 votes)
An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes 2018-04-19T17:30:39.893Z · score: 38 (9 votes)
Death in Groups II 2018-04-13T18:12:30.427Z · score: 32 (7 votes)
Death in Groups 2018-04-05T00:45:24.990Z · score: 48 (19 votes)
Ancient Social Patterns: Comitatus 2018-03-05T18:28:35.765Z · score: 20 (7 votes)
Book Review - Probability and Finance: It's Only a Game! 2018-01-23T18:52:23.602Z · score: 25 (10 votes)
Conversational Presentation of Why Automation is Different This Time 2018-01-17T22:11:32.083Z · score: 70 (29 votes)
Arbitrary Math Questions 2017-11-21T01:18:47.430Z · score: 8 (4 votes)
Set, Game, Match 2017-11-09T23:06:53.672Z · score: 5 (2 votes)
Reading Papers in Undergrad 2017-11-09T19:24:13.044Z · score: 42 (14 votes)


Comment by ryan_b on Toward a New Technical Explanation of Technical Explanation · 2020-01-17T19:35:11.970Z · score: 9 (4 votes) · LW · GW

I do not understand Logical Induction, and I especially don't understand the relationship between it and updating on evidence. I feel like I keep viewing Bayes as a procedure separate from the agent, and then trying to slide LI into that same slot, and it fails because at least LI and probably Bayes are wrongly viewed that way.

But this post is what I leaned on to shift from an utter-darkness understanding of LI to a heavy-fog one, and re-reading it has been very useful in that regard. Since I am otherwise not a person who would be expected to understand it, I think this speaks very well of the post in general and of its importance to the conversation surrounding LI.

This also is a good example of the norm of multiple levels of explanation: in my lay opinion a good intellectual pipeline needs explanation stretching from intuition through formalism, and this is such a post on one of the most important developments here.

Comment by ryan_b on A voting theory primer for rationalists · 2020-01-16T23:02:48.959Z · score: 4 (2 votes) · LW · GW

Congratulations on finishing your doctorate! I'm very much looking forward to the next post in the sequence on multi-winner methods, and I'm especially interested in the metric you mention.

Comment by ryan_b on A voting theory primer for rationalists · 2020-01-16T22:59:24.442Z · score: 16 (4 votes) · LW · GW

I think this post should be included in the best posts of 2018 collection. It does an excellent job of balancing several desirable qualities: it is very well written, being both clear and entertaining; it is informative and thorough; and it is in the style of argument preferred on LessWrong, by which I mean it makes use of both theory and intuition in the explanation.

This post adds to the greater conversation by displaying rationality of the kind we are pursuing directed at a big societal problem. A specific example of what I mean, one that distinguishes this post from an overview that any motivated poster might write, is the inclusion of Warren Smith's results; Smith is a mathematician from an unrelated field with no published work on the subject. But he did the work anyway, and it was good work which the author himself expanded on, and now we get to benefit from it through this post. This puts me very much in mind of the fact that this community was primarily founded by an autodidact who was deeply influenced by a physicist writing about probability theory.

A word on one of our sacred taboos: in the beginning it was written that Politics is the Mindkiller, and so it was for years and years. I expect this is our most consistently and universally enforced taboo. Yet here we have a high-quality and very well received post about politics, and of the ~70 comments only one appears to have been mindkilled. This post has great value on the strength of being an example of how to address troubling territory successfully. I expect most readers didn't even consider that this was political territory.

Even though it is a theory primer, it manages to be practical and actionable. Observe how the very method of scoring posts for the review, quadratic voting, is one that is discussed in the post. Practical implications for the management of the community weigh heavily in my consideration of what should be considered important conversation within the community.

Carrying on from that point into its inverse, I note that this post introduced the topic to the community (though there are scattered older references to some of the things it contains in comments). Further, as far as I can tell the author wasn't a longtime community member before this post and the sequence that followed it. The reason this matters is that LessWrong can now attract and give traction to experts in fields outside of its original core areas of interest. This is not a signal of the quality of the post so much as the post being a signal about LessWrong, so there is a definite sense in which this weighs against its inclusion: the post showed up fully formed rather than being the output of our intellectual pipeline.

I would have liked to see (probably against the preferences of most of the community and certainly against the signals the author would have received as a lurker) the areas where advocacy is happening broken out as a specific section. I found them anyway, because they were contained in the disclosures, threaded through the discussion, and reachable by clicking the links, but I suspect that many readers would have missed them. This is especially true for readers less politically interested than I, which is most of them. The obvious reason is for interested people to be able to find it more easily, which matters a lot to problems like this one. The meta-reason is that posts that tread dangerous ground might benefit from directing people somewhere else for advocacy specifically, kind of like a communication-pressure release valve. It speaks to the quality of the post that this wasn't even an issue here, but for future posts on similar topics in a growing LessWrong I expect it to be.

Lastly I want to observe the follow-up posts in the sequence are also good, suggesting that this post was fertile ground for more discussion. In terms of additional follow-up: I would like to see this theory deployed at the level of intuition building, in a way similar to how we use markets, Prisoner's Dilemmas, and more recently considered Stag Hunts. I feel like it would be a good, human-achievable counterweight to things like utility functions and value handshakes in our conversation, and make our discussions more actionable thereby.

Comment by ryan_b on Open & Welcome Thread - January 2020 · 2020-01-16T18:56:17.202Z · score: 2 (1 votes) · LW · GW

Reflecting on making morally good choices vs. morally bad ones, I noticed the thing I lean on the most is not evaluating the bad ones. This effectively means good choices pay up front in computational savings.

I'm not sure whether this counts as dark arts-ing myself; on the one hand it is clearly a case of motivated stopping. On the other hand I have a solid prior that there are many more wrong choices than right ones, which implies evaluating them fairly would be stupidly expensive; that in turn implies the don't-compute-evil rule is pretty efficient even if it were arbitrarily chosen.

Comment by ryan_b on Is backwards causation necessarily absurd? · 2020-01-14T22:15:50.410Z · score: 4 (3 votes) · LW · GW

I feel that questions like this have a hard time escaping confusion because the notion of linear time is so deeply associated with causality already.

Could you point me to the arguments about a high-entropy universe being expected to decrease in entropy?

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-09T18:23:31.658Z · score: 2 (1 votes) · LW · GW

I think I agree with your intuition, though I submit that size is really only a proxy here for levels of hierarchy. We expect more levels in a bigger organization, is all. I think this gets at the mechanisms for why the kinds of behaviors in Moral Mazes might appear. I have seen several of the Moral Mazes behaviors play out in the Army, which is one of the largest and most hierarchical organizations in existence.

I don't see why being consumed by your job would predict any of the rest of it; programmers, lawyers, and salesmen are notorious for spending all of their time on work, and those aren't management positions. Rather, I expect that all these behaviors exist on continua, and we should see more or less of them depending on how strongly people are responding to the incentives.

My intuition is that the results problem largely drives the description to which you are responding. Front line people and front line managers usually have something tangible by which to be measured, but once people enter the middle zone of not being directly connected to the top line or the bottom line results, there's nothing left but signalling. So even a 9-5 guy who goes fishing is still likely to play politics, avoid rocking the boat, pass the blame downhill, and think that outcomes are determined by outside forces.

I would be shocked to my core if Moral Mazes behaviors rarely appeared under such conditions.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-08T15:29:48.984Z · score: 2 (1 votes) · LW · GW

One of the largest in the country. The core organization is less than a thousand, but they have state affiliate organizations and as of recently international ones as well.

It is exceedingly top-heavy; I want to say it was approaching 5% executives, not counting their immediate staff.

The organization is functionally in free-fall now; they are hemorrhaging people and money. I expect that if it were a for-profit, this is the part where it would go bankrupt. The transition from well-functioning to free-fall took ~5 years.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-07T22:11:44.887Z · score: 4 (2 votes) · LW · GW

When considering a barrier to exit, do they usually include the cost to go somewhere else? Quitting is free and easy, but getting another job elsewhere isn't, especially when considering opportunity costs.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-07T22:07:24.824Z · score: 4 (2 votes) · LW · GW

By contrast this does match my wife's experiences as a senior manager in a large non-profit. There were repeated and consistent messages about being expected to respond to emails and calls at all hours as you moved up the hierarchy; the performance metrics were fixed so everyone fit within a narrower band; and actual outcomes of programs did not matter, while suggesting that they did was punished (culminating in one fascinating episode where a VP seems to have made up an entire program which delivered 0.001 of projected revenue, resulting in a revenue shortfall of some 25% for the whole organization, and who was not fired).

Comment by ryan_b on We need to revisit AI rewriting its source code · 2019-12-30T15:22:42.654Z · score: 3 (2 votes) · LW · GW

Self modifying code has been possible but not practical for as long as we have had digital computers. Now it has toolchains, use cases, and in the near future tens to hundreds of people will do it as their day job.

The strong version of my claim is that I expect to see the same kinds of failure modes we are concerned with in AGI pushed down to the level of consumer-grade software, at least in huge applications like social networks and self-driving cars.

I think it is now simple and cheap enough for a single research group to do something like:

  • Write a formal specification
  • Which employs learning for some simple purpose
  • And employs self-modification on one or more levels

Which is to say, it feels like we have enough tooling to start doing "Hello World" grade self-modification tests that account for every level of the stack, in real systems.
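As a toy illustration of the smallest version of that loop (entirely hypothetical — the source template, the gain parameter, and the update rule are all made up for this sketch, not drawn from any real toolchain):

```python
# A "Hello World" grade self-modification loop: the program holds the source
# of one of its own functions, "learns" by nudging a parameter, and rebuilds
# the function from the modified source each iteration.
SOURCE_TEMPLATE = """
def step(x):
    # gain is the self-modified parameter
    return x * {gain}
"""

def build_step(gain):
    # Compile a fresh version of `step` from its own source template.
    namespace = {}
    exec(SOURCE_TEMPLATE.format(gain=gain), namespace)
    return namespace["step"]

def self_modify(current_gain, target, probe=1.0):
    # "Learn": nudge the gain toward a target output, then rewrite the code.
    step = build_step(current_gain)
    error = target - step(probe)
    new_gain = current_gain + 0.5 * error
    return new_gain, build_step(new_gain)

gain = 1.0
step = build_step(gain)
for _ in range(20):
    gain, step = self_modify(gain, target=3.0)
print(round(step(1.0), 3))  # converges toward the target, 3.0
```

A real test along the lines suggested above would replace the template with a formally specified component and check the specification still holds after each rewrite; this sketch only shows the modify-recompile-replace cycle itself.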

Comment by ryan_b on Technical AGI safety research outside AI · 2019-12-26T13:25:11.345Z · score: 4 (2 votes) · LW · GW

I think systems engineering is a candidate for this, at least as far as the safety and meta sections go.

There is a program at MIT for expanding systems engineering to account for post-design variations in the environment, including specific reasoning about a broader notion of safety:

Systems Engineering Advancement Research Initiative

There was also a DARPA program for speeding up the delivery of new military vehicles, which seems to have the most direct applications to CAIS:

Systems Engineering and the META Program

Among other things, systems engineering has the virtue of making hardware an explicit feature of the model.

Comment by ryan_b on Propagating Facts into Aesthetics · 2019-12-19T16:15:50.166Z · score: 6 (3 votes) · LW · GW

1. I strongly endorse this line of thinking, and I want to see it continue to develop. I have a very strong expectation that we will see benefits really accrue from the rationality project when we have finally hit on everything important to humans. Specifically, taking the first step in each of probability|purpose|community|aesthetics|etc will be much more impactful than puissant mastery of only probability.


I am in fact confused by this. My answer is "yes", and I don't know why. Deserts don't have much in the way of resources. Their stark beauty is more like the way a statue is beautiful than the way a forest is beautiful.

I think the key word here is "stark." The desert environment is elegant, because it has fewer things in it. We can see clearly the carving of the wind into the dunes, the sudden contrast where sand abuts stone, the endless gleaming of the salt. Consider for a moment the difference between looking at the forest and looking at the trees: when I zoom out to the forest level I notice the lay of the hills beneath the trees and the gradual change from one kind of tree to another, and spot the gaps where rivers run or the dirt thins. Deserts smack you in the face with the forest-level view, because there isn't another one available.

3. I like the extension to disgust. My experience was also with deserts, but in this case my impression was that deserts were clean. I found myself out in the dunes of Kuwait, where there was an abundance of flies. I figured they would go for our water, or perhaps our protein bars. Then I saw they happily landed anywhere on the sand, and I thought: wait, what do flies eat?

So now I think of deserts as beautiful and filthy.

Comment by ryan_b on Is Causality in the Map or the Territory? · 2019-12-18T19:07:04.650Z · score: 4 (2 votes) · LW · GW

I have a hard time thinking of that example as a different causal structure. Rather I think of it as keeping the same causal structure, but abstracting most of it away until we reach the level of the knob; then we make the knob concrete. This creates an affordance.

Of course when I am in my house I am approaching it from the knob-end, so mostly I just assume some layers of hidden detail behind it.

Another way to say this is that I tend to view it as compressing causal structure.

Comment by ryan_b on Is Causality in the Map or the Territory? · 2019-12-18T17:48:36.294Z · score: 4 (2 votes) · LW · GW

This point might be useless, but it feels like we are substituting sub-maps for the territory here. This example looks to me like:

Circuits -> Map

Physics -> Sub-map

Reality -> Territory

I intuitively feel like a causal signature should show up in the sub-map of whichever level you are currently examining. I am tempted to go as far as saying the degree to which the sub-map allows causal inference is effectively a measure of how close the layers are on the ladder of abstraction. In my head this sounds something like "perfect causal inference implies the minimum coherent abstraction distance."

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-18T15:37:04.462Z · score: 2 (1 votes) · LW · GW

That's the one! My thanks, I was on the verge of madness!

Comment by ryan_b on Approval Extraction Advertised as Production · 2019-12-16T21:06:03.100Z · score: 6 (4 votes) · LW · GW

This part is a little baffling to me:

For better or worse that's never going to be more than a thought experiment. We could never stand it. How about that for counterintuitive? I can lay out what I know to be the right thing to do, and still not do it. I can make up all sorts of plausible justifications. It would hurt YC's brand (at least among the innumerate) if we invested in huge numbers of risky startups that flamed out. It might dilute the value of the alumni network. Perhaps most convincingly, it would be demoralizing for us to be up to our chins in failure all the time. But I know the real reason we're so conservative is that we just haven't assimilated the fact of 1000x variation in returns.
We'll probably never be able to bring ourselves to take risks proportionate to the returns in this business.

So I get why Y Combinator can't do this, but the "we" seems more inclusive here than just the YC team. I think this because in most other instances of not knowing how or being unable to do something, he takes the trouble to suggest a way someone else might be able to.

If people are prepared to invest a lot of money in high frequency trading algorithms which are famously opaque to the people providing the money, or into hedge funds which systematically lose to the market, why wouldn't someone be willing to invest in an even larger number of startups than Y Combinator?

If we follow the logic of dumping arbitrary tests, it feels like it might be as direct as configuring a few reasoned rules, using a standardized equity offer with standardized paperwork, and then just slowly tweaking the reasoned rules as the batch outcomes roll in.

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-11T15:53:31.287Z · score: 4 (2 votes) · LW · GW

The commenting guidelines allow users to set their own norms of communication for their own posts. This lets us experiment with different norms to see which work better, and also allows the LessWrong community to diversify into different subcommunities should there be interest. It says habryka's guidelines because he is the one who posted this thread; if you go back through the other open threads, you will see that other people posted many of them, with different commenting guidelines here and there. I think the posts that speak to this the most are:

[Meta] New moderation tools and moderation guidelines (by habryka)

Meta-tations on Moderation: Towards Public Archipelago (by Raemon)

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-09T22:29:45.882Z · score: 4 (2 votes) · LW · GW

There's a post somewhere in the rationalsphere that I can't relocate for the life of me. Can anybody help?

The point was communication. The example given was the difference between a lecture and a sermon. The distinction the author made was something like a professor talking to students in class, each of whom then goes home and does homework by themselves, versus a preacher who gives his sermon to the congregation, with the expectation that they will break off into groups and discuss the sermon among themselves.

I have a vague memory that there were graphics involved.

I have tried local search on LessWrong, site search of LessWrong, and browsing a few post histories that seemed like they might be the author based on a vague sense of aesthetic similarity. I was sure it was here, but now I fear it may have been elsewhere or it is hidden in some other kind of post.

Comment by ryan_b on The Lesson To Unlearn · 2019-12-08T20:00:35.042Z · score: 6 (3 votes) · LW · GW

I really liked this essay.

And as hacking bad tests shrinks in importance, education will evolve to stop training us to do it.

This, however, is entirely excessive optimism.

Comment by ryan_b on Conscious Proprioception -Awareness of the Body's Position, Motion, Alignment & Balance. · 2019-12-07T16:43:57.068Z · score: 2 (1 votes) · LW · GW

I get all the normal pain/temperature/pressure/friction feedback just fine. It is only the problem of knowing where they are in space without looking at them.

Comment by ryan_b on What are some non-purely-sampling ways to do deep RL? · 2019-12-06T16:54:29.260Z · score: 4 (2 votes) · LW · GW

I don't know what the procedure for this is, but it occurs to me that if we can specify information about an environment via differential equations inside the neural network, then we can also compare this network's output to one that doesn't have the same information.

In the name of learning more about how to interpret the models, we could try something like:

1) Construct an artificial environment which we can completely specify via a set of differential equations.

2) Run a neural network to learn that environment with every combination of those differential equations.

3) Compare all of these to several control cases of not providing any differential equations.

It seems like how the control case differs from each of the cases-with-structural-information should give us some information about how the network learns the environmental structure.
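The enumeration in steps 1-3 can be sketched as a harness (hypothetical throughout: the equations are just labels, and the stand-in `train` function fakes the error a real network-plus-solver pipeline would report):

```python
# Experiment grid: train once for every subset of the known differential
# equations, with the empty subset serving as the no-structure control case.
from itertools import combinations

EQUATIONS = ["dx/dt = -k*x", "dy/dt = k*x - y", "dz/dt = y**2"]

def train(known_equations):
    # Stand-in for actually training a network on the environment: pretend
    # each supplied equation removes some error the model would otherwise
    # have to learn from samples alone.
    base_error = 1.0
    return base_error * (0.5 ** len(known_equations))

def run_experiment():
    results = {}
    for r in range(len(EQUATIONS) + 1):  # r = 0 is the control case
        for subset in combinations(EQUATIONS, r):
            results[subset] = train(subset)
    return results

results = run_experiment()
control = results[()]
# How much each structure-informed case improves on the control:
gains = {subset: control - err for subset, err in results.items() if subset}
```

Comparing `gains` across subsets is the interpretability probe suggested above: which pieces of environmental structure buy the most, and whether they compose.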

Comment by ryan_b on Good Posture: Self-Assessment & Your Base-Line for Alignment. · 2019-12-06T16:09:58.381Z · score: 5 (3 votes) · LW · GW

I can vouch for sudden and significant gains in comfort and functionality by focusing on improving your posture. The method I used was less thorough than here - instead I just used an exercise band and a few stretching exercises to improve the shoulder position. This provided improved comfort immediately, and significant reduction in the fragility of my back in a matter of days.

Comment by ryan_b on Conscious Proprioception -Awareness of the Body's Position, Motion, Alignment & Balance. · 2019-12-06T16:05:56.081Z · score: 5 (2 votes) · LW · GW

I just discovered this sequence, and I am pleased and impressed. The subject of this post is something I have been trying to learn more about recently, because I have a problem in the area.

Specifically, I never know where my feet are positioned. I can infer it, and I can confirm it, but I simply don't feel the position of my feet in relation to the rest of my body. Even when I am trying to focus on it.

By contrast, I do feel where my calves are in space. Most of the time when I need to place my feet precisely, I am actually just aiming my calves at that point and relying on the fact that my feet are on the end of my calves.

Comment by ryan_b on What are some non-purely-sampling ways to do deep RL? · 2019-12-05T17:32:03.487Z · score: 7 (4 votes) · LW · GW

This doesn't strike directly at the sampling question, but it is related to several of your ideas about incorporating the differentiable function: Neural Ordinary Differential Equations.

This is being exploited most heavily in the Julia community. The broader pitch is that they have formalized the relationship between differential equations and neural networks. This allows things like:

  • applying differential equation tricks to computing the outputs of neural networks
  • using neural networks to solve pieces of differential equations
  • using differential equations to specify the weighting of information

The last one is the most intriguing to me, mostly because it solves the problem of machine learning models having to start from scratch even in environments where information about the environment's structure is known. For example, you can provide it with Maxwell's Equations and then it "knows" electromagnetism.
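The core trick can be shown in a few lines (my own toy, not taken from the paper or the Julia libraries: a one-parameter stand-in for the learned dynamics, integrated with fixed-step Euler):

```python
import math

def neural_dynamics(h, w=0.8, b=0.1):
    # Stand-in for f_theta(h), the "network" that outputs the derivative dh/dt.
    return math.tanh(w * h + b)

def forward(h0, t0=0.0, t1=1.0, steps=100):
    # The forward pass is numerical integration of dh/dt = f_theta(h):
    # swap in any ODE solver trick here without touching the network.
    h, dt = h0, (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * neural_dynamics(h)
    return h

out = forward(0.5)
```

This is the sense in which the relationship is formalized: the network only specifies the derivative, so all the differential-equation machinery (adaptive solvers, known physical terms like Maxwell's Equations) slots into `forward` directly.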

There is a blog post about the paper and using it with the DifferentialEquations.jl and Flux.jl libraries. There is also a good talk by Christopher Rackauckas about the approach.

It is mostly about using ML in the physical sciences, which seems to be going by the name Scientific ML now.

Comment by ryan_b on Seeking Power is Instrumentally Convergent in MDPs · 2019-12-05T16:37:23.894Z · score: 19 (9 votes) · LW · GW

Strong upvote, this is amazing to me. On the post:

  • Another example of explaining the intuitions for formal results less formally. I strongly support this as a norm.
  • I found the graphics helpful, both in style and content.

Some thoughts on the results:

  • This strikes at the heart of AI risk, and to my inexpert eyes the lack of anything rigorous to build on or criticize as a mechanism for the flashiest concerns has been a big factor in how difficult it was and is to get engagement from the rest of the AI field. Even if the formalism fails due to a critical flaw, the ability to spot such a flaw is a big step forward.
  • The formalism of average attainable utility, and the explicit distinction from number of possibilities, provides powerful intuition even outside the field. This includes areas like warfare and business. I realize it isn't the goal, but I have always considered applicability outside the field as an important test because it would be deeply concerning for thinking about goal-directed behavior to mysteriously fail when applied to the only extant things which pursue goals.
  • I find the result aesthetically pleasing. This is not important, but I thought I would mention it.

Comment by ryan_b on Symbiotic Wars · 2019-12-04T21:02:40.847Z · score: 2 (1 votes) · LW · GW

I feel like this was rendered its own explicit meme in the form of The Game.

Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-04T15:02:33.130Z · score: 11 (2 votes) · LW · GW

They ask whether TFP and related measures undervalue the tech sector. They conclude no:

  • Countries with smaller tech sectors than the US see a similar productivity slowdown.
  • Even if undervalued, the tech sector is not big enough to explain the whole slowdown in the US.
  • The slowdown begins in 1973, predating the tech sector.

Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-04T14:52:57.245Z · score: 11 (2 votes) · LW · GW

They agree, and even raise approximately the same point:

To consider a simple example, imagine that an American company is inefficient, and then a management consultant comes along and teaches that company better personnel management practices, thereby boosting productivity. Does this count as an improvement in TFP or not? Or is it simply an increase in labor supply, namely that of the consultant? On one hand, some hitherto-neglected idea is introduced into the production process. That might militate in favor of counting it as TFP. On the other hand, the introduced idea is not a new one, and arguably the business firm in question is simply engaged in “catch up” economic growth, relative to more technologically sophisticated firms.

I am confused by their distinction between "catch up" growth and regular growth; it seems to me it should not matter how long it takes for an idea to diffuse when counting its value. Consider if each idea were like a corporation: it's not like anyone dismisses the growth that happened after the iPod came out as "catch up" value, and only gains during Jobs' original tenure count as "real" value.

It does seem clear to me that the timing problem makes it very difficult to disentangle from other ideas at this high level.

Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-03T16:28:20.558Z · score: 12 (3 votes) · LW · GW

Problems with the TFP:


For instance, all attempts to measure scientific progress through productivity come up against a timing problem: the innovation does not happen at the same time as it is adopted.

Relevant footnote:

Comin, Diego, Bart Hobijn, and Emilie Rovito. Five facts you need to know about technology diffusion. No. w11928. National Bureau of Economic Research, 2006.

Undervalues enhancements to labor, capital, or land:

many scientific advances work through enabling a greater supply of labor, capital, and land, and those advances will be undervalued by a TFP metric. Let’s say someone invents a useful painkiller, and that makes it easier for many people to show up to work and be productive. Output will rise, yet that advance will show up as an increase in labor supply, rather than as an increase in technology or scientific knowledge.

Some ideas are counted as capital:

The more general problem is that many scientific and technological advances are embodied in concrete capital goods.

Some ideas are counted as labor:

If a worker generates and carries forward a new scientific idea for producing more with a given amount of labor, that measures the same way as the worker being taught greater conscientiousness and producing more.

It is not clear how consistent this is:

In defense of TFP measures, these problems are not always so serious if these biases are roughly constant over time. In that case, changes in TFP still would reflect changes in the rate of progress of science and technology. The absolute level of TFP could be biased by capital-embodied and labor-embodied technical change, but over time, for comparisons, the expected sign of that bias might be close to zero. Still, it is not obvious that the rate of embodiment of new ideas into capital and labor, in percentage terms, should be constant over time.

Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-03T16:20:19.157Z · score: 11 (2 votes) · LW · GW

They look at Total Factor Productivity instead:

Total factor productivity is an attempt to measure overall economic effectiveness: how much a society can do with the inputs it has. TFP, multi-factor productivity, or the Solow residual are all different names for this same concept. It refers to the amount of output growth left unexplained after accounting for all inputs, i.e. it is a residual and not something we measure directly. So if output grew by ten, and the contributions of land, labor, and capital each were judged at 3 (total of 9), TFP would be measured at one. As such, it is vulnerable to measurement errors in any of the main series that go into its calculation. Nonetheless the hope is that this variable measures the “left over” contribution of ideas to the process of production, and thereby helps us measure the efficacy of science. It won’t confuse progress in science with a country having a large stock of oil.
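The arithmetic in the quoted example can be sketched directly, as a toy version of the residual calculation (this is only an illustration of the "left over" idea, not how TFP is actually estimated):

```python
# Toy growth-accounting example mirroring the quote: output grew by ten,
# land, labor, and capital each contributed 3 (total of 9), so the
# residual -- TFP -- is measured at one.
def solow_residual(output_growth, input_contributions):
    """TFP is whatever output growth is left after subtracting the inputs."""
    return output_growth - sum(input_contributions)

print(solow_residual(10, [3, 3, 3]))  # -> 1
```

Note that because it is a residual, any measurement error in output or in any of the input series lands directly in this number, which is the vulnerability the quote describes.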

This seems to match the historical record better:

One advantage of TFP is that it seems to correspond to common intuitions as to when scientific progress was especially high. Robert J. Gordon, in his book The Rise and Fall of American Growth, has argued at great length that scientific and technological progress reached a peak in the early part of the twentieth century. That was a time when fossil fuels, electrification, industrialization, nitrogen fertilizer, cars, radio, telephones, clean water, vaccines, and antibiotics all took on major roles in human lives in the wealthier countries. Within a matter of decades, human life was transformed, in large part because of the extension and application of the earlier Industrial Revolution. Consistent with this picture, American TFP typically grew quickly in the 1920s and 1930s, ranging from two to slightly over three percent per year. In more recent times, in contrast, TFP growth often has ranged between one and one and a half percent.
Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-03T16:14:24.408Z · score: 11 (2 votes) · LW · GW

Disentangling science and GDP:

For instance, Norwegian GDP per capita is typically 20-30% above that of Sweden and Denmark, but Norway is accessing the same general stock of scientific knowledge. Similarly, US agricultural productivity per labourer and labour hour outperformed Europe during the 19th and 20th centuries, but some of that gap probably sprung from a high ratio of land to inhabitant, rather than an inherent technological advantage, and for some of that time American science may have been behind that of Europe.

The relevant footnote:

On agricultural productivity comparisons, see Broadberry, Stephen, and Mary O’Mahony. "Britain’s Twentieth-Century Productivity Performance in International Perspective." Work and pay in the twentieth century (2007): 301-329, and Broadberry, Stephen N., and Douglas A. Irwin. "Labor productivity in the United States and the United Kingdom during the nineteenth century." Explorations in Economic History 43, no. 2 (2006): 257-279.
Comment by ryan_b on CO2 Stripper Postmortem Thoughts · 2019-12-02T15:46:53.988Z · score: 2 (1 votes) · LW · GW

The planning fallacy for garage projects is an interesting problem, because it doesn't lend itself immediately to a reference class unless you have done a lot of projects before.

Still, next time you want to tackle a garage invention, you can just predict it will take as long as this one did. It will be interesting to see how the difference between projects compares to the planning fallacy impact on a single project; in very big projects, the size of the project completely dominates any considerations of field or technology.

Come to think of it, does anyone know if there is a maker community somewhere that records its budgets and timelines? Maybe that could be used as a reference class for garage projects.

Comment by ryan_b on A voting theory primer for rationalists · 2019-11-30T19:59:03.384Z · score: 3 (2 votes) · LW · GW

I nominate this post because it does a good job of succinctly describing different ways to approach one of the core problems of civilization, which is to say choosing policy (or choosing policy choosers). A lot of our activity here is about avoiding catastrophe somehow; we have spent comparatively little time on big-picture things that are incremental improvements.

Anecdotally, this post did a good job of jogging loose a funk I was in regarding the process of politics. Politics is a notoriously *ugh field* kind of endeavor in the broader culture, and a particular taboo in this community. And yet, the very act of seriously considering what the options are is a soothing balm when you are otherwise in a state of overwhelming disgust. It’s like the ritual of evitable dismay.

Comment by ryan_b on Embedded Agents · 2019-11-28T21:46:05.457Z · score: 10 (2 votes) · LW · GW

I nominate this post for two reasons.

One, it is an excellent example of providing supplemental writing about basic intuitions and thought processes, which is extremely helpful to me because I do not have a good enough command of the formal work to intuit them.

Two, it is one of the few examples of experimenting with different kinds of presentation. I feel like this is underappreciated and under-utilized; better ways of communicating seems like a strong baseline requirement of the rationality project, and this post pushes in that direction.

Comment by ryan_b on The Costly Coordination Mechanism of Common Knowledge · 2019-11-28T02:40:26.483Z · score: 9 (4 votes) · LW · GW

I have definitely linked this more than any other post. The key insight for me is that common knowledge is something which has specific costs and trade-offs. Previously I implicitly viewed common knowledge as an accident or a feature of the environment.

Comment by ryan_b on A Theory of Pervasive Error · 2019-11-26T18:50:11.594Z · score: 11 (6 votes) · LW · GW

Mencius Moldbug is the pen name of Curtis Yarvin.

Comment by ryan_b on Shallow Review of Consistency in Statement Evaluation · 2019-11-20T17:39:09.193Z · score: 7 (4 votes) · LW · GW

I feel like the simple Kahneman algorithms are amazing. Based on what I read in the Harvard Business Review article this isn't six to eight complex variables; this is more like six cells in a spreadsheet. This has several implications:

  • Cheap: algorithms should be considered superior to expert opinion because they perform similarly for a fraction of the price.
  • Fast: spreadsheet calculations are very, very fast relative to an expert review process. Decision speed is a very common bottleneck in organizations and complex tasks; having more time is like an accumulated general advantage in the same way as having more money.
  • Simple: the low number of variables makes the options for changes to make clear, and it is easy to tell the difference between two versions of the algorithm.
  • Testable: being cheap, fast, and simple makes them ideal candidates for testing. It is easy to run multiple versions of an algorithm side by side, for almost no more resources than it takes to run one version.
  • Bootstrapping: because it is easy to test them, this effectively lowers the threshold of expertise required to identify the variables in the first place. Instead literature reviews no more intensive than the kind we do here would suffice to identify candidates for variables, and then testing can sort the most effective ones.

Even in the case where such an algorithm is exceeded by expertise, these factors make it easy to make the algorithm ubiquitous which implies we can use them to set a new floor on the goodness of decisions in the relevant domain. That really seems like raising the sanity waterline.

Decisions: fast, cheap, good. Sometimes we can have all three.
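To make the "six cells in a spreadsheet" point concrete, here is a minimal sketch of a Kahneman-style simple algorithm. The six variables are hypothetical placeholders, not from any published model; the point is only that an equal-weight sum of a handful of ratings is the whole algorithm:

```python
# A hedged sketch of a "simple algorithm": rate six variables on a 1-5
# scale and sum them with equal weights. No expert tuning, no regression;
# the variable names and scale are illustrative assumptions.
def simple_score(ratings):
    """Equal-weight sum of six 1-5 ratings -- the entire decision rule."""
    assert len(ratings) == 6 and all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

# Testing a rival version is just swapping which six cells feed the sum,
# which is why these algorithms are so cheap to run side by side.
print(simple_score([4, 3, 5, 2, 4, 3]))  # -> 21
```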

Comment by ryan_b on The new dot com bubble is here: it’s called online advertising · 2019-11-19T15:57:30.869Z · score: 15 (6 votes) · LW · GW

What you are saying is reasonable, but it feels to me like you put the burden of proof on the author of the article. The question is, why should we believe advertising works at all? So the way I see it, the burden of proof is on the people doing advertising, and the article is asserting that they have not met it.

Comment by ryan_b on Units of Action · 2019-11-18T22:52:15.890Z · score: 2 (1 votes) · LW · GW
you can usefully think of "men" as a group and make decisions based on considerations like "if we do this, men will do that".

I agree with this, but a unit of action does not add anything to the concept; it is how marketing and advertising and politics all work currently. I want to capture something different: in particular the execution of plans or working towards a goal.

I feel like the value of an abstraction is that you can think about fewer objects. If you can only work with an abstraction by taking its component objects and breaking them down to their component objects, then it's not clear in what sense you're actually abstracting.

That's interesting, and I am deeply sympathetic to this view. I do feel differently: my lens is that abstractions are for capturing the optimal amount of information. The most important thing is knowing what information is important, and then I want the most efficient way to capture it. My thinking gets muddy, though, when I don't really know what is important. This biases me in favor of being able to capture more information if necessary because if the abstraction doesn't capture the information I need then it is useless or, what is worse, misleading.

Short digression: a background assumption of mine is that there is always an algorithm or decision making process somewhere in which the abstraction will be employed. A concrete example of this which I reread from time to time is a blog post describing algorithmic efficiency in terms of problem information. The motivating example is Matlab, which is a ubiquitous numerical problem solver in engineering: the programming language is slow and wasn't designed around performance, but they get pretty good performance when solving linear systems because their algorithms do a bunch of checks to see if specific kinds of algorithms can be applied that capture the information more efficiently. This is stuff like *is the matrix square?* or *is the matrix triangular?* which matters because in each of these cases they have a maximally efficient algorithm.
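The Matlab-style dispatch can be sketched in a few lines. This is a hedged illustration of the idea, not Matlab's actual implementation: check for structure first, and when the matrix turns out to be lower-triangular, use the cheap O(n²) forward substitution instead of a general solver.

```python
# Sketch of structure-based dispatch for solving Ax = b: cheaply test a
# structural property, then pick the algorithm that exploits it.
def is_lower_triangular(A):
    """True if every entry above the diagonal is zero."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(i + 1, n))

def forward_substitution(A, b):
    """O(n^2) solve for lower-triangular A -- the fast path."""
    x = []
    for i in range(len(A)):
        s = sum(A[i][j] * x[j] for j in range(i))
        x.append((b[i] - s) / A[i][i])
    return x

def solve(A, b):
    if is_lower_triangular(A):
        return forward_substitution(A, b)
    raise NotImplementedError("fall back to a general solver here")

A = [[2.0, 0.0], [1.0, 3.0]]
print(solve(A, [4.0, 11.0]))  # -> [2.0, 3.0]
```

The structural check costs almost nothing relative to the solve, which is why doing a bundle of such checks up front pays for itself.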

Returning to the example of the obstructed agency you gave, what I want is to be able to reason about the success case and about the failure case (which if I read you correctly, is where you think the unit of action breaks down). Rolling in the intuition about problem information, when we are thinking about the agency suing a company in a unit-of-action context:

If the lawsuit proceeds normally we have only the two objects:

[Agency, Company]

But suppose the blackmail gambit works. I still want to be able to describe what is happening, so I recurse on the agency to relevant sub-units, and we have:

[Head of the agency, Investigation team, Company]

We can imagine a scenario where the blackmail gambit is discovered and the agency responds, which probably means zeroing in on the company sub-units, like whichever VP ran the operation and his informant, which brings us to:

[Head of the agency, Investigation team, Company VP, VP's informant]

And so on. The benefit is that I only need to go down into sub-units when the units I am currently looking at fail to capture the needed details. Further, I only need to look at the relevant sub-units, instead of committing to analyzing all agents/employees or all teams, which would capture all the information I need, but might be impossible (individuals) or hideously inefficient (teams).
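The lazy refinement in the bracketed lists can be sketched as a tiny procedure: keep the coarse units, and expand a unit into its sub-units only when the current level fails to capture what is happening. The sub-unit table below is just the running example, not a real API:

```python
# Hedged sketch of recursing on sub-units only as needed. Names are the
# hypothetical agency/company example from the discussion above.
SUB_UNITS = {
    "Agency": ["Head of the agency", "Investigation team"],
    "Company": ["Company VP", "VP's informant"],
}

def refine(units, target):
    """Replace `target` with its sub-units; leave every other unit coarse."""
    out = []
    for u in units:
        out.extend(SUB_UNITS[u] if u == target else [u])
    return out

units = ["Agency", "Company"]
units = refine(units, "Agency")   # blackmail gambit works: recurse on Agency
units = refine(units, "Company")  # gambit discovered: recurse on Company
print(units)  # the four-element list from the example
```

The cost of the analysis grows only with the failures of the coarse description, rather than with the total number of agents in the organizations.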

Comment by ryan_b on Units of Action · 2019-11-15T19:42:53.646Z · score: 2 (1 votes) · LW · GW
For example, the company may decide to blackmail (the head of the agency) into pressuring (the leader of the team pursuing the case) into dropping it or flubbing the investigation or something. You won't get very far thinking of the agency as a unit, if that happens.

I tried to capture this with the game theory intuition: what this example demonstrates is that the agency has sub-units, here being the head and the investigative team. I do agree that the investigation is likely to fail, but the detail I want to highlight is that acting and the success of the action are distinct. So when an agency launches an investigation, and then the investigation is <successful|failed|sabotaged>, the real weight lies in launching the investigation and not in its outcome.

I think the intuition becomes clearer when you compare this example against one where the company decides to blackmail a key witness instead. If we imagine the agency is the FBI, I am confident you'll agree that it is a much, much bigger deal to blackmail the head of the FBI than some private citizen, even in the context of the same investigation. The simplest way to express this is that blackmailing the head of the FBI is an attack upon the agency. In both examples only one person is the target, in both cases the company does blackmail, but the consequences if discovered would be entirely lopsided.

I thought this was a good point, because it seems to contain a hidden assumption:

but... well, nor does every member of an agency sue a company together.

It is also true that not everyone in the company did something worth getting sued over. The intuition here is that the people that make up the agency are not a viable way to analyze what the agency is doing, even where it is technically feasible; they are the wrong unit of analysis. People aren't relevant to the agency's behavior before they join; they have much less relevance after they leave; while they are there, whatever aspect of the agency you are thinking about will really only involve a subset of members. Doing things at the same time isn't the dividing line; doing them with the same purpose is.

I'm not clear on why this is:

Because there's no explicit coordination between them as a group? But if you have to consider internal communication, the abstraction seems to lose value.

Where do you see value being lost? It might be worth pointing out that the coordination doesn't have to be explicit per se, just intentional. So the men who show up to see the Expendables are not a unit of action, but a Men's Movie Club going to see the Expendables would be; further Men's Movie Club could resolve to see every new Expendables movie on opening night at a particular theater, and wouldn't need to communicate internally every time to do it because it is common knowledge. It does feel like there is something about internal communication that could mark bundles of actions, though.

Comment by ryan_b on Open & Welcome Thread - November 2019 · 2019-11-08T19:59:12.083Z · score: 7 (4 votes) · LW · GW

I found a Q&A with one of the authors on the book's website. It describes what they were hoping to accomplish, who the audience of the new book is, and summarizes some of the theoretic advancements.

Comment by ryan_b on Open & Welcome Thread - November 2019 · 2019-11-08T19:39:00.748Z · score: 12 (5 votes) · LW · GW

There's a new book out, Game-Theoretic Foundations for Probability and Finance by Glenn Shafer and Vladimir Vovk. The idea is that perfect information games can replace measure theory as the basis of probability, and also provide a mathematical basis for finance.

I have their earlier book, which I reviewed on LessWrong. I don't have the new one, in which they claim more generalization, abstraction, and coherent footing as a result of 18 years of further development. They also claim their method for continuous time finance is better and easier to use than current practice.

Has anyone else read this? It's on my list, but it will be pretty far down, so I would welcome other opinions as to whether I should promote it.

Comment by ryan_b on Units of Action · 2019-11-07T17:43:15.484Z · score: 2 (1 votes) · LW · GW


1. I can see a lot of overlap with this and several senses of the term institution. The reason I find it convenient to use a different term is that it shifts the emphasis to what specific groups are doing. For example, family is an institution, but the Templeton family is a unit of action. The corporation is an institution, but IBM is a unit of action. It also usefully excludes broader institutions, like the market or civil rights, while keeping the New York Stock Exchange and the ACLU as units of action. One way to think of it: units of action are the microphenomena of institutions, and the macrophenomena of people.

2. Coming from a firmly demographic perspective on groups, like is common in political campaigning, this could easily get fuzzy. Consider religion: Christian and Muslim are not units of action, but Mormons and Catholics are essentially big hierarchies while Sunnis, Jews and Evangelicals are not. In the campaign-view of groups, what I think of as units of action are mostly important because they are indicators for demographic groups: NAACP is an indicator of black voter support, AARP is an indicator of senior voter support, unions of working class voter support, etc. This view seems to get the most airtime by far, though I could be biased because I consume an unusually high amount of political information.

Comment by ryan_b on Total horse takeover · 2019-11-05T17:32:15.872Z · score: 2 (1 votes) · LW · GW

Ha! I was way off. Thank you.

Comment by ryan_b on Total horse takeover · 2019-11-05T16:30:47.374Z · score: 3 (2 votes) · LW · GW

The term "rat cev" is new to me. My guess for rat is rational, but I am drawing a blank on cev. It probably isn't ceviche, though.

Comment by ryan_b on Open & Welcome Thread - October 2019 · 2019-10-31T16:21:18.078Z · score: 2 (1 votes) · LW · GW

I have always understood this to be a consequence of the Politics is the Mindkiller custom. The most relevant pieces outside the Craft and the Community on LessWrong are Raemon's The Relationship Between the Village and the Mission, and The Schelling Choice is Rabbit, not Stag.

I can think of a couple relevant-but-not-specific areas outside the rationalist community:

multivocality - the fact that single actions can be interpreted coherently from multiple perspectives simultaneously, the fact that single actions can be moves in many games at once, and the fact that public and private motivations cannot be parsed.

This leads to something they call robust action, which basically means "hard to interfere with." So my prior for successful movements is a morally multivocal ideology for hunting stag robustly.

Comment by ryan_b on To like each other, sing and dance in synchrony · 2019-10-31T16:05:13.880Z · score: 7 (3 votes) · LW · GW

I have just discovered this post, long after it was written. It is closely related to things I am now thinking about, chief among them the importance of shared experiences.

Still relevant, sources are helpful, great post!

Comment by ryan_b on Halloween · 2019-10-31T14:30:50.631Z · score: 6 (3 votes) · LW · GW

It was different not so long ago. When I was a child in the Midwest (mid-80s through at least the mid-90s), Halloween had a lot more tradition and ritual to it, but these rituals have been systematically curtailed or banned (I now live in the South, and the same is true here). Trick or treating used to be long and elaborate, treats were more commonly homemade and also elaborate, and costumes and decorations were more explicitly focused on fear and death. Halloween was about community participating in rituals of fear.

In the meantime the average age of homeowners increased, the population of children fell, and the general level of anxiety in the population increased. The entertainment industry diversified, and now even if you have cable there won't be a string of horror movies everyone watches. A whole genre of scary films, suspense, effectively disappeared in the meantime. Halloween traditions died by trivial inconveniences. Since the community aspects collapsed, new elements filled the void; entertainment entirely shifted more towards adrenaline, comedy and sex; children's entertainment became increasingly nonsensical and averse to serious things; costumes followed suit.

Halloween was my favorite holiday when I was a child, but most of that is gone and now my child shall have only the memories my wife and I can impart. This saddens me greatly.

Comment by ryan_b on The Technique Taboo · 2019-10-31T13:37:31.511Z · score: 2 (1 votes) · LW · GW
It might be different for those who grew up in violent neighborhoods

Ha! I suppose the military is the next best thing. The prevalence of bodybuilding is kind of weird, considering that as a profession it is endurance-focused.

Comment by ryan_b on The Technique Taboo · 2019-10-30T14:37:22.976Z · score: 11 (5 votes) · LW · GW

Forbids electronic systems for logging their progress? Is it possible there's a separate motivation for this than hostility to technique, i.e. 'don't sit on the machine and play with your phone' or 'for liability reasons we can't allow you to attach the velocity device to the equipment'?

My experience of weightlifting is highly technique focused. Frequently unsophisticated, but technique focused nevertheless. Sports like powerlifting and olympic lifting are increasing in popularity all the time, and these are utterly reliant on discussion of technique. Copious discussion.

On the other hand, it does occur to me that the comparative dearth of authoritative answers is consistent with a taboo in the relatively recent past.