Posts

ryan_b's Shortform 2020-02-06T17:56:33.066Z · score: 7 (1 votes)
Open & Welcome Thread - February 2020 2020-02-04T20:49:54.924Z · score: 18 (9 votes)
Funding Long Shots 2020-01-28T22:07:16.235Z · score: 10 (2 votes)
We need to revisit AI rewriting its source code 2019-12-27T18:27:55.315Z · score: 10 (7 votes)
Units of Action 2019-11-07T17:47:13.141Z · score: 7 (1 votes)
Natural laws should be explicit constraints on strategy space 2019-08-13T20:22:47.933Z · score: 10 (3 votes)
Offering public comment in the Federal rulemaking process 2019-07-15T20:31:39.182Z · score: 19 (4 votes)
Outline of NIST draft plan for AI standards 2019-07-09T17:30:45.721Z · score: 19 (5 votes)
NIST: draft plan for AI standards development 2019-07-08T14:13:09.314Z · score: 17 (5 votes)
Open Thread July 2019 2019-07-03T15:07:40.991Z · score: 15 (4 votes)
Systems Engineering Advancement Research Initiative 2019-06-28T17:57:54.606Z · score: 23 (7 votes)
Financial engineering for funding drug research 2019-05-10T18:46:03.029Z · score: 11 (5 votes)
Open Thread May 2019 2019-05-01T15:43:23.982Z · score: 11 (4 votes)
StrongerByScience: a rational strength training website 2019-04-17T18:12:47.481Z · score: 15 (7 votes)
Machine Pastoralism 2019-04-03T16:04:02.450Z · score: 12 (7 votes)
Open Thread March 2019 2019-03-07T18:26:02.976Z · score: 10 (4 votes)
Open Thread February 2019 2019-02-07T18:00:45.772Z · score: 20 (7 votes)
Towards equilibria-breaking methods 2019-01-29T16:19:57.564Z · score: 23 (7 votes)
How could shares in a megaproject return value to shareholders? 2019-01-18T18:36:34.916Z · score: 18 (4 votes)
Buy shares in a megaproject 2019-01-16T16:18:50.177Z · score: 15 (6 votes)
Megaproject management 2019-01-11T17:08:37.308Z · score: 57 (21 votes)
Towards no-math, graphical instructions for prediction markets 2019-01-04T16:39:58.479Z · score: 30 (13 votes)
Strategy is the Deconfusion of Action 2019-01-02T20:56:28.124Z · score: 75 (24 votes)
Systems Engineering and the META Program 2018-12-20T20:19:25.819Z · score: 31 (11 votes)
Is cognitive load a factor in community decline? 2018-12-07T15:45:20.605Z · score: 20 (7 votes)
Genetically Modified Humans Born (Allegedly) 2018-11-28T16:14:05.477Z · score: 30 (9 votes)
Real-time hiring with prediction markets 2018-11-09T22:10:18.576Z · score: 19 (5 votes)
Update the best textbooks on every subject list 2018-11-08T20:54:35.300Z · score: 80 (30 votes)
An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics 2018-10-30T18:36:14.159Z · score: 31 (7 votes)
Why don’t we treat geniuses like professional athletes? 2018-10-11T15:37:33.688Z · score: 27 (16 votes)
Thinkerly: Grammarly for writing good thoughts 2018-10-11T14:57:04.571Z · score: 6 (6 votes)
Simple Metaphor About Compressed Sensing 2018-07-17T15:47:17.909Z · score: 8 (7 votes)
Book Review: Why Honor Matters 2018-06-25T20:53:48.671Z · score: 31 (13 votes)
Does anyone use advanced media projects? 2018-06-20T23:33:45.405Z · score: 45 (14 votes)
An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes 2018-04-19T17:30:39.893Z · score: 38 (9 votes)
Death in Groups II 2018-04-13T18:12:30.427Z · score: 32 (7 votes)
Death in Groups 2018-04-05T00:45:24.990Z · score: 48 (19 votes)
Ancient Social Patterns: Comitatus 2018-03-05T18:28:35.765Z · score: 20 (7 votes)
Book Review - Probability and Finance: It's Only a Game! 2018-01-23T18:52:23.602Z · score: 25 (10 votes)
Conversational Presentation of Why Automation is Different This Time 2018-01-17T22:11:32.083Z · score: 70 (29 votes)
Arbitrary Math Questions 2017-11-21T01:18:47.430Z · score: 8 (4 votes)
Set, Game, Match 2017-11-09T23:06:53.672Z · score: 5 (2 votes)
Reading Papers in Undergrad 2017-11-09T19:24:13.044Z · score: 42 (14 votes)

Comments

Comment by ryan_b on Comprehensive COVID-19 Disinfection Protocol for Packages and Envelopes · 2020-03-29T16:37:17.803Z · score: 4 (2 votes) · LW · GW

Well that sucks. Take care of yourself and stay sane during isolation!

Comment by ryan_b on History's Biggest Natural Experiment · 2020-03-25T17:32:45.096Z · score: 2 (1 votes) · LW · GW

I feel like this is evidence for the natural experiment interpretation. This means we will get a steady stream of new findings as each maturation window approaches, for decades to come.

Comment by ryan_b on Are veterans more self-disciplined than non-veterans? · 2020-03-24T02:12:32.311Z · score: 3 (2 votes) · LW · GW

To be more exact, if you have a group, then the group provides social incentives; but social incentives do not imply a group. For example, if I were publicly humiliated in front of strangers, they might mock me if they saw me later in a restaurant. This is a social (dis)incentive, but the fact remains that we aren’t in a group.

What qualifies people as a group in the sense that I intend is at least twofold: they have to share the same set of incentives, and this fact has to be common knowledge among them.

I do agree that if a person trains successfully it will improve their long-run discipline, but military-style training won't meaningfully change the outcome relative to non-military training, because the group context is what does the extra work. If that is not the focus, i.e. veterans are just an example of a disciplined population, then my comments are probably not relevant to the true concern.

Comment by ryan_b on Can crimes be discussed literally? · 2020-03-23T19:24:35.946Z · score: 12 (6 votes) · LW · GW

It seems to me precisely the opposite: my reading is that Benquo is driving exactly at how to talk about the problem of systemic falsification of information.

If the post is noncentral, what is the central thing instead?

Comment by ryan_b on Are veterans more self-disciplined than non-veterans? · 2020-03-23T16:21:39.030Z · score: 6 (4 votes) · LW · GW

I am a veteran, and my inside view suggests two things: one, the least disciplined members of the population are filtered out by the military (which is to say they are not accepted, or are kicked out early); two, the military experience pushes veterans towards the extremes.

Reasons to consider that veterans would be more productive than average:

  • Acclimated to long and/or strenuous work periods.
  • Better access to education through veterans programs and admission boosts.
  • Direct boost to employability in a variety of industries.

Reasons to consider that veterans would be less productive than average:

  • Higher rates of homelessness
  • Higher rates of mental illness and suicide
  • Higher rates of substance abuse
  • Etc.

My expectation is that the productivity advantage is highest when veterans enter a civilian industry that matches military tasks closely, like compliance with regulations or uncomfortable work environments. I also expect that the veterans who fail to re-adapt to civilian life suffer an almost complete collapse of productivity.

Turning to the question of discipline, I think we will benefit from a little context. Discipline in the military is very much a team phenomenon; Army training is focused overwhelmingly on establishing and maintaining a group identity. Most of the things people associate with military discipline require other people to make sense, like the chain of command, pulling security, and how tasks are divided. Even the individual things, like physical fitness or memorizing trivia, are thoroughly steeped in the team environment, because they are motivated by being able to help your buddy out and are how status is sorted in the group.

I believe your friend's statement:

If you can just train yourself like you're in the army, then you can become just as self disciplined as a soldier

is wrong as a consequence, because you can never train yourself like you are in the Army. That fundamentally needs a group, entirely separate from the question of social incentives and environment. Outside of the group context, discipline doesn't really mean anything more than habit formation.

Comment by ryan_b on LessWrong Coronavirus Agenda · 2020-03-19T17:28:07.021Z · score: 4 (2 votes) · LW · GW

If the same type of facility works for almost every kind of vaccine, do we think there would be interest in constructing the facilities as a speculative venture? Consider:

1. The economy is in chaos and may remain so, which I expect to produce unusually affordable access to design firms, construction crews, raw materials, and land.

2. There will be a strong incentive for regulators/inspectors to move with best speed, and the current administration at least in the US has a track record of being friendly to shortcuts.

3. If the facilities are already built, this limits the risk the companies producing the vaccines need to absorb in order to increase supply.

4. We could squeeze out unscrupulous opportunists.

Comment by ryan_b on Ways that China is surpassing the US · 2020-03-12T21:36:45.994Z · score: 2 (1 votes) · LW · GW

My model for this is that China is achieving success largely by ignoring externalities. Environmental pollution is a prime example, like in the case of their previous recycling policy and mining of rare earth minerals. It is actually against the law for the US to build as quickly or as cheaply as China, but this is reasonably motivated by trying to account for things like pollution and safety, and avoiding things like resettling entire towns.

Chinese success looks a lot like the WWII and postwar years in the US, and for much the same reasons.

Comment by ryan_b on Cortés, Pizarro, and Afonso as Precedents for Takeover · 2020-03-07T17:40:54.428Z · score: 2 (1 votes) · LW · GW
because why was it that the conquistadors were able to exploit the locals and not the other way around?

Have you considered the possibility that it was a case of mutual exploitation? The Aztec allies of the conquistadors weren't there out of the goodness of their hearts; they had found a new angle that would help them defeat Tenochtitlan. They lost the post-victory power struggle, but it was always going to be someone.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-02-10T19:41:20.948Z · score: 2 (1 votes) · LW · GW
You make good points here. Any ideas why those other shifts happened and how can we help reverse them or prevent them from happening elsewhere?

Mostly it looks to me like a series of unrelated changes built up over time, and the unintended consequences were mostly adverse.

An example is the War on Cancer and the changes to funding that came with it. It had long been the case that funding was mostly handed out on a project-by-project basis, but in order to get the funding dedicated to cancer research it became necessary to explain how a given project would advance cancer research. The obvious first-order impact is an increase in administrative overhead for getting the money.

Alongside this, science professionalized. I expect that when a sense of a field's importance permeates, professionalization is viewed as a natural consequence, but it seems to have misfired here. Professionalization, like other forms of labor organization, isn't about maximizing anything but about ensuring a minimum. This means things like more metrics, which is why our civilization formally prefers a lot of crappy scientific papers to a few good ones, and doesn't want any kind of non-paper presentation of scientific progress at all. Science jobs become subject to Goodharting, because people start thinking that the right way to get more science is just to increase the number of scientists, on the assumption that they are all interchangeable professionals with a reliable minimum output.

The university environment also got leaned on as a lever for progress; the student loan programs all grew over this same period, which seems to have driven a long period of competition for headcount. This shifted universities' priorities from executing their nominal mission towards signalling desirability among students/parents/etc. I am certain at least part of that came at the expense of faculty, even if only by increasing the administrative burden still further by yet more metrics.

On the fixing side, I am actually pretty optimistic. A few simple things would probably help a lot; two examples are funding and organization. Bell Labs and Xerox PARC have been discussed here a lot, and both deviated significantly from the standard university/government system of funding individual projects case by case. Under the project/grant system, being a scientist reduces to being able to successfully get funding for a series of projects over time. At Bell and at PARC, they instead made long-term investments on a person-by-person basis. I think this has wide-ranging effects, not least of which is that there wasn't a lot of administrative overhead to a given investigation; investigations could be picked up, put down, or adapted as needed. Another effect, maybe intentional but seemingly happenstance, is that they built a community of researchers in the colloquial sense. This is pretty different from the formal employee relationships that dominate now. Around 7 years ago I listened to a recruiting pitch from Sandia National Laboratories for engineering students, and asked how communication was between different groups in the lab. The representative said she knew of a case where two labs right across the hall from each other were investigating the same thing for over a year before they realized it, because nobody talks.

This suggests to me that a university that was struggling financially, or maybe just needed to take a gamble on moving up in the world, could cheaply implement what appears to be a superior research-producing apparatus, just by shifting their methods of funding and tracking results.

Comment by ryan_b on ryan_b's Shortform · 2020-02-06T17:56:33.320Z · score: 5 (3 votes) · LW · GW

Are math proofs useful at all for writing better algorithms? I saw on Reddit recently that Batchelor's Law was proven in 3D; the core idea seems to be using stochastic assumptions to show it cannot be violated. The Quanta article does not seem to contain a link to the paper, which is weird.

Batchelor's Law is the experimentally observed fact that turbulence follows a specific scaling across scales, which is to say when you zoom in on a small chunk of the turbulence it looks remarkably like the whole, and so on. Something something fractals something.

Looking up the relationship between proofs and algorithms mostly turns up proofs about specific algorithms, and sometimes algorithms used as a form of proof; but what I am after is whether a pure-math proof like the above can be mined for useful information about how to build an algorithm in the first place. I have read elsewhere that algorithmic efficiency is about problem information, and this makes intuitive sense to me; but what kind of information am I really getting out of mathematical proofs, assuming I can understand them?

I don't suppose there's a list somewhere that handily matches tricks for proving things in mathematics to tricks for constructing algorithms in computer science?

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-02-05T17:44:07.694Z · score: 2 (1 votes) · LW · GW

Oops! Fixed.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-02-04T16:03:59.807Z · score: 2 (1 votes) · LW · GW
We're not seeing academia defend itself like that today. I'm not sure if the norms decayed over time, or current political forces are stronger than in the past, but neither is good news.

This is a crux of the issue, in my view. It's worth considering that this isn't happening independently of other major developments in academia: since 1950 we have seen the development of publish-or-perish culture, a shift towards administrative activities at the expense of instruction and research, and most recently the replication crisis. The great shock of the replication crisis to me was that there was a group of scientists who sincerely believed that replication was not important. That is such a fundamental part of the story of science that even laypeople know about it. I would be extremely surprised if that decay was not at least mirrored in things like principles of political noninterference. STEM is vulnerable to political takeover because the time STEM professors spend defending the spirit of free inquiry is time taken away from churning out the next paper and writing grant applications, just like everyone else.

(unless we figure out how to make sure such norms are strong enough and stay strong enough)

I think this is the mechanism by which movements fade. In order for a norm to work, people have to make continuous, active investments in it. This mostly means doing things that reflect the norm, spending money on it, or taking time to advocate for it specifically.

Out of curiosity, what do you think the specific harms are from how the left will administer universities? From the example you cited for STEM fields, it looks to me like two things: 1) universities will systematically take a hit on the talent level of their professors (in their areas of expertise); 2) some fraction of research dollars in every field will be redirected to diversity and inclusion. From my other exposure to the rhetoric, I suspect they will cripple genetics research, which is indeed a big deal and also reminiscent of Communism.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-02-03T22:28:45.199Z · score: 2 (1 votes) · LW · GW

Certainly - you are doing a great job of pointing out areas where I have a lot of implicit assumptions, and having to articulate them is useful all by itself, so I'm getting a lot out of this too.

What are you referring to here? Red Scares?

In general any large cultural movement; while I am including the Red Scares, I am also including things like Civil Rights and the Labor Movement. I also implicitly count the total cost, so both the movement's activity and the responses to it are included. I read a review of Days of Rage, which I came to through the SlateStarCodex review of Ages of Discord; the scale of conflict there was prodigious, with hundreds of bombings and frequent assassinations of police officers. Rolling back to the time of the first Red Scare: in 1921 the Battle of Blair Mountain was fought, with ~10,000 miners and unionists on one side and ~3,000 lawmen and strikebreakers on the other, armed with machine guns and aircraft (early ones, mind you). The President had to call in the Army. These are things we muddled through at a high human cost, and which we have collectively since forgotten.

Do you agree that if the conditions of the 90s-00s had continued, the outlook on x-risks would be significantly better?

I am not sure that I do. I see that the infrastructure and community surrounding x-risks have done very well during the 2010s, and I'm not familiar with any significant setbacks driven by social justice. The most important thing of which I am aware is the sense that the Bay Area is not a friendly space for inquiry anymore, but that mostly seems to imply that expanding the x-risk institutions there will be less profitable. If we were to constrain ourselves to conditions in California, then I would probably agree.

Why doesn't it include local government?

The short answer is because that isn't where it originated. My model for how this works is basically imperial: the center of gravity for a cultural movement is like the core of an imperial conquest; the movement uses the strength of the core to subjugate neighboring territories (although here we are talking about infiltrating institutions). Social justice started on the internet and in universities, and then got onto K-12 school boards and into city councils. The concrete implication of this is that if social justice withers on the internet and in universities, I expect it will subsequently vanish from school boards and city councils; because local government is not the base of power, they won't be able to push further from there. It does not seem to me that education without a research component, or local government, has the kind of signalling incentives that social justice needs to thrive internally. I think this is because they are insulated by results: K-12 is all about test scores, and local government has to deal with water and garbage collection and other practical things.

It used to take institutional-grade violence to silence dissent on a large scale, but social media (with its threat of career destruction) now serves that role

This is a good point; in my cultural-empire model it also has the effect of making virtually all institutions adjacent institutions. As a consequence, there are lots of places where we can expect social justice not to catch on, but very few that are insulated from it completely.

I don't think we've seen anything recently embed into our institutional structures in a way similar

This is another good point. My immediate thought is that I have trouble distinguishing social justice from any other form of fad in the areas it has occupied: why would making all the boys swear oaths against hitting women during an assembly be stickier than an assembly warning them of the dangers of satanic cults? How would changing the language we use to write test questions so that it includes trans people be different from making sure we don't refer to animals the kids might not have seen in their local environment? So far we aren't looking at the kind of things that change how institutions have to operate, like court precedents or constitutional amendments. The test I want to use for this looks something like "have they made any changes such that if the people within the institution did not know about social justice, the aims of social justice would still be advanced."

Of course I earlier predicted that the movement would continue to grow, so there is nothing that prohibits them from achieving such a thing in my view.

Again this assumes that the new ideology needs to be enforced by force, but that doesn't seem to be the case.

I don't need any such assumption. The question is more basic to my mind: why would anyone listen in the first place? Consider: if people were already engaged with a satisfactory ideology, what purchase could social justice gain with them? The decay of the old order here means the decay of old ideas: politics should be separate from work; the correct way to address racism is color blindness; our institutions are effective; salvation lies in the next world; etc. If there were something people believed in and were motivated by, they wouldn't be susceptible to new ideological influences. The positive implication of this is that successfully reinforcing the old ideas, or providing a different new ideology, should have an immunizing effect. The negative implication is that you can't just gin something like that up for the purposes of memetic vaccination.

The ideological indoctrination (which reminds me of what I received myself in Communist China) is moving wholesale into K-12 education so people can't escape it by avoiding universities anyway

I feel like an important contextual detail is the total saturation effect in Communist China. In that case the indoctrination was pretty consistent because it was reinforced via propaganda through most communication channels, like news and entertainment. The left cannot even achieve that on the internet, so while I can agree that what is happening is indoctrination, my estimation of its effectiveness is very low. There is no mechanism to prevent access to contradictory information.

It has already taken over all of humanities and social sciences, and is now moving into STEM fields

Yes, but consider the causal mechanisms. It had to start somewhere; why didn't it fail and how did it expand elsewhere? Every new institution and department required someone getting in and then deciding to use the procedures and powers of that institution to bring in like-minded people, and discourage not-like-minded people. Where were the strong norms to prevent this?

Ours are straightforward: politics is the mindkiller; report your true concern; explain your reasoning. I put it to you that the reason we have not been swept up in this is that we are continuously, positively investing in something else, and that something else pays off. Circling back to the decay-of-ideas notion, this is very different from the kind of passive acknowledgement that passes for norms in large institutions, or the low-dimensional concerns of really tiny ones like knitting circles, to say nothing of places in the grip of disillusionment.

Comment by ryan_b on Book Review: Human Compatible · 2020-01-31T19:15:33.954Z · score: 7 (5 votes) · LW · GW

Excellent review. I am likely to buy and read the book.

In the extreme and weird scenarios the basic pitch is that when we separate the mechanism from the objective, bad things can happen, like hypno-drones telling people to buy paperclips. It feels like we should employ the same basic trick when evaluating the current things people are worried about, like deepfakes.

Deepfaked videos aren't a meaningful threat because video just isn't that important. But what if we could deepfake medicine? According to a WHO article from 2010, counterfeit medicine was worth ~$75B/yr then. That seems plenty big enough to merit throwing similar ML techniques at designing a pill with no active ingredient that nonetheless passes various kinds of basic tests for whether a medication is genuine.

The problem with deepfakes isn't that there are fake videos, it is that we are on track for a general and reliable method of fakery.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-31T16:37:15.919Z · score: 2 (1 votes) · LW · GW

The cost was bad before, too. We simply forgot, because it is our collective nature to forget. It is worth remembering that this community is at the vanguard of x-risk concern; no major cultural movement is likely to take into account new x-risks.

I agree that the centers of gravity are the more important feature; they are also what gives me confidence that it cannot get as bad as religion used to be. The most important factor is that because the center of gravity for social justice doesn't include local or federal government, they won't control law enforcement or the military. This means they won't be able to deploy systematic, institutional-grade violence against their enemies, which is what drove the worst damage done under Communism and religion, and was core to those movements retaining their position.

The second thing that gives me confidence it won't get as bad as religion or Communism is that the institutional changes involved in moving away from those things remain in place. I believe in both cases the theme of the changes can be reduced to "decentralization," although it is worth pointing out this looked very different between them; the mechanism is that their dominion failed when they weren't able to maintain control over all centers of power.

There are two other smaller points that color my perceptions. The first and simplest is that I have a general sense that culture changes faster now than it did previously; the whole start-grow-wither cycle seems to be accelerated on the strength of cheap and ubiquitous communication ability. This weighs against any sort of movement lasting even as long as decades, never mind centuries. The second is that when we look at the circumstances of religion and Communism coming to power, what I see is that it requires the decay or collapse of the previous order. Specific examples are that the collapse of the Western Roman Empire was a prerequisite for the dominion of the Catholic Church, and the decay of Russia under the Romanovs was a prerequisite for the Bolshevik Revolution.

I feel like the second point is already at work in the case of universities: a huge swath of the population no longer holds them in any esteem; even people who attend them are irritated about the cost and their failure to deliver on nominal promises; many of them are going bankrupt and closing their doors. In this model, social justice taking over the English department isn't because social justice has a Cunning Plan to Rule the World, but because the English department had long since abandoned any pretense of doing something productive or useful; they were simply working in little corners of their academic discipline, never mind the outside world. There was no real opposition because nobody cared; few people noticed; it didn't matter.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-30T16:45:46.229Z · score: 2 (1 votes) · LW · GW

The specific example I was thinking of was Dalton Trumbo. Now to be clear, he was in fact a Communist sympathizer and member of the Communist Party.

But the thing that brought him to the attention of the blacklisters and HUAC, as I understand the sequence of events, was his support of the 1945 Black Friday strike. In 1946 he was fingered as a Communist and blacklisted, and in 1947 summoned to HUAC because he was on the blacklist. Although reviewing the Wikipedia article I see that he reported Nazi sympathizers to the FBI in 1941 or 42; it is possible that this caused him to be prioritized for coming before Congress, though it isn't mentioned specifically.

Re: Communists have bad epistemics: in general the criticism is correct; the problem is that it isn't exclusive to communism. Political parties in general are doctrinal organizations that communicate by propaganda; communists are just more aggressive (worse) about it. I see a twofold problem with blacklisting them: one, it doesn't follow from the communists having terrible epistemics that the people blacklisting them have good epistemics (the blacklist is based on the beliefs of the MPAA or Congress); two, we have a strong meta-reason for tolerating bad epistemics, which is to ensure we allow for good epistemics. This is because the enforcement mechanisms are orthogonal to values, so the same thing that muzzles the Communists can also muzzle the Democrats and Republicans. I firmly expect all such mechanisms to be used by every group with access to them, so I want them kept to a minimum.

Re: intensity: I should have been clearer here; I apologize. The underlying intuition is that these movements are things which start, grow, peak, and then wither; the reason I was talking about the different centers of gravity, and whether the concern had a real basis or not, is that I think these are important variables in where the peak is. The more powerful the institutions where a movement is centered, and the more real the basis of its concern, the higher I expect its peak power to be. So I think we are at about Satanic Panic levels of intensity now, but I think the social justice movement has a higher potential peak, because "universities and the internet" is a more powerful base and prejudice is a more realistic concern. From the perspective of your concerns, I expect things to get worse.

Edit: I expect things to get worse before they get better, which is to say I expect we will muddle through this like we did the rest.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-30T01:37:03.978Z · score: 8 (4 votes) · LW · GW

I think you have lost the thread here. We are talking about different degrees of *bad epistemics* - nowhere did we suddenly shift gears into saying this is actually secretly good epistemics.

Communists were real and a threat, but it remained bad epistemics for Congress to form a committee whose function was to blacklist people from working in television for supporting labor unions.

Devil worshippers, by contrast, were not: there were *literally zero* groups of devil worshippers undertaking child sacrifice. Censorship and police investigations were being driven by utter fiction. This was a thing happening in the 1980s that was in no epistemic sense different from the Salem Witch Trials.

Racial/sexual/religious oppression are real and were formal government policy during my parents' childhood, but it remains bad epistemics to insist that every restaurant have 26 bathrooms to accommodate some list of sexual identities.

This then is our continuum of badness. Social justice is clearly north of Satanic Panic, but will also clearly never form a House Unawoken Activities Committee to blacklist Curtis Yarvin from working in tech.

Comment by ryan_b on Funding Long Shots · 2020-01-29T20:51:44.547Z · score: 2 (1 votes) · LW · GW

Can do. I thought about anchoring the idea on the much-more-familiar Y Combinator paradigm; do you think that would be helpful, or do you think it would be better to stick to a contained summary?

Comment by ryan_b on Funding Long Shots · 2020-01-29T20:08:45.744Z · score: 2 (1 votes) · LW · GW

Something like an ETF open to anyone with an investment account is the idea, but I have seen one of the authors pitch such a mechanism specifically for pharmaceuticals, and in that case they said the resulting instruments would be prime targets for index and retirement funds. So I infer the target is regular institutional investors, including investment banks and hedge funds, rather than individuals like you or me.

This is definitely a devil-in-the-details problem. Expanding on the specific pharmaceutical pitch I saw, for comparison with the cancer example: pharmaceuticals have also seen a lot of funding, but in fact there is less overall research being done; fewer drugs are even put through the approval process than previously. Most drug discoveries do not even start it. This is because only the ones that make it all the way through the process are profitable, so all of the development money has to go to the few best-chance drugs (as the company estimates them).

Their funding method solves this problem by exploiting the fact that FDA approval has phases: they put a bunch of drug patents together in a bundle (like mortgage-backed securities), and once that bundle goes through Phase 1 the survivors are re-bundled for Phase 2, and again for Phase 3. As a result, all the drugs get pushed as far as they can go. This also makes a certain restructuring of the industry more feasible. Currently a pharmaceutical company must own the whole pipeline to capture any value, and it is hard to succeed as a research lab that just produces drug discoveries, because any individual drug is so unlikely to pay out. Under the research-backed security system, such a lab could sell its discoveries into a bundle (or as a bundle), because there is a Phase 1 payout that is faster and more predictable. This is analogous to how mortgage companies do not collect mortgage payments; they originate the mortgage and then sell it on.
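
To make the bundling logic concrete, here is a minimal Monte Carlo sketch in Python. The per-phase pass rates, payoff, and bundle size are placeholder numbers I made up for illustration (calibrating them is the hard part of the real proposal), and the sketch shows only the diversification effect, not the phase-by-phase re-bundling:

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder parameters -- illustrative only, not from the actual proposal.
    P_PHASE = [0.6, 0.35, 0.6]   # hypothetical pass rates for Phases 1, 2, 3
    PAYOFF = 100.0               # value of one approved drug, arbitrary units
    N_DRUGS = 150                # number of drug patents per bundle
    N_SIMS = 10_000              # Monte Carlo trials

    def per_slot_returns(n_drugs):
        # Each drug independently survives all three phases or fails somewhere.
        p_approved = np.prod(P_PHASE)
        approvals = rng.binomial(n_drugs, p_approved, size=N_SIMS)
        return approvals * PAYOFF / n_drugs  # return per drug-sized investment

    single = per_slot_returns(1)
    bundle = per_slot_returns(N_DRUGS)
    print(f"single drug: mean {single.mean():.1f}, std {single.std():.1f}")
    print(f"bundle of {N_DRUGS}: mean {bundle.mean():.1f}, std {bundle.std():.1f}")

The expected return is identical, but the standard deviation collapses, which is what makes a bundle salable to conservative institutional investors when an individual drug is not.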

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-29T19:37:47.165Z · score: 2 (1 votes) · LW · GW

That's what the KKK is, along with a handful of other neo-Nazi and white supremacy groups. This article from Politico does a pretty good job of describing when they turned into a loose network instead of a bunch of isolated groups, in the wake of the Greensboro Massacre.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-28T18:00:01.459Z · score: 4 (2 votes) · LW · GW

Yes, I think it was comparably serious, although because the landscape was different the consequences were also different; in general I expect modern events to be higher variance.

The first similarity is the primacy of cultural products, in particular media and the arts. This was the period when there was a concerted effort to destroy the fantasy genre, and pressure was brought to bear to cancel TV shows and concerts deemed too occult.

The influence on academia was negligible as far as I can tell, but I suggest taking another look at government: among other things it seriously distorted a fraction of the justice system because it became common for the public to worry about whether there was a satanic cult present, which diverted resources into investigating things that weren't there (like cults) or focused attention on suspects for nonsensical reasons like whether they owned metal albums. This was also the same period that gave us the modern system of censorship, like film ratings, adult content warning stickers on CDs, etc. While censorship wasn't driven solely by the panic, the people swept up in it did work hard to capitalize on these mechanisms to further their aims.

I find it helps to view this kind of cultural event from the perspective of the institutions that make up its center of gravity. For example, one way to make sense of the differences between these movements is that the Red Scare was centered on the federal government and national media outlets; the Satanic Panic was centered on local churches and local government institutions like law enforcement, schools, and libraries; and the social justice movement is centered on universities and the internet.

That being said, your point about communism being a real thing that made sense to be concerned about is a good one: there never was a nation-spanning web of devil-worshipping cults that conducted ritual murder and sought to brainwash the youth of America. By contrast there is a nation-spanning network of terrorist organizations that target minorities/homosexuals/etc, so the social justice movement has more real concerns to work with. On that basis I expect its peak to be closer to Red Scare territory, though probably still short because I have a hard time seeing the federal government deciding it has an existential stake in the outcome.

Comment by ryan_b on Have epistemic conditions always been this bad? · 2020-01-27T15:39:42.048Z · score: 4 (2 votes) · LW · GW

Epistemic conditions have been this bad (or worse) for as long as we have had the bandwidth to think outside of physical necessity. Coordinated implementation of bad epistemology has reached this level before in the US, but usually doesn't.

The salient examples, and speaking to your does-this-exist-on-the-right question, are the Red Scares. There we see many of the same mechanisms at work, in particular the influence of political affiliation on employment.

You can also consider the Satanic Panic of the 1980s the same kind of problem, and probably a better match, because it too was bottom-up and lacked the coordinating government interest that is usually a factor in a Red Scare.

From where I sit it looks like we are at Satanic Panic levels of intensity, but well short of McCarthyism.

Comment by ryan_b on Terms & literature for purposely lossy communication · 2020-01-22T18:01:09.758Z · score: 4 (2 votes) · LW · GW

Are we thinking from the transmitter end, the receiver end, or doesn't it matter? The obvious answer seems to me to be filters, specifically a band-pass filter.
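
For concreteness, a minimal sketch of the receiver-end version in Python with scipy; the signal and band edges are arbitrary choices for illustration:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 1000.0                    # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)

    # A "message" with low-, mid-, and high-frequency components.
    sent = (np.sin(2 * np.pi * 5 * t)
            + np.sin(2 * np.pi * 50 * t)
            + np.sin(2 * np.pi * 200 * t))

    # Band-pass filter: keep roughly 30-80 Hz, discard the rest on purpose.
    sos = butter(4, [30, 80], btype="bandpass", fs=fs, output="sos")
    received = sosfiltfilt(sos, sent)  # only the 50 Hz component survives

The loss here is purposeful in exactly the sense of the question: everything outside the chosen band is discarded by design rather than by noise.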

Comment by ryan_b on Toward a New Technical Explanation of Technical Explanation · 2020-01-17T19:35:11.970Z · score: 9 (4 votes) · LW · GW

I do not understand Logical Induction, and I especially don't understand the relationship between it and updating on evidence. I feel like I keep viewing Bayes as a procedure separate from the agent, and then trying to slide LI into that same slot, and it fails because at least LI and probably Bayes are wrongly viewed that way.

But this post is what I leaned on to shift from an utter-darkness understanding of LI to a heavy-fog one, and re-reading it has been very useful in that regard. Since I am otherwise not a person who would be expected to understand it, I think this speaks very well of the post in general and of its importance to the conversation surrounding LI.

This also is a good example of the norm of multiple levels of explanation: in my lay opinion a good intellectual pipeline needs explanation stretching from intuition through formalism, and this is such a post on one of the most important developments here.

Comment by ryan_b on A voting theory primer for rationalists · 2020-01-16T23:02:48.959Z · score: 4 (2 votes) · LW · GW

Congratulations on finishing your doctorate! I'm very much looking forward to the next post in the sequence on multi-winner methods, and I'm especially interested in the metric you mention.

Comment by ryan_b on A voting theory primer for rationalists · 2020-01-16T22:59:24.442Z · score: 19 (6 votes) · LW · GW

I think this post should be included in the best posts of 2018 collection. It does an excellent job of balancing several desirable qualities: it is very well written, being both clear and entertaining; it is informative and thorough; and it is in the style of argument preferred on LessWrong, by which I mean it makes use of both theory and intuition in the explanation.

This post adds to the greater conversation by displaying rationality of the kind we are pursuing directed at a big societal problem. A specific example of what I mean, which distinguishes this post from an overview that any motivated poster might write, is the inclusion of Warren Smith's results; Smith is a mathematician from an unrelated field with no published work on the subject. But he did the work anyway, and it was good work, which the author himself expanded on, and now we get to benefit from it through this post. This puts me very much in mind of the fact that this community was primarily founded by an autodidact who was deeply influenced by a physicist writing about probability theory.

A word on one of our sacred taboos: in the beginning it was written that Politics is the Mindkiller, and so it was for years and years. I expect this is our most consistently and universally enforced taboo. Yet here we have a high-quality and very well received post about politics, and of the ~70 comments only one appears to have been mindkilled. This post has great value on the strength of being an example of how to address troubling territory successfully. I expect most readers didn't even consider that this was political territory.

Even though it is a theory primer, it manages to be practical and actionable. Observe how the very method of scoring posts for the review, quadratic voting, is one that is discussed in the post. Practical implications for the management of the community weigh heavily in my consideration of what should be considered important conversation within the community.

Carrying on from that point into its inverse, I note that this post introduced the topic to the community (though there are scattered older references to some of the things it contains in comments). Further, as far as I can tell the author wasn't a longtime community member before this post and the sequence that followed it. The reason this matters is that LessWrong can now attract and give traction to experts in fields outside of its original core areas of interest. This is not a signal of the quality of the post so much as the post being a signal about LessWrong, so there is a definite sense in which this weighs against its inclusion: the post showed up fully formed rather than being the output of our intellectual pipeline.

I would have liked to see (probably against the preferences of most of the community, and certainly against the signals the author would have received as a lurker) the areas where advocacy is happening as a specific section. I found them anyway, because they were contained in the disclosures, threaded through the discussion, and reachable by clicking the links, but I suspect that many readers would have missed them. This is especially true for readers less politically interested than I, which is most of them. The obvious reason is for interested people to be able to find advocacy more easily, which matters a lot to problems like this one. The meta-reason is that posts treading dangerous ground might benefit from directing people somewhere else for advocacy specifically, kind of like a communication-pressure release valve. It speaks to the quality of the post that this wasn't even an issue here, but for future posts on similar topics in a growing LessWrong I expect it to be.

Lastly, I want to observe that the follow-up posts in the sequence are also good, suggesting that this post was fertile ground for more discussion. In terms of additional follow-up: I would like to see this theory deployed at the level of intuition building, in a way similar to how we use markets, Prisoner's Dilemmas, and more recently Stag Hunts. I feel like it would be a good, human-achievable counterweight to things like utility functions and value handshakes in our conversation, and would make our discussions more actionable thereby.

Comment by ryan_b on Open & Welcome Thread - January 2020 · 2020-01-16T18:56:17.202Z · score: 4 (2 votes) · LW · GW

Reflecting on making morally good choices vs. morally bad ones, I noticed the thing I lean on the most is not evaluating the bad ones. This effectively means good choices pay up front in computational savings.

I'm not sure whether this counts as dark arts-ing myself; on the one hand it is clearly a case of motivated stopping. On the other hand I have a solid prior that there are many more wrong choices than right ones, which implies evaluating them fairly would be stupidly expensive; that in turn implies the don't-compute-evil rule is pretty efficient even if it were arbitrarily chosen.

Comment by ryan_b on Is backwards causation necessarily absurd? · 2020-01-14T22:15:50.410Z · score: 4 (3 votes) · LW · GW

I feel that questions like this have a hard time escaping confusion because the notion of linear time is so deeply associated with causality already.

Could you point me to the arguments about a high-entropy universe being expected to decrease in entropy?

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-09T18:23:31.658Z · score: 2 (1 votes) · LW · GW

I think I agree with your intuition, though I submit that size is really only a proxy here for levels of hierarchy. We expect more levels in a bigger organization, is all. I think this gets at the mechanisms for why the kinds of behaviors in Moral Mazes might appear. I have seen several of the Moral Mazes behaviors play out in the Army, which is one of the largest and most hierarchical organizations in existence.

I don't see why being consumed by your job would predict any of the rest of it; programmers, lawyers, and salesmen are notorious for spending all of their time on work, and those aren't management positions. Rather, I expect that all these behaviors exist on continua, and we should see more or less of them depending on how strongly people are responding to the incentives.

My intuition is that the results problem largely drives the description to which you are responding. Front-line people and front-line managers usually have something tangible by which to be measured, but once people enter the middle zone, not directly connected to top-line or bottom-line results, there's nothing left but signalling. So even a 9-5 guy who goes fishing is still likely to play politics, avoid rocking the boat, pass the blame downhill, and think that outcomes are determined by outside forces.

I would be shocked to my core if Moral Mazes behaviors rarely appeared under such conditions.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-08T15:29:48.984Z · score: 2 (1 votes) · LW · GW

One of the largest in the country. The core organization is fewer than a thousand people, but they have state affiliate organizations and, as of recently, international ones as well.

It is exceedingly top-heavy; I want to say it was approaching 5% executives, not counting their immediate staff.

The organization is functionally in free-fall now; they are hemorrhaging people and money. I expect if it were for-profit this is the part where they would go bankrupt. The transition from well-functioning to free-fall took ~5 years.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-07T22:11:44.887Z · score: 4 (2 votes) · LW · GW

When considering a barrier to exit, do they usually include the cost to go somewhere else? Quitting is free and easy, but getting another job elsewhere isn't, especially when considering opportunity costs.

Comment by ryan_b on What is Life in an Immoral Maze? · 2020-01-07T22:07:24.824Z · score: 4 (2 votes) · LW · GW

By contrast this does match my wife's experiences as a senior manager in a large non-profit. There were repeated and consistent messages about being expected to respond to emails and calls at all hours as you moved up the hierarchy; the performance metrics were fixed so everyone fit within a narrower band; and actual outcomes of programs did not matter, while suggesting that they did was punished (culminating in one fascinating episode where a VP seems to have made up an entire program, which delivered 0.001 of projected revenue and caused a revenue shortfall of some 25% for the whole organization, and who was not fired).

Comment by ryan_b on We need to revisit AI rewriting its source code · 2019-12-30T15:22:42.654Z · score: 3 (2 votes) · LW · GW

Self-modifying code has been possible but not practical for as long as we have had digital computers. Now it has toolchains and use cases, and in the near future tens to hundreds of people will do it as their day job.

The strong version of my claim is that I expect to see the same kinds of failure modes we are concerned with in AGI pushed down to the level of consumer-grade software, at least in huge applications like social networks and self-driving cars.

I think it is now simple and cheap enough for a single research group to do something like:

  • Write a formal specification
  • Which employs learning for some simple purpose
  • And employs self-modification on one or more levels

Which is to say, it feels like we have enough tooling to start doing "Hello World" grade self-modification tests that account for every level of the stack, in real systems.
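
As a gesture at what the self-modification leg alone might look like (no learning and no formal specification, which are the parts that need real tooling), here is a toy Python script of my own invention that rewrites its own source on every run:

    # hello_selfmod.py -- "Hello World" grade self-modification: the script
    # edits its own source so state persists across runs in the code itself.
    import re

    COUNTER = 0  # this literal is rewritten on every run

    def main():
        print(f"run number {COUNTER}")
        with open(__file__) as f:
            src = f.read()
        # Increment the COUNTER literal in our own source and write it back.
        new_src = re.sub(r"COUNTER = \d+", f"COUNTER = {COUNTER + 1}", src, count=1)
        with open(__file__, "w") as f:
            f.write(new_src)

    if __name__ == "__main__":
        main()

A test of the kind I mean would wrap something like this in a formal specification (say, "COUNTER increases by exactly one per run") and verify the modified program still satisfies it.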

Comment by ryan_b on Technical AGI safety research outside AI · 2019-12-26T13:25:11.345Z · score: 4 (2 votes) · LW · GW

I think systems engineering is a candidate for this, at least as far as the safety and meta sections go.

There is a program at MIT for expanding systems engineering to account for post-design variations in the environment, including specific reasoning about a broader notion of safety:

Systems Engineering Advancement Research Initiative

There was also a DARPA program for speeding up the delivery of new military vehicles, which seems to have the most direct applications to CAIS:

Systems Engineering and the META Program

Among other things, systems engineering has the virtue of making hardware an explicit feature of the model.

Comment by ryan_b on Propagating Facts into Aesthetics · 2019-12-19T16:15:50.166Z · score: 6 (3 votes) · LW · GW

1. I strongly endorse this line of thinking, and I want to see it continue to develop. I have a very strong expectation that we will see benefits really accrue from the rationality project when we have finally hit on everything important to humans. Specifically, taking the first step in each of probability|purpose|community|aesthetics|etc will be much more impactful than puissant mastery of only probability.

2.

I am in fact confused by this. My answer is "yes", and I don't know why. Deserts don't have much in the way of resources. Their stark beauty is more like the way a statue is beautiful than the way a forest is beautiful.

I think the key word here is "stark." The desert environment is elegant, because it has fewer things in it. We can see clearly the carving of the wind into the dunes, the sudden contrast where sand abuts stone, the endless gleaming of the salt. Consider for a moment the difference between looking at the forest and looking at the trees: when I zoom out to the forest level I notice the lay of the hills beneath the trees, the gradual change from one kind of tree to another, spot the gaps where rivers run or the dirt thins. Deserts smack you in the face with the forest-level view, because there isn't another one available.

3. I like the extension to disgust. My experience was also with deserts, but in this case my impression was that deserts were clean. I found myself out in the dunes of Kuwait, where there was an abundance of flies. I figured they would go for our water, or perhaps our protein bars. Then I saw they happily landed anywhere on the sand, and I thought: wait, what do flies eat?

So now I think of deserts as beautiful and filthy.

Comment by ryan_b on Is Causality in the Map or the Territory? · 2019-12-18T19:07:04.650Z · score: 4 (2 votes) · LW · GW

I have a hard time thinking of that example as a different causal structure. Rather I think of it as keeping the same causal structure, but abstracting most of it away until we reach the level of the knob; then we make the knob concrete. This creates an affordance.

Of course when I am in my house I am approaching it from the knob-end, so mostly I just assume some layers of hidden detail behind it.

Another way to say this is that I tend to view it as compressing causal structure.

Comment by ryan_b on Is Causality in the Map or the Territory? · 2019-12-18T17:48:36.294Z · score: 4 (2 votes) · LW · GW

This point might be useless, but it feels like we are substituting sub-maps for the territory here. This example looks to me like:

Circuits -> Map

Physics -> Sub-map

Reality -> Territory

I intuitively feel like a causal signature should show up in the sub-map of whichever level you are currently examining. I am tempted to go as far as saying the degree to which the sub-map allows causal inference is effectively a measure of how close the layers are on the ladder of abstraction. In my head this sounds something like "perfect causal inference implies the minimum coherent abstraction distance."

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-18T15:37:04.462Z · score: 2 (1 votes) · LW · GW

That's the one! My thanks, I was on the verge of madness!

Comment by ryan_b on Approval Extraction Advertised as Production · 2019-12-16T21:06:03.100Z · score: 6 (4 votes) · LW · GW

This part is a little baffling to me:

For better or worse that's never going to be more than a thought experiment. We could never stand it. How about that for counterintuitive? I can lay out what I know to be the right thing to do, and still not do it. I can make up all sorts of plausible justifications. It would hurt YC's brand (at least among the innumerate) if we invested in huge numbers of risky startups that flamed out. It might dilute the value of the alumni network. Perhaps most convincingly, it would be demoralizing for us to be up to our chins in failure all the time. But I know the real reason we're so conservative is that we just haven't assimilated the fact of 1000x variation in returns.
We'll probably never be able to bring ourselves to take risks proportionate to the returns in this business.

So I get why Y Combinator can't do this, but the "we" seems more inclusive here than just the YC team. I think this because in most other instances of not knowing how, or being unable, to do something, he takes the trouble to suggest a way someone else might be able to.

If people are prepared to invest a lot of money in high-frequency trading algorithms, which are famously opaque to the people providing the money, or in hedge funds that systematically lose to the market, why wouldn't someone be willing to invest in an even larger number of startups than Y Combinator does?

If we follow the logic of dumping arbitrary tests, it feels like it might be as direct as configuring a few reasoned rules, using a standardized equity offer with standardized paperwork, and then slowly tweaking the reasoned rules as the batch outcomes roll in.

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-11T15:53:31.287Z · score: 4 (2 votes) · LW · GW

The commenting guidelines allow users to set their own norms of communication for their own posts. This lets us experiment with different norms to see which work better, and also allows the LessWrong community to diversify into different subcommunities should there be interest. It says habryka's guidelines because that's who posted this post; if you go back through the other open threads, you will see other people posted many of them, with different commenting guidelines here and there. I think the posts that speak to this the most are:

[Meta] New moderation tools and moderation guidelines (by habryka)

Meta-tations on Moderation: Towards Public Archipelago (by Raemon)

Comment by ryan_b on Open & Welcome Thread - December 2019 · 2019-12-09T22:29:45.882Z · score: 4 (2 votes) · LW · GW

There's a post somewhere in the rationalsphere that I can't relocate for the life of me. Can anybody help?

The point of the post was communication. The example given was the difference between a lecture and a sermon: the author contrasted a professor talking to students in class, each of whom then goes home and does homework alone, with a preacher who delivers his sermon to the congregation with the expectation that they will break off into groups and discuss it among themselves.

I have a vague memory that there were graphics involved.

I have tried local search on LessWrong, site search of LessWrong, and browsing a few post histories that seemed like they might be the author based on a vague sense of aesthetic similarity. I was sure it was here, but now I fear it may have been elsewhere, or that it is hidden in some other kind of post.

Comment by ryan_b on The Lesson To Unlearn · 2019-12-08T20:00:35.042Z · score: 6 (3 votes) · LW · GW

I really liked this essay.

And as hacking bad tests shrinks in importance, education will evolve to stop training us to do it.

This, however, is entirely excessive optimism.

Comment by ryan_b on Conscious Proprioception -Awareness of the Body's Position, Motion, Alignment & Balance. · 2019-12-07T16:43:57.068Z · score: 2 (1 votes) · LW · GW

I get all the normal pain/temperature/pressure/friction feedback from my feet just fine. The only problem is knowing where they are in space without looking at them.

Comment by ryan_b on What are some non-purely-sampling ways to do deep RL? · 2019-12-06T16:54:29.260Z · score: 4 (2 votes) · LW · GW

I don't know what the procedure for this would be, but it occurs to me that if we can specify information about an environment via differential equations inside the neural network, then we can also compare that network's output to the output of one which doesn't have the same information.

In the name of learning more about how to interpret the models, we could try something like:

1) Construct an artificial environment which we can completely specify via a set of differential equations.

2) Run a neural network to learn that environment with every combination of those differential equations.

3) Compare all of these to several control cases of not providing any differential equations.

It seems like the way the control cases differ from each of the cases-with-structural-information should give us some information about how the network learns the environmental structure.
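As a minimal sketch of the comparison, shrunk down to a single decay equation, with a polynomial fit standing in for the unstructured network (every constant here is invented):

```python
import numpy as np

# Toy environment fully specified by a single differential equation:
# dx/dt = -k * x, i.e. exponential decay.
k_true = 0.7
t = np.linspace(0.0, 5.0, 200)
x = np.exp(-k_true * t) + np.random.normal(0.0, 0.01, t.shape)  # noisy samples

# Case with structural information: the learner knows the solution family
# x(t) = exp(-k t) and only has to estimate k (least squares in log-space).
k_hat = -np.polyfit(t, np.log(np.clip(x, 1e-6, None)), 1)[0]

# Control case: a generic function approximator with no knowledge of the
# equation (a degree-5 polynomial stands in for the unstructured network).
coeffs = np.polyfit(t, x, 5)

# Compare the two beyond the training window, where structure matters most.
t_far = np.linspace(5.0, 10.0, 100)
true_far = np.exp(-k_true * t_far)
print("structured error:", np.mean((np.exp(-k_hat * t_far) - true_far) ** 2))
print("control error:   ", np.mean((np.polyval(coeffs, t_far) - true_far) ** 2))
```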

Comment by ryan_b on Posture - Muscles, Assessment & The Body's Base-Line for Alignment. · 2019-12-06T16:09:58.381Z · score: 5 (3 votes) · LW · GW

I can vouch for sudden and significant gains in comfort and functionality from focusing on improving posture. The method I used was less thorough than the one here: I just used an exercise band and a few stretching exercises to improve my shoulder position. Comfort improved immediately, and my back became significantly less fragile within days.

Comment by ryan_b on Conscious Proprioception -Awareness of the Body's Position, Motion, Alignment & Balance. · 2019-12-06T16:05:56.081Z · score: 5 (2 votes) · LW · GW

I just discovered this sequence, and I am pleased and impressed. The subject of this post is something I have wanted to learn much more about recently, because I have a problem in this area.

Specifically, I never know where my feet are positioned. I can infer it, and I can confirm it, but I simply don't feel the position of my feet in relation to the rest of my body. Even when I am trying to focus on it.

By contrast, I do feel where my calves are in space. Most of the time when I need to place my feet precisely, I am actually just aiming my calves at that point and relying on the fact that my feet are on the end of my calves.

Comment by ryan_b on What are some non-purely-sampling ways to do deep RL? · 2019-12-05T17:32:03.487Z · score: 7 (4 votes) · LW · GW

This doesn't strike directly at the sampling question, but it is related to several of your ideas about incorporating the differentiable function: Neural Ordinary Differential Equations.

This is being exploited most heavily in the Julia community. The broader pitch is that they have formalized the relationship between differential equations and neural networks. This allows things like:

  • applying differential equation tricks to computing the outputs of neural networks
  • using neural networks to solve pieces of differential equations
  • using differential equations to specify the weighting of information

The last one is the most intriguing to me, mostly because it solves the problem of machine learning models having to start from scratch even when the environment's structure is already partly known. For example, you can provide the model with Maxwell's Equations and then it "knows" electromagnetism.
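As a rough Python sketch of the shape of the idea (this is my own toy construction, not the Julia libraries' API, and the "network" here is just random weights standing in for a trained one): the known physics supplies most of the dynamics, and the learned part only has to cover the residual.

```python
import numpy as np

# Known structure: a pendulum's dynamics, supplied to the model up front
# rather than learned from scratch (Maxwell's Equations would play this
# role in the electromagnetism example).
def known_physics(state, g=9.8, L=1.0):
    theta, omega = state
    return np.array([omega, -(g / L) * np.sin(theta)])

# Stand-in learned correction: a tiny random-weight "network" representing
# whatever residual dynamics the data actually demand.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 2)) * 0.1
W2 = rng.normal(size=(2, 8)) * 0.1

def learned_residual(state):
    return W2 @ np.tanh(W1 @ state)

def hybrid_dynamics(state):
    # The model only has to learn the gap between known physics and reality.
    return known_physics(state) + learned_residual(state)

# Simple Euler rollout of the hybrid model.
state, dt = np.array([0.5, 0.0]), 0.01
trajectory = [state]
for _ in range(500):
    state = state + dt * hybrid_dynamics(state)
    trajectory.append(state)
```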

There is a blog post about the paper and about using it with the DifferentialEquations.jl and Flux.jl libraries. There is also a good talk by Christopher Rackauckas about the approach.

The talk is mostly about using ML in the physical sciences, which seems to go by the name Scientific ML now.

Comment by ryan_b on Seeking Power is Instrumentally Convergent in MDPs · 2019-12-05T16:37:23.894Z · score: 20 (10 votes) · LW · GW

Strong upvote, this is amazing to me. On the post:

  • Another example of explaining the intuitions for formal results less formally. I strongly support this as a norm.
  • I found the graphics helpful, both in style and content.

Some thoughts on the results:

  • This strikes at the heart of AI risk. To my inexpert eyes, the lack of anything rigorous to build on or criticize as a mechanism for the flashiest concerns has been a big factor in how difficult it was, and is, to get engagement from the rest of the AI field. Even if the formalism fails due to a critical flaw, the ability to spot such a flaw is a big step forward.
  • The formalism of average attainable utility, and the explicit distinction from number of possibilities, provides powerful intuition even outside the field, in areas like warfare and business (a toy version of the computation is sketched after this list). I realize that isn't the goal, but I have always considered applicability outside the field an important test, because it would be deeply concerning if thinking about goal-directed behavior mysteriously failed when applied to the only extant things which pursue goals.
  • I find the result aesthetically pleasing. This is not important, but I thought I would mention it.
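This is not the post's formalism, just a toy numerical illustration of the average-attainable-utility intuition, with every detail of the MDP made up: sample random reward functions over a tiny deterministic MDP and average the optimal value at each state. States with more options should come out ahead.

```python
import numpy as np

# Toy deterministic MDP: state 0 is a hub with three successors; state 4 is
# a dead end that only loops to itself. Averaging attainable utility over
# random reward functions should favor the hub.
successors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0], 4: [4]}
n_states, gamma, n_samples = 5, 0.9, 2000

def optimal_values(reward):
    """Value iteration for a deterministic MDP with state-based rewards."""
    V = np.zeros(n_states)
    for _ in range(200):
        V = np.array([reward[s] + gamma * max(V[s2] for s2 in successors[s])
                      for s in range(n_states)])
    return V

# Average optimal value at each state over uniformly sampled reward functions,
# a crude stand-in for the post's notion of attainable utility.
totals = np.zeros(n_states)
for _ in range(n_samples):
    totals += optimal_values(np.random.uniform(0, 1, n_states))
print(totals / n_samples)  # the hub (state 0) should beat the dead end (state 4)
```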

Comment by ryan_b on Symbiotic Wars · 2019-12-04T21:02:40.847Z · score: 2 (1 votes) · LW · GW

I feel like this was rendered its own explicit meme in the form of The Game.

Comment by ryan_b on Is the rate of scientific progress slowing down? (by Tyler Cowen and Ben Southwood) · 2019-12-04T15:02:33.130Z · score: 11 (2 votes) · LW · GW

They ask whether TFP (total factor productivity) and related measures undervalue the tech sector. They conclude that they do not:

  • Countries with smaller tech sectors than the US see a similar productivity slowdown.
  • Even if undervalued, the tech sector is not big enough to explain the whole slowdown in the US.
  • The slowdown begins in 1973, predating the tech sector.