Comment by ryan_b on A Key Power of the President is to Coordinate the Execution of Existing Concrete Plans · 2019-07-16T17:48:43.401Z · score: 5 (3 votes) · LW · GW

This matches the pattern for at least a few high-profile American technology successes, e.g. Apollo and the Manhattan Project.

I note that Kalil did not speak to results per se, but rather considered the mark of success to be a lot of energy directed toward the goal, whatever it was. It is useful to think about all the things that are considered successes from the government's perspective while having lots of operational failures, e.g. recent wars or the ACA.

The argument for the difference in these cases is largely that exceptional leaders were chosen to lead them; after all, Europe also had a version of the Apollo program, and it failed, as did the Nazi bomb program. Not came in second, mind you - they failed completely in their aims. So who would be the Mueller or Groves for the AI safety program?

Comment by ryan_b on Integrity and accountability are core parts of rationality · 2019-07-16T17:06:32.204Z · score: 2 (1 votes) · LW · GW

I'm inclined to agree that we need to be wary of how we operationalize accountability to groups.

But if it reduces the complexity of the moral logic, it should be simpler to express and abide by that logic. And yet, I see huge amounts of analysis for all the permutations of the individual case, and virtually none for the group one.

I am deeply and generally confused by this, not just in the context of the post. Why not reason about the group first, and then extend that reasoning as needed to deal with individual cases? This causes me to expect that the group case is much more difficult, like it has a floor of complexity or something.

Comment by ryan_b on Integrity and accountability are core parts of rationality · 2019-07-15T21:04:36.497Z · score: 7 (3 votes) · LW · GW

I like this part best:

Choose carefully who, and how many people, you are accountable to

Importantly, these mostly won't be individuals. Instead, we mostly have groups, and the composition of those groups is not subject to our decisions, e.g. our own families, our spouses' families, the other employees at work, the other congregants at church, the other members of the club, etc.

I feel strongly that selecting and acting within groups is a badly neglected area of moral reflection.

Offering public comment in the Federal rulemaking process

2019-07-15T20:31:39.182Z · score: 18 (3 votes)
Comment by ryan_b on Open Thread July 2019 · 2019-07-11T15:41:01.909Z · score: 2 (1 votes) · LW · GW

It does seem to me like the kind of thing that would allow capitalizing strongly on something like a shared technical understanding. But that would be very difficult to pull off, because the overlap between people with shared technical understanding and people with advanced UI understanding is small.

If I were to say something like "DynamicLand can add UX to any layer of abstraction," how would that sound?

Comment by ryan_b on Open Thread July 2019 · 2019-07-11T13:53:03.946Z · score: 2 (1 votes) · LW · GW

Great work!

Are there any obvious tie-ins to the launch of the Alignment Forum? It seems plausible that the people who were here almost exclusively for the AI research posts might have migrated there.

Alternatively, if Alignment Forum is in fact counted, it might be that the upward trend reflects growth in that segment.

Comment by ryan_b on Open Thread July 2019 · 2019-07-10T19:14:55.557Z · score: 4 (2 votes) · LW · GW

Has anyone been to DynamicLand in Berkeley? If so, what did you think of it?

Comment by ryan_b on Open Thread July 2019 · 2019-07-10T14:44:37.673Z · score: 4 (2 votes) · LW · GW

Reading about Julia libraries for geometric algebra, I found Grassmann.jl. Using it effectively is going to require more knowledge of advanced algebra than I have, but while reading about it I noticed the author describing how it can achieve very high dimension counts: they claim ~4.6e18 dimensions.

That's a lotta dimensions!
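For a sense of where that number comes from: a full geometric algebra over an n-dimensional vector space has 2^n basis blades, and n = 62 reproduces the claimed figure almost exactly (my inference from the number; the author may frame it differently):

```python
# A geometric algebra over an n-dimensional vector space has 2**n basis
# blades, one for each subset of the n basis vectors. n = 62 matches the
# claimed ~4.6e18 figure.
n = 62
num_blades = 2**n
print(num_blades)            # 4611686018427387904
print(f"{num_blades:.2e}")   # 4.61e+18
```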

Comment by ryan_b on Open Thread July 2019 · 2019-07-09T18:20:41.833Z · score: 4 (2 votes) · LW · GW

The American National Institute of Standards and Technology has a draft plan for AI standards up. There is an announcement on their website; an announcement post on LessWrong; the plan itself on NIST's website; an outline of said plan on LessWrong.

Edit: changed style of links in response to the Please Give Your Links Speaking Names post.

Comment by ryan_b on NIST: draft plan for AI standards development · 2019-07-09T17:47:00.778Z · score: 2 (1 votes) · LW · GW

An outline is provided in the next post in the sequence; I began it as a comment in this one, but it grew too long to be easily readable and comments are worse for reference purposes.

Outline of NIST draft plan for AI standards

2019-07-09T17:30:45.721Z · score: 18 (4 votes)
Comment by ryan_b on [AN #59] How arguments for AI risk have changed over time · 2019-07-09T13:53:06.023Z · score: 5 (3 votes) · LW · GW

I partway agree with this: it is much harder to compensate with people than to determine what the problem is.

The reason I still see determining the principal-agent problem as hard with people is that we are highly inconsistent: a single AI is more consistent than a single person, and much more consistent than several people in succession (as is the case with any normal job).

My model for this is that determining what the problem is costs only slightly more for a person than for an AI, but you will have to repeat the process many times for a human position, probably about once per person who fills it.
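As a toy version of that model (all numbers invented purely for illustration):

```python
# Hypothetical unit costs; the point is the per-person multiplier, not
# the specific values.
cost_per_diagnosis_ai = 1.0
cost_per_diagnosis_human = 1.25  # "only slightly more for a person"
people_filling_position = 10     # re-diagnose roughly once per person

total_ai = cost_per_diagnosis_ai * 1
total_human = cost_per_diagnosis_human * people_filling_position
print(total_ai, total_human)     # 1.0 12.5
```

Even with a small per-diagnosis premium, the turnover multiplier dominates the total cost.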

Comment by ryan_b on [AN #59] How arguments for AI risk have changed over time · 2019-07-08T20:31:42.252Z · score: 2 (1 votes) · LW · GW

I have the opposite intuition regarding economies of scale and CAIS: I feel like it would hold, just to a lesser degree than to a unitary agent. The core of my intuition is that with different optimized AIs, it will be straightforward to determine exactly what the principal-agent problem consists of, and this can be compensated for. I would go as far as to say that such a function seems like a high-likelihood target for monitoring AIs within CAIS, in broadly the same way we can do resource optimization now.

I suspect the limits of both types are probably somewhere north of the current size of the planet's economy, though.

NIST: draft plan for AI standards development

2019-07-08T14:13:09.314Z · score: 16 (4 votes)
Comment by ryan_b on Open Thread July 2019 · 2019-07-03T17:45:09.551Z · score: 3 (2 votes) · LW · GW

Short mashup from two sources:

Nielsen proposes an informal model of mastery:

...their prior learning has given them better chunking abilities, and so situations most people would see as complex they see as simple, and they find it much easier to reason about.
...
In other words, having more chunks memorized in some domain is somewhat like an effective boost to a person's IQ in that domain.

where the chunks in question fit into the 7+/-2 of working memory. Relatedly, there is Alan Kay's quip:

"A change in perspective is worth 80 IQ points."

Which is to say, the new perspective provides a better way to chunk complex information. In retrospect this feels obvious, but beforehand my model of multiple perspectives was mostly a matter of eliminating blind spots. I'll have to integrate the contains-better-chunks possibility, which basically means that seeking out new perspectives is more valuable than I previously thought.

Comment by ryan_b on Open Thread July 2019 · 2019-07-03T15:19:43.856Z · score: 17 (7 votes) · LW · GW

I've been considering another run at Anki or similar, because I simultaneously found a new segment of a field to learn about and also because I am going to have to pivot my technical learning at work soon.

In his essay on the subject, Michael Nielsen makes frequent references to Feynman, and I am wondering about the utility of using Anki to remember complex problems in better detail. The motivation is the famous story about how Feynman always kept a bunch of his favorite open problems in mind, and whenever he encountered a new technique he would test it against each problem. In this way, allegedly, he made several breakthroughs regarded as brilliant.

It feels to me like the point might be more broad and fundamental than mathematical techniques; I suspect if I could better articulate and memorize an important problem, I could make it more a part of my perspective rather than something I periodically take a crack at. If I can accomplish this, I expect I will be more likely to even notice relevant information in the first place.

Open Thread July 2019

2019-07-03T15:07:40.991Z · score: 13 (3 votes)
Comment by ryan_b on What's the best explanation of intellectual generativity? · 2019-06-30T14:49:50.249Z · score: 4 (2 votes) · LW · GW

I considered the government question, because I think that as an institution they clearly can execute the long view, and I also agree that governments like China’s even seem to have the long view culturally ingrained. I don’t know what the answer is, but I am confident it has more than one component, because I can identify two from the American government research example.

The first is drawn from the history of PARC again: when J.C.R. Licklider was getting funding through ARPA (which built the community that was eventually transported almost wholesale to PARC), he broke from the traditional government funding model of short-term grants on a project basis by requesting long-term grants on a person basis. I think that if the old grant model had still been in effect, even if everything else stayed the same, they would have been far less productive.

The second is the War on Cancer, a political event that heavily impacted the way funding worked across research in the United States. Normally increased resources are considered a good thing, but in this case they came with a bunch of process changes attached: namely, researchers had to be able to explain how cancer treatment would benefit before they got the funds. I expect that there is always some probability of a large external event like this disrupting the incentives, even if they were in excellent shape before.

To summarize, I think it is very hard for any institution to definitely not do what they would normally do, and then even if they succeed an unexpected change may be forced upon them anyway.

Comment by ryan_b on Do children lose 'childlike curiosity?' Why? · 2019-06-30T14:15:48.946Z · score: 8 (5 votes) · LW · GW

My firm conclusion going into this is that the why game is about getting the adult to interact.

But because I love layered explanations, I have been mentally preparing for this phase for a long time. My daughter turned one recently, which means I only have a short while longer to wait.

She has no grasp of the trap into which she will toddle!

Comment by ryan_b on What's the best explanation of intellectual generativity? · 2019-06-30T02:27:29.724Z · score: 7 (3 votes) · LW · GW

As a matter of historical origin, Microsoft (and Apple) looked much more closely at Xerox PARC than at Bell Labs. As I understand the story, Microsoft Research was supposed to be a more tightly applied and business-oriented version of PARC, which itself was more applied than Bell Labs.

It’s worth considering that Microsoft Research was established shortly before the modern cost-cutting phase of the corporation, and the average corporate lifespan has been plummeting the entire time. AT&T lasted 100+ years; the average corporate lifespan is now expected to shrink to 10.

We’ll need a different organizational form to permit the long view, I think.

Systems Engineering Advancement Research Initiative

2019-06-28T17:57:54.606Z · score: 23 (7 votes)
Comment by ryan_b on What is the evidence for productivity benefits of weightlifting? · 2019-06-27T15:42:27.392Z · score: 5 (2 votes) · LW · GW

This isn't delving deeper into the studies raised in the comment; I just wanted to emphasize the virtuous-cycle nature of a couple of these things. I got most of this from Stronger By Science, but I'll link the papers directly, followed by the post analyzing them.

Related: Meditations on Momentum

First, sleep. The study linked in the answer claims resistance training improves subjective sleep quality, but the relationship works the other way as well:

Insufficient sleep undermines dietary efforts to reduce adiposity

This study found two groups on the same diet while sedentary lost the same amount of weight, but for the group on 5.5 hrs of sleep 50% of that weight loss was muscle, whereas for a group on 8.5 hrs of sleep only 20% was. This has significant implications for trying to increase or even maintain strength. More commentary here.

Sleep and muscle recovery: endocrinological and molecular basis for a new and promising hypothesis.

This review found that lack of sleep interferes with hormone balance, namely by doing things like increasing cortisol and decreasing testosterone and growth hormone. More commentary here.

In a nutshell, resistance exercise improves sleep improves resistance exercise, and so on.

Second, comparison with aerobic exercise. A couple of the studies above compared aerobic exercise with anaerobic exercise, and found the effect of one or the other was stronger (on cognitive function, for example).

As it happens, these two are also mutually reinforcing:

Resistance Training to Momentary Muscular Failure Improves Cardiovascular Fitness in Humans: A Review of Acute Physiological Responses and Chronic Physiological Adaptations

concludes that resistance training improves cardiovascular fitness; improved cardiovascular fitness results in better recovery and the ability to sustain higher resistance training loads. This effect depends on the specific exercises, though:

Concurrent training: a meta-analysis examining interference of aerobic and resistance exercises.

found that running interferes with strength gains, but cycling doesn't, for example. More commentary on these points here.

In a second nutshell, strength improves cardio improves strength, and so on.

So while there is independent evidence of resistance training, and aerobic training, and sleep all providing benefits relevant to productivity, they also form a mutually reinforcing system where each seems to help the other two directly.

I don't have any information on how these things scale; for example, it feels ridiculous to think twice as much resistance training leads to twice as good sleep. I also don't have any indication of how much substitution there might be - for example, is the sleep benefit just because you are active at all? Is there a difference between 5/week of weights vs. 5/week of cycling vs. 5/week alternating? What about 30-minute workouts vs. 60-minute workouts?

That being said my gut feeling is that a combined system would be much more resilient in its benefits. Anecdotally, weights 3x a week and HIIT or yoga or something for the other 4x a week has yielded improved sleep, more stable energy throughout the day, and ~40lbs weight loss in tandem with calorie control over 7 months. This has considerably improved my productivity; I gained a couple of hours of useful time outside of work, and all the hours are higher quality now.


Comment by ryan_b on Selection vs Control · 2019-06-26T17:32:48.289Z · score: 4 (2 votes) · LW · GW
It seems possible that one could invent a measure of "control power"

I think the likelihood of this comment being helpful is small, but I know of two sort-of-adjacent efforts, both of which took place under the auspices of DARPA's META program, a program for improving systems engineering.

The first is a complexity metric, which they define as unexpected behavior of any kind and attempt to quantify in terms of information entropy. The part about the development of the metric begins on page 4.
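I can't speak to the paper's exact formula, but as a rough sketch of the general idea - scoring a system's behavior as more "complex" the less predictable it is, via Shannon entropy:

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Entropy (in bits) of the empirical distribution of observed behaviors.

    A perfectly predictable system scores 0; behavior spread evenly across
    many outcomes scores high."""
    counts = Counter(observations)
    total = len(observations)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# A system that always does the same thing is minimally "complex":
print(shannon_entropy(["nominal"] * 8))           # 0.0
# Behavior split evenly across four outcomes is maximally so (for 4 outcomes):
print(shannon_entropy(["a", "b", "c", "d"] * 2))  # 2.0
```

This is my own illustrative toy, not the metric from the paper; the paper's definition starts on page 4.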

The second is an adaptability metric. This one is considerably fussier; they eventually had to produce several metrics because of tradeoffs, and then tried to produce a valuation method so you could compare the metrics properly. It relies on several specific techniques which I have no knowledge of, and is much more heavily anchored in current real applications, but the crux of the effort seems to align with the "choices don't change later choices" section above.

This post feels to me like the same type of conversation that would have been helpful in the work of these two papers, so I mention them on the off-chance the relationship works both ways.


Comment by ryan_b on A case for strategy research: what it is and why we need more of it · 2019-06-21T18:57:26.060Z · score: 2 (1 votes) · LW · GW
With strategic clarity we would know what to do. Specifically, we would know...
- who the relevant actors are
- what actions are available to us
- how the future might develop from those actions
- what good sequences of actions (plans) are
- how to best prioritize plans
- that we have not missed any important considerations

Out of curiosity, has your research so far uncovered any example domains which have strategic clarity? Or do you have an intuition for domains that do?

Comment by ryan_b on A case for strategy research: what it is and why we need more of it · 2019-06-21T14:43:18.964Z · score: 2 (1 votes) · LW · GW

What do you expect the signal of successful private strategy research to be?

There don't seem to be many outliers around, which strongly suggests either the research isn't being done or it is failing to yield results.

Comment by ryan_b on [deleted post] 2019-06-10T14:58:48.509Z

This might be more fundamental than I initially thought. If people have a preference for shared information, this provides an upward pressure on communication in general - there is now a reward for telling people things just because, and also a reward for listening to people tell you things. This is entirely separate from any instrumental value the shared information may have.

Comment by ryan_b on [deleted post] 2019-06-10T14:57:01.956Z

I suspect that a lot of traditional things do double-duty, being instrumentally valuable and encouraging mutual information. Material example: the decorations on otherwise mundane items, like weapons or cauldrons.

It occurs to me that any given practice will probably be selected for resilience more than for efficiency - efficiency is a threshold that must be met, but beyond that, maximizing the likelihood that the minimum will be met seems more likely to propagate.

If people have an intrinsic desire to share information, then it seems likely that a practice for which sharing information is more common is more likely to persist. Hence the prevalence of so many multi-step processes; every additional step is another opportunity to share information (teach, tell where resources are, etc).


Comment by ryan_b on Steelmanning Divination · 2019-06-06T18:04:42.924Z · score: 9 (7 votes) · LW · GW

Weather. In a nutshell, bad weather makes battles harder. This is because walking a long way while wet sucks, it damages supplies and equipment, it increases the likelihood of disease, and there are intermittent dangers like flooding that are hard to predict in unfamiliar territory. In general, people know how to manage these things where they live, so the worse the weather, the bigger an advantage for the defender (or at least whoever marched less).

Comment by ryan_b on Steelmanning Divination · 2019-06-06T15:08:53.617Z · score: 7 (5 votes) · LW · GW

I once did a thought experiment where I tried to figure out how divination practices might directly help decisions.

The Druids were legendarily learned. What information we have says they were responsible for maintaining the oral history of their people, and for management of sacrifices, and reading of omens and the weather. They were reputed to have advanced knowledge of plants and animals.

I wondered about divination before battle. Naturally, birds aren't really random - I expect a lot of people have noticed things like how they suddenly go quiet when a wet gust of wind blows through immediately prior to a storm. I expect if I were a Druid, I would have spent a lot of time watching birds, and know more things like this.

As a keeper of the oral history, I'll know the reported outcome of previous battles and some important details about them (the weather, say).

Things like how many warriors my tribe has I can see with my eyes, and whether the other guys have more or less can be had by scouting like usual.

There's also the matter of appeasing the gods, and offering them sacrifice. Now there's a story from Greek myth about how early on the gods were tricked into accepting the fatty, gristly parts of the animal as the best parts, on the grounds that the smoke from burning those was better able to reach Olympus and nourish them. This agrees with casual observation: when I ruin a steak on the grill it smokes a lot more than when I ruin chicken on the grill. Smoke is a pretty good indicator of things like wind direction and strength, and further when it rises it can do things like show you where the wind changes above your level (like in smokestacks where it suddenly gets sheared off at a certain height).

So bird behavior provides information about barometric pressure, and the smoke from a sacrifice provides information about the movement of air pretty high up, and the oral history provides a sort of prior for similar circumstances.

So, if I were a Druid and knew what Druids know, I could make better than average predictions about the outcome of a battle if I made a burnt offering and read omens from birds.

Comment by ryan_b on Book Review: The Secret Of Our Success · 2019-06-06T14:17:25.542Z · score: 3 (2 votes) · LW · GW

I've been thinking about this problem from the other direction lately, particularly regarding divination practices; namely, now that we are habituated to the idea that there is a rational explanation for everything, how can we expect rituals - even useful ones - to survive over time?

My naive answer is that we cannot, and everything will slowly fall away as it comes into focus and the lack of causal mechanism is revealed.

On the other hand, people love stage magic. Almost everyone knows it is a trick beforehand, yet we are entertained. Mostly the surprisal and the possibility of belief are sufficient, but it seems to me that people are often more entertained when they spot the trick for themselves. The only real letdown, in my view, is when a trick is immediately explained and proves deceptively simple.

This makes me suspect it might be possible to either reclaim or design new rituals, which would require a balancing act between having the utility explanation available and keeping the explanation separate from the experience of the ritual.

Also, the image for the bottom link is busted.

Comment by ryan_b on [deleted post] 2019-05-31T17:32:14.191Z

From Introducing: Asabiyah, which itself summarizes one concept from Ibn Khaldun's Muqaddimah:

Thus Khaldun notes:
“The consequences of common descent, though natural, still are something imaginary. The real thing to bring about the feeling of close contact is social intercourse, friendly association, long familiarity, and the companionship that results from growing up together having the same wet nurse, and sharing the other circumstances of life and death. If close contact is established in such a manner, the result will be affection and cooperation.”

And later:

In Ibn Khaldun’s thought, conquest itself seems to be the driving force behind the consolidation of two asabiyah into one. Once a weaker tribal group is defeated, its leaders removed and men of valor killed, pacified, or subsumed under a new organization so utterly that the ‘tit for tat’ vengeance schemes so common to nomadic society (which Ibn Khaldun sees as the root cause of war) are no longer possible, then their asabiyah can be swallowed up in the larger group’s. What is key here is that the other groups – after their initial defeat – are not coerced into having the same feeling of asabiyah as the main group. Asabiyah that must be coerced is not asabiyah at all (this is a theme Ibn Khaldun touches on often and we will return to it in more detail when we talk about why asabiyah declines in civilized states). Instead, those who have been allowed to join the conquering host slowly start to feel its asabiyah be subsumed as the two groups “enter into close contact,” sharing the same trials, foods, circumstances, and becoming acquainted with the others' customs, but just as importantly, sharing the same set of incentives. Once the losers are forced together with the winners, defeat for the main clan is defeat for all; glory for the main clan is glory for all; booty gained by the main clan’s conquests becomes booty to be shared with all. Once people from a subordinate group begin to feel like the rise and fall of their own fortunes is inextricably linked to the fate of the group that overpowered them then they become willing to sacrifice and die for the sake of this group, for it has become their group.

Shared experiences create a lot of mutual information, and enough of it builds the fearsome bonds which populate our legends and histories.

Comment by ryan_b on [deleted post] 2019-05-31T17:16:10.785Z

More on semi-altruism: Darwin's Unfinished Symphony argues that what sets humans apart is our ability to reliably teach (review here).

If we consistently teach well, it seems to me it must consistently yield rewards for the teachers. As with the argument for storytellers in the example above, this may be in the form of benefits from social connections. But, because we know time discounting is a thing and it takes time to teach people something, it seems likely to me that there is an intrinsic reward for teaching. A reasonable way to describe teaching would be mutualizing information.

Comment by ryan_b on [deleted post] 2019-05-31T15:59:25.961Z

Seemingly altruistic actions such as creating art and music may qualify. From Sexual Selection Through Mate Choice Does Not Explain the Evolution of Art and Music:

Miller, however, criticizes the idea that “art conveys cultural values and socializes the young,” writing that,
"The view that art conveys cultural values and socializes the young seems plausible at first glance. It could be called the propaganda theory of art. The trouble with propaganda is that it is usually produced only by large institutions that can pay propagandists. In small prehistoric bands, who would have any incentive to spend the time and energy producing group propaganda? It would be an altruistic act in the technical biological sense: a behavior with high costs to the individual and diffuse benefits to the group. Such altruism is not usually favored by evolution."
The answer to Miller’s question—who produces the propaganda?—is quite clear in the ethnographic data: the old men do.

Altruism is not usually favored by evolution, but if the same mechanism by which we prefer to spread our genetic information recognizes other kinds of information, then it would not feel altruistic from the inside. Rather, making songs and having other people sing them would be its own reward in precisely the same way having children does.

Comment by ryan_b on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-30T00:02:26.145Z · score: 4 (2 votes) · LW · GW

That’s an interesting point.

Unfortunately, there isn’t enough data to make good performance comparisons to my knowledge. Although I would definitely watch a medium-production-value documentary that does the work with what is available.

Comment by ryan_b on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T16:39:41.580Z · score: 4 (2 votes) · LW · GW

I note that the most advanced aircraft the United States (officially) has is the F-22. It was designed to take advantage of a few future technologies, like advanced materials and electronics.

It was also designed ~30 years ago. That’s three decades of Moore’s Law, materials science advancements and the proliferation of metamaterials, and so on. So when accounting for the possibility of it being a real craft in the air, I ask myself questions like “what could plausibly fit into that 30 years worth of advancements?”

I also note that one of the areas where we have seen considerable improvement is in compressing the design-build pipeline, which is to say we make prototypes faster than we used to. I therefore expect that the gap between what is possible on paper and what can actually fly is shorter than it was in the 1980s.

Comment by ryan_b on What should rationalists think about the recent claims that air force pilots observed UFOs? · 2019-05-28T16:26:41.158Z · score: 3 (2 votes) · LW · GW

as they would not risk to crash it by flying between two airplanes in tight formation.

This is incorrect. They shouldn’t risk crashing it by flying between two airplanes in tight formation, but you’ve got to consider that people who work in top secret programs are mostly just regular people who don’t talk about their work. There is plenty of room in top secret military projects for all the same jackassery that happens in public projects, like incompetence, pranks, deliberately dangerous tests, etc. Arguably more so, since they are sheltered from scrutiny.

And this ignores more prosaic explanations, like an autopilot glitch. AlphaGo made weird decisions because it was misreading the apparent score; a pilot AI would certainly encounter similar problems at some point.

Comment by ryan_b on Open Thread May 2019 · 2019-05-24T14:41:20.729Z · score: 5 (2 votes) · LW · GW

My impression agrees. I am inclined to say that Chapman seems to be targeting the kind of rationality criticized in Seeing Like a State, save that In the Cells of the Eggplant is about how unsatisfying the perspective is rather than the damage implementation does.

Comment by ryan_b on And the AI would have got away with it too, if... · 2019-05-24T14:27:52.704Z · score: 2 (1 votes) · LW · GW

That could be. I had assumed that when referring to the literature he was including some number of real-world examples against which those models are measured, like the number of lawsuits over breach of contract versus the estimated number of total contracts, or something. Reviewing the piece I realize he didn't specify that, but I note that I would be surprised if the literature didn't include anything of the sort and also that it would be unusual for him to neglect current real examples.

Comment by ryan_b on And the AI would have got away with it too, if... · 2019-05-23T20:02:34.504Z · score: 6 (4 votes) · LW · GW

While I have no reason to suspect Hanson's summary of the agency literature is inaccurate, I feel like he really focused on the question of "should we expect AI agents on average to be dangerous" and concluded the answer was no, based on human and business agents.

This doesn't seem to address Christiano's true concern, which I would phrase more like "what is the likelihood at least one powerful AI turns dangerous because of principal agent problems."

One way to square this might be to take some of Hanson's own suggestions to imagine a comparison case. For example, if we look at the way real businesses have failed as agents in different cases, and then assume the business is made of Ems instead, does that make our problem worse or better?

My expectation is that it would mostly just make the whole category higher-variance; the successes will be more successful, but the failures will do more damage. If everything else about the system stays the same, this seems like a straight increase in catastrophic risk.

Comment by ryan_b on A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities · 2019-05-23T19:45:45.391Z · score: 2 (1 votes) · LW · GW
we do not yet have working nuclear fusion reactors

Heh - this works exceptionally well because we do have reactors that fuse things, reliably enough that at least a few dozen private citizens have built them in their garages. This suggests getting what we want out of them should be pretty easy, yet the break-even threshold is tough to crack.

Comment by ryan_b on [deleted post] 2019-05-22T15:46:39.832Z

Strategy is the search for victory.

Suppose we take search completely literally.

1. We have a current environment.

2. We need to find as many future branches in which we are victorious as possible.

3. We try to preserve as many victorious branches as possible, and to screen off as many defeat branches as possible.

4. Once victory becomes likely, we can begin to discriminate between better or worse victories.

The strategy itself is essentially the rule we use for making interventions: the causal theory of success.
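Taking "search" literally, the steps above can be sketched as a toy branch enumeration. Everything here is hypothetical (the two interventions "A"/"B", the chance outcomes "x"/"y", and the scoring rule standing in for a causal theory of success); the point is only the shape of steps 2-3: enumerate future branches, then pick the intervention that preserves the most victorious ones.

```python
from itertools import product

def leaves(depth):
    """Enumerate all branches as tuples of (choice, outcome) pairs."""
    return list(product(product("AB", "xy"), repeat=depth))

def is_victory(branch):
    # Hypothetical scoring rule standing in for the causal theory of
    # success: victory iff we chose "A" at least twice along the branch.
    return sum(1 for choice, _ in branch if choice == "A") >= 2

def best_first_move(depth=3):
    """Pick the first intervention that preserves the most victorious branches."""
    scores = {}
    for first in "AB":
        branches = [b for b in leaves(depth) if b[0][0] == first]
        scores[first] = sum(is_victory(b) for b in branches)
    return max(scores, key=scores.get), scores
```

With this rule, choosing "A" first keeps 24 of 32 branches victorious versus 8 for "B", so the search selects "A". Real strategy differs in that the tree is unenumerable, which is exactly why the intervention rule (the strategy) has to substitute for exhaustive search.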

We need a reference class of victories, on which to base our theory of success.

We need a reference class of defeats, for the purposes of murphyjitsu. This seems unusually important, because the closer we get to total mastery of the environment, the more avoiding errors becomes the dominant concern. I think this is largely captured by things like doctrine and training, but we haven't done a good job of capturing it in terms of decisions.

Related to: Macroscopic Predictions. This is kind of like using Gibbs Rule at the level of "interventions we can make" to predict victory.

Comment by ryan_b on Getting Out of the Filter Bubble Outside Your Filter Bubble · 2019-05-21T15:01:43.626Z · score: 5 (3 votes) · LW · GW

I experimented with manipulating my filter bubble, and while noticing that I am in a filter bubble is a useful trick for avoiding unconscious bias, I don't find the manipulation itself useful when deliberately thinking about a specific thing.

Consider bubble-hopping: I find a better way is to deliberately spend time inside each of the relevant filter bubbles instead. The central benefit is that this naturally pulls your perspective above the fray. If we entertain the notion that the goal is to get the problem solved, we'll need to understand what the conversation is among the different groups anyway.

As a practical matter, my impression from doing this periodically is that most of the time the conversations are completely different, to the extent that it is hard to recognize they are talking about the same thing.

Comment by ryan_b on [deleted post] 2019-05-20T21:08:57.038Z

Mutual Information

I suspect humans have an arbitrary preference for mutual information. This pattern is well-matched by preference for kin, and also any other in-group; for shared language; for shared experiences.

Actions as mutual information generators

It occurs to me that doing things together generates a tremendous amount of mutual information. Same place, same time, same event, same sensory stimuli, same social context. In the case of things like rituals, we can sacrifice being in the same place while keeping all the other information the same; in the case of traditions like a rite of passage, we can sacrifice being in the same time while keeping all the other information the same, which allows for mutual information with people who are dead and people yet to be born, at a high level of resolution.

I further suspect that the intensity of an experience weighs more; how exactly isn't clear, because an intense event doesn't necessarily contain more information than a boring one. I wonder if it is because intense experiences leave more vivid memories, and so after a period of time they retain more information relative to other experiences from that time or before.
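The "shared experience as mutual information generator" idea can be made concrete with a small empirical estimate. This sketch uses hypothetical data and the standard plug-in estimator of I(X;Y) in bits: two people attending the same events have records that fully determine each other, while people apart accumulate observations that vary independently.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)               # joint frequencies
    px = Counter(x for x, _ in pairs)  # marginal of X
    py = Counter(y for _, y in pairs)  # marginal of Y
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Two people at the same events record the same observations:
together = [("rain", "rain")] * 5 + [("sun", "sun")] * 5
# Two people apart record observations that vary independently:
apart = [("rain", "rain"), ("rain", "sun"), ("sun", "rain"), ("sun", "sun")]

mutual_information(together)  # 1.0 bit: each record determines the other
mutual_information(apart)     # 0.0 bits: no shared information
```

On this picture, rituals and rites of passage are ways of forcing many people's samples into the `together` pattern without requiring co-presence in space or time.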

Comment by ryan_b on Interpretations of "probability" · 2019-05-20T14:05:27.487Z · score: 2 (1 votes) · LW · GW

It's just a different way of arriving at the same conclusions. The whole project is developing game-theoretic proofs for results in probability and finance.

The pitch is, rather than using a Dutch Book argument as a separate singular argument, they make those intuitions central as a mechanism of proof for all of probability (or at least the core of it, thus far).

Comment by ryan_b on What makes a scientific fact 'ripe for discovery'? · 2019-05-17T16:12:40.917Z · score: 8 (4 votes) · LW · GW

Multiple angles of attack

Richard Hamming had this to say about important problems, in his talk "You and Your Research":

Let me warn you, "important problem" must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack.

One reasonable attack makes the problem approachable. If there are multiple reasonable attacks, it becomes more likely that at least one succeeds; further, they can exchange information about the problem, making each attempt more likely to succeed on its own. If we switch to considering thoroughly understood problems, we usually have multiple good solutions for them (like multiple proofs in mathematics, or detection by different kinds of experimental apparatus in science).

So if I am going to rank open problems by the likelihood they will be solved, my prior is a list ordered by the number of ways we know of to attack each problem. Without any other information, a problem with two independent reasonable attacks is roughly twice as likely to be solved as a problem with only one, at least while the success probability of each attack is small.
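This prior can be formalized under the (strong) assumption that attacks are independent with known success probabilities. A hypothetical sketch, which also shows why "twice as likely" only holds while per-attack probabilities are small:

```python
def p_solved(attack_probs):
    """Probability that at least one independent attack succeeds."""
    p_all_fail = 1.0
    for p in attack_probs:
        p_all_fail *= 1.0 - p
    return 1.0 - p_all_fail

p_solved([0.05])        # 0.05
p_solved([0.05, 0.05])  # 0.0975: just under double, for long-shot attacks
p_solved([0.60, 0.60])  # 0.84: well short of double once attacks are promising
```

The ranking-by-count prior falls out of this in the long-shot regime, where 1 - (1 - p)^k ≈ kp.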

Then we could consider updating the weights of different kinds of attack. For example, if one requires very expensive equipment, or very rare expertise, I might adjust it down. On the other hand, if there are two different attacks whose relationship is very well understood, we might stop treating them as independent: we would factor in the ease of sharing information between them, but also that they will probably succeed or fail together.

We can also consider the problem itself, but I feel like looking at the reference classes for a problem largely boils down to a way to search for reasonable attacks, where any attack which worked for a problem in the reference class is considered a candidate for the problem at hand. But as I think of it, I'm not sure it is common to do a systematic evaluation in this way, so highlighting it as a specific method for finding attacks seems worthwhile.

Comment by ryan_b on Towards optimal play as Villager in a mixed game · 2019-05-17T15:03:00.600Z · score: 7 (3 votes) · LW · GW
Then, slowly expand. Optimize for lasting longer than empires at the expense of power. Maybe you incrementally gain illegible power and eventually get to win on the global scale. I think this would work fine if you don't have important time-sensitive goals on the global scale.

I have a stub post about this in drafts, but the sources are directly relevant to this section and talk about underlying mechanisms, so I'll produce it here:

~~~

The blog post is: Francisco Franco, Robust Action, and the Power of Non-Commitment

The paper is: Robust Action and the Rise of the Medici

  • Accumulation of power, and longevity in power, are largely a matter of keeping options open
  • In order to keep options as open as possible, commit to as few explicit goals as possible
  • This conflicts with our goal-orientation
  • Sacrifice longevity in exchange for explicit goal achievement: be expendable
  • Longevity is therefore only a condition of accumulation - survive long enough to be able to strike, and then strike
  • Explicit goal achievement does not inherently conflict with robust action or multivocality, but probably does put even more onus on calculating the goal well beforehand

~~~

Robust action and multivocality are sociological terms. In a nutshell, the former means 'actions which are very difficult to interfere with' and the latter means 'communication which can be interpreted different ways by different audiences'. Also, it's a pretty good paper in its own right.

Comment by ryan_b on Towards optimal play as Villager in a mixed game · 2019-05-17T14:36:17.895Z · score: 2 (1 votes) · LW · GW
Actual kings thought otherwise strongly enough to have others who claimed to be king of their realm killed if at all possible.

My model for this: a rival claim to the throne says nothing about the claimant, but it sends signals about the current king which he needs to quash.

1. There was always a population of people who are opposed to the king, or think they could get a better deal from a different one. This makes any other person who claims to be king a Schelling Point for the current king's enemies, foreign and domestic. Consider Mary, Queen of Scots and Elizabeth, where Mary garnered support from domestic Catholics, and also the French.

2. In light of 1, making a public claim to the throne implicitly claims that the current monarch is too weak to hold the throne. I expect this to be a problem because the weaker the monarch seems, the safer gambling on a new one seems, and so more people who are purely opportunistic are willing to throw in their lot with the monarch's enemies.

Comment by ryan_b on Financial engineering for funding drug research · 2019-05-17T13:39:12.776Z · score: 3 (2 votes) · LW · GW

Yes - this fund requires pharmaceutical companies to generate the IP in the first place, and also to sell the successful drugs. A new pharmaceutical company would face the same risk profile as existing pharmaceutical companies; I would be very surprised if one could suddenly start investing according to the opposite of the pattern the others use.

On the other hand, I don't see any reason why an existing pharmaceutical conglomerate could not employ this strategy or a similar one. They already have a huge amount of IP lying around undeveloped (it is from them that a fund like this would acquire it), and other huge companies like General Electric have deliberately pursued financial engineering as a corporate strategy. It failed in that case, but here we are only talking about supplementing the core strategy rather than replacing it.

Comment by ryan_b on Eight Books To Read · 2019-05-16T14:35:47.971Z · score: 5 (3 votes) · LW · GW

What were the books on Syria you recommended to your friend?

Comment by ryan_b on Which scientific discovery was most ahead of its time? · 2019-05-16T14:24:49.848Z · score: 2 (1 votes) · LW · GW

For clarification, when you say "ahead of its time" do you mean the biggest jump forward from what was known at the time, or the furthest ahead of when we were actually able to benefit from it?

I ask because if you shift from theories and equations to things like inventions or processes, it is totally routine to encounter things that were actually invented 50-100 years ago but that never saw the light of day because the materials were impossibly expensive or the market wasn't around yet.

Comment by ryan_b on How To Use Bureaucracies · 2019-05-16T14:12:05.087Z · score: 2 (1 votes) · LW · GW

It is worth mentioning here that the Achaemenid and Sassanian Empires both were in the habit of relying on local systems already in place, which were incorporated via the Satrapy system.

So when the Persian emperor sent someone to check on a whole province, they would probably access the Egyptian or Babylonian or Assyrian scribal record system at work locally.

Comment by ryan_b on [deleted post] 2019-05-14T16:32:45.422Z

While reading Meaningness, more of a description of the kinds of things I want in thinking became clear.

  • I want an honor culture, generalized to include questions of fact.
  • It seems to me that anything which we could interact with but do not describe is completely constrained to System 1 thinking. In order to reason explicitly, which is to say use System 2, we need an explicit description.
  • The things we do a really crappy job reasoning about, but which are of the highest importance, are people and groups. By "people" I mean specifically ourselves: we need to have a description of ourselves. We also need to have a description of groups. With these two descriptions we can reason explicitly about our membership in a group: whether to join, whether to leave, how to succeed within one, how to improve it, etc.
  • I strongly suspect the "center of gravity" for civilization is found within groups.
  • Specifically, the kind of group I am concerned with is the unit of action.
Comment by ryan_b on Interpretations of "probability" · 2019-05-13T15:27:20.039Z · score: 2 (3 votes) · LW · GW

There's a Q&A with one of the authors here which explains a little about the purpose of the approach, mainly talks about the new book.

Comment by ryan_b on Interpretations of "probability" · 2019-05-13T14:55:17.407Z · score: 3 (2 votes) · LW · GW

You might be interested in some work by Glenn Shafer and Vladimir Vovk about replacing measure theory with a game-theoretic approach. They have a website here, and I wrote a lay review of their first book on the subject here.

I have also just now discovered that a new book is due out in May, which presumably captures the last 18 years or so of research on the subject.

This isn't really a direct response to your post, except insofar as I feel broadly the same way about the Kolmogorov axioms as you do about interpreting their application to phenomena, and this is another way of getting at the same intuitions.

Comment by ryan_b on Ed Boyden on the State of Science · 2019-05-13T14:33:37.129Z · score: 24 (6 votes) · LW · GW

Regarding all the examples of "serendipitous" discoveries that later proved so valuable, I want to propose an analogy.

Consider consumer surplus. This is when the price you would be willing to pay is higher than the price you actually pay, so you incur less cost for the same benefit. While I have not seen it described this way explicitly, I put it to you that when the benefit later turns out to be greater than you originally expected, that is also consumer surplus.

With that idea in mind, turn now to the grant-issuing process and consider how grants are awarded; in particular, things like peer review and grant requirements seem driven more by avoiding wasted money than by acquiring knowledge. It feels to me like the current system is designed in a way that, as a consequence, reduces the scientific equivalent of consumer surplus to zero.

Since I am otherwise confident that scientific research doesn't resemble a market very closely, I further expect this does not reflect having reached equilibrium. Therefore this lack of surplus seems strictly bad.

Financial engineering for funding drug research

2019-05-10T18:46:03.029Z · score: 11 (5 votes)

Open Thread May 2019

2019-05-01T15:43:23.982Z · score: 11 (4 votes)

StrongerByScience: a rational strength training website

2019-04-17T18:12:47.481Z · score: 15 (7 votes)

Machine Pastoralism

2019-04-03T16:04:02.450Z · score: 12 (7 votes)

Open Thread March 2019

2019-03-07T18:26:02.976Z · score: 10 (4 votes)

Open Thread February 2019

2019-02-07T18:00:45.772Z · score: 20 (7 votes)

Towards equilibria-breaking methods

2019-01-29T16:19:57.564Z · score: 23 (7 votes)

How could shares in a megaproject return value to shareholders?

2019-01-18T18:36:34.916Z · score: 18 (4 votes)

Buy shares in a megaproject

2019-01-16T16:18:50.177Z · score: 15 (6 votes)

Megaproject management

2019-01-11T17:08:37.308Z · score: 57 (21 votes)

Towards no-math, graphical instructions for prediction markets

2019-01-04T16:39:58.479Z · score: 30 (13 votes)

Strategy is the Deconfusion of Action

2019-01-02T20:56:28.124Z · score: 75 (24 votes)

Systems Engineering and the META Program

2018-12-20T20:19:25.819Z · score: 31 (11 votes)

Is cognitive load a factor in community decline?

2018-12-07T15:45:20.605Z · score: 20 (7 votes)

Genetically Modified Humans Born (Allegedly)

2018-11-28T16:14:05.477Z · score: 30 (9 votes)

Real-time hiring with prediction markets

2018-11-09T22:10:18.576Z · score: 19 (5 votes)

Update the best textbooks on every subject list

2018-11-08T20:54:35.300Z · score: 78 (28 votes)

An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics

2018-10-30T18:36:14.159Z · score: 30 (6 votes)

Why don’t we treat geniuses like professional athletes?

2018-10-11T15:37:33.688Z · score: 20 (16 votes)

Thinkerly: Grammarly for writing good thoughts

2018-10-11T14:57:04.571Z · score: 6 (6 votes)

Simple Metaphor About Compressed Sensing

2018-07-17T15:47:17.909Z · score: 8 (7 votes)

Book Review: Why Honor Matters

2018-06-25T20:53:48.671Z · score: 31 (13 votes)

Does anyone use advanced media projects?

2018-06-20T23:33:45.405Z · score: 45 (14 votes)

An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes

2018-04-19T17:30:39.893Z · score: 38 (9 votes)

Death in Groups II

2018-04-13T18:12:30.427Z · score: 32 (7 votes)

Death in Groups

2018-04-05T00:45:24.990Z · score: 47 (18 votes)

Ancient Social Patterns: Comitatus

2018-03-05T18:28:35.765Z · score: 20 (7 votes)

Book Review - Probability and Finance: It's Only a Game!

2018-01-23T18:52:23.602Z · score: 18 (9 votes)

Conversational Presentation of Why Automation is Different This Time

2018-01-17T22:11:32.083Z · score: 70 (29 votes)

Arbitrary Math Questions

2017-11-21T01:18:47.430Z · score: 8 (4 votes)

Set, Game, Match

2017-11-09T23:06:53.672Z · score: 5 (2 votes)

Reading Papers in Undergrad

2017-11-09T19:24:13.044Z · score: 42 (14 votes)