Comment by ryan_b on [deleted post] 2019-05-20T21:08:57.038Z

Mutual Information

I suspect humans have an arbitrary preference for mutual information. This pattern is well-matched by preference for kin, and also any other in-group; for shared language; for shared experiences.

Actions as mutual information generators

It occurs to me that doing things together generates a tremendous amount of mutual information. Same place, same time, same event, same sensory stimuli, same social context. In the case of things like rituals, we can sacrifice being in the same place while keeping all the other information the same; in the case of traditions like a rite of passage, we can sacrifice being in the same time while keeping all the other information the same, which allows for mutual information, at a high level of resolution, with people who are dead and people who are yet to be.
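The claim that shared experience generates mutual information can be sketched numerically. This is just a toy illustration with made-up joint distributions, not anything formal: two people who attended the same event have highly correlated memories of it, so the mutual information between their reports is high; two strangers' memories are independent, so it is zero.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a joint probability table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two people at the same event: memories of it are highly correlated.
shared = {("rain", "rain"): 0.45, ("rain", "sun"): 0.05,
          ("sun", "rain"): 0.05, ("sun", "sun"): 0.45}
# Two strangers: memories are independent.
strangers = {("rain", "rain"): 0.25, ("rain", "sun"): 0.25,
             ("sun", "rain"): 0.25, ("sun", "sun"): 0.25}

print(round(mutual_information(shared), 3))     # high (about half a bit)
print(round(mutual_information(strangers), 3))  # zero
```

Here "rain"/"sun" stands in for any detail of the event; the point is only that correlation in experience shows up directly as bits of mutual information.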

I further suspect that the intensity of an experience weighs more heavily; how exactly isn't clear, because an intense event doesn't necessarily contain more information than a boring one. I wonder if it is because intense experiences leave more vivid memories, and so after a period of time they will have more information relative to other experiences from that time or before.

Comment by ryan_b on Interpretations of "probability" · 2019-05-20T14:05:27.487Z · score: 2 (1 votes) · LW · GW

It's just a different way of arriving at the same conclusions. The whole project is developing game-theoretic proofs for results in probability and finance.

The pitch is, rather than using a Dutch Book argument as a separate singular argument, they make those intuitions central as a mechanism of proof for all of probability (or at least the core of it, thus far).

Comment by ryan_b on What makes a scientific fact 'ripe for discovery'? · 2019-05-17T16:12:40.917Z · score: 8 (4 votes) · LW · GW

Multiple angles of attack

Richard Hamming had this to say about important problems, in his talk "You and Your Research":

Let me warn you, "important problem" must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack.

One reasonable attack makes the problem approachable. If there are multiple reasonable attacks, it becomes more likely that at least one succeeds; further, the attackers can exchange information about the problem, making each attempt more likely to succeed on its own. If we switch to considering thoroughly understood problems, we usually have multiple good solutions for them (like multiple proofs in mathematics, or detection by different kinds of experimental apparatus in science).

So if I am going to rank open problems by the likelihood they will be solved, my prior is a list ordered by the number of ways we know of to attack each problem. Without any other information, a problem with two reasonable attacks is roughly twice as likely to be solved as a problem with only one.
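To put a toy model behind the "twice as likely" intuition (my own sketch, not from the comment): if each reasonable attack independently succeeds with probability p, the chance the problem gets solved is 1 − (1 − p)^n. For small p this is approximately n·p, so two attacks really are about twice as good as one; for stronger attacks the advantage shrinks, but the ordering by attack count survives.

```python
def p_solved(p_each, n_attacks):
    """Chance that at least one of n independent attacks succeeds."""
    return 1 - (1 - p_each) ** n_attacks

for p in (0.05, 0.30, 0.60):
    one, two = p_solved(p, 1), p_solved(p, 2)
    print(f"p={p}: one attack {one:.3f}, two attacks {two:.3f}, ratio {two / one:.2f}")
```

The ratio starts near 2 for weak attacks and falls toward 1 as each attack gets stronger, which is consistent with using attack count as a prior ordering rather than an exact multiplier.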

Then we could consider updating the weights of different kinds of attack. For example, if one requires very expensive equipment, or very rare expertise, I might adjust it down. On the other hand, if there are two different attacks but the relationship between those approaches is very well understood, then we might not treat them as independent anymore, factoring in both the ease of sharing information between them and the likelihood that they will succeed or fail together.

We can also consider the problem itself, but I feel like looking at the reference classes for a problem largely boils down to a way of searching for reasonable attacks, where any attack which worked for a problem in the reference class is considered a candidate for the problem at hand. That said, I'm not sure it is common to do this evaluation systematically, so highlighting it as a specific method for finding attacks seems worthwhile.

Comment by ryan_b on Towards optimal play as Villager in a mixed game · 2019-05-17T15:03:00.600Z · score: 7 (3 votes) · LW · GW
Then, slowly expand. Optimize for lasting longer than empires at the expense of power. Maybe you incrementally gain illegible power and eventually get to win on the global scale. I think this would work fine if you don't have important time-sensitive goals on the global scale.

I have a stub post about this in drafts, but the sources are directly relevant to this section and talk about underlying mechanisms, so I'll reproduce it here:


The blog post is: Francisco Franco, Robust Action, and the Power of Non-Commitment

The paper is: Robust Action and the Rise of the Medici

  • Accumulation of power, and longevity in power, are largely a matter of keeping options open
  • In order to keep options as open as possible, commit to as few explicit goals as possible
  • This conflicts with our goal-orientation
  • Sacrifice longevity in exchange for explicit goal achievement: be expendable
  • Longevity is therefore only a condition of accumulation - survive long enough to be able to strike, and then strike
  • Explicit goal achievement does not inherently conflict with robust action or multivocality, but probably does put even more onus on calculating the goal well beforehand


Robust action and multivocality are sociological terms. In a nutshell, the former means 'actions which are very difficult to interfere with' and the latter means 'communication which can be interpreted different ways by different audiences'. Also, it's a pretty good paper in its own right.

Comment by ryan_b on Towards optimal play as Villager in a mixed game · 2019-05-17T14:36:17.895Z · score: 2 (1 votes) · LW · GW
Actual kings thought otherwise strongly enough to have others who claimed to be king of their realm killed if at all possible.

My model for this: rival claims to the throne say nothing about the claimant, but send signals about the current king which he needs to quash.

1. There was always a population of people who were opposed to the king, or who thought they could get a better deal from a different one. This makes any other person who claims to be king a Schelling Point for the current king's enemies, foreign and domestic. Consider Mary, Queen of Scots and Elizabeth, where Mary garnered support from domestic Catholics, and also the French.

2. In light of 1, making a public claim to the throne implicitly claims that the current monarch is too weak to hold the throne. I expect this to be a problem because the weaker the monarch seems, the safer gambling on a new one seems, and so more people who are purely opportunistic are willing to throw in their lot with the monarch's enemies.

Comment by ryan_b on Financial engineering for funding drug research · 2019-05-17T13:39:12.776Z · score: 3 (2 votes) · LW · GW

Yes - this fund requires pharmaceutical companies to generate the IP in the first place, and also to sell the successful drugs. A new pharmaceutical company will face the same risk profile as existing pharmaceutical companies; I would be very surprised if one could suddenly start investing according to the opposite pattern the others use.

On the other hand, I don't see any reason why an existing pharmaceutical conglomerate could not employ this strategy or a similar one. They already have a huge amount of IP lying around undeveloped (it is from them that a fund like this would acquire it), and other huge companies like General Electric have deliberately explored financial engineering as a corporate strategy. It failed in that case, but here we are just talking about supplementing the core strategy rather than replacing it.

Comment by ryan_b on Eight Books To Read · 2019-05-16T14:35:47.971Z · score: 5 (3 votes) · LW · GW

What were the books on Syria you recommended to your friend?

Comment by ryan_b on Which scientific discovery was most ahead of its time? · 2019-05-16T14:24:49.848Z · score: 2 (1 votes) · LW · GW

For clarification, when you say "ahead of its time" do you mean the biggest jump forward from what was known at the time, or the furthest in advance of when we would have expected to benefit from it?

I ask because if you shift from theories and equations to things like inventions or processes, it is totally routine to encounter things that were actually invented 50-100 years ago but that never saw the light of day because the materials were impossibly expensive or the market wasn't around yet.

Comment by ryan_b on How To Use Bureaucracies · 2019-05-16T14:12:05.087Z · score: 2 (1 votes) · LW · GW

It is worth mentioning here that the Achaemenid and Sassanian Empires both were in the habit of relying on local systems already in place, which were incorporated via the Satrapy system.

So when the Persian emperor sent someone to check on a whole province, they would probably access the Egyptian or Babylonian or Assyrian scribal record system at work locally.

Comment by ryan_b on [deleted post] 2019-05-14T16:32:45.422Z

While reading Meaningness, more of a description of the kinds of things I want in thinking became clear.

  • I want an honor culture, generalized to include questions of fact.
  • It seems to me that anything which we could interact with but do not describe is completely constrained to System 1 thinking. In order to reason explicitly, which is to say use System 2, we need an explicit description.
  • The things we do a really crappy job reasoning about, but which are of the highest importance, are people and groups. By "people" I mean specifically ourselves: we need to have a description of ourselves. We also need to have a description of groups. With these two descriptions we can reason explicitly about our membership in a group: whether to join, whether to leave, how to succeed within one, how to improve it, etc.
  • I strongly suspect the "center of gravity" for civilization is found within groups.
  • Specifically, the kind of group I am concerned with is the unit of action.
Comment by ryan_b on Interpretations of "probability" · 2019-05-13T15:27:20.039Z · score: 2 (3 votes) · LW · GW

There's a Q&A with one of the authors here which explains a little about the purpose of the approach, though it mainly talks about the new book.

Comment by ryan_b on Interpretations of "probability" · 2019-05-13T14:55:17.407Z · score: 3 (2 votes) · LW · GW

You might be interested in some work by Glenn Shafer and Vladimir Vovk about replacing measure theory with a game-theoretic approach. They have a website here, and I wrote a lay review of their first book on the subject here.

I have also just now discovered that a new book is due out in May, which presumably captures the last 18 years or so of research on the subject.

This isn't really a direct response to your post, except insofar as I feel broadly the same way about the Kolmogorov axioms as you do about interpreting their application to phenomena, and this is another way of getting at the same intuitions.

Comment by ryan_b on Ed Boyden on the State of Science · 2019-05-13T14:33:37.129Z · score: 24 (6 votes) · LW · GW

Regarding all the examples of "serendipitous" discoveries that later proved so valuable, I want to propose an analogy.

Consider consumer surplus. This is when the price you would be willing to pay is higher than the price that you do pay, so you incur less cost for the same benefit. While I have not read this description of it explicitly, I put it to you that when it later transpires that the benefit was greater than you originally expected, this is also consumer surplus.

With that idea in mind, turn now to the grant-issuing process and consider how grants are awarded; in particular, things like peer review and grant requirements seem driven more by avoiding wasted money than by acquiring knowledge. It feels to me like the current system is designed in a way that, as a consequence, reduces the scientific equivalent of consumer surplus to zero.

Since I am otherwise confident that scientific research doesn't resemble a market very closely, I further expect this does not reflect having reached equilibrium. Therefore this lack of surplus seems strictly bad.

Comment by ryan_b on Tales From the American Medical System · 2019-05-11T20:53:40.760Z · score: 2 (1 votes) · LW · GW

I agree. That being said, because this would currently be a criminal undertaking, I feel like the real differentiator is willingness and ability to manage the legal risk. I suspect this weighs against a diverse black market economy springing up that is advanced in multiple sectors simultaneously.

I also think it would be better than the current combination of principal-agent problems and conflicts of interest. At least the website depends on its reputation for success, contra all the players in the legit system.

Comment by ryan_b on Financial engineering for funding drug research · 2019-05-10T19:42:54.884Z · score: 2 (1 votes) · LW · GW

Andrew Lo's website is here.

Roger Stein's website is here.

It seems that most of the code and data for the simulations is available from one of these two places, but I haven't verified any of it myself. In the original paper they explicitly use very simplified models to demonstrate the concept, which makes sense to me because there are a lot of different layers to the problem; but I feel like the details of how the risk is simulated are very important to the results going forward, and I don't have a clue what the conventions are for that. Or if the conventions are good.

That being said, it seems like a good case for the application of risk distribution in general. A huge number of problems seem like they would be reduced if we followed the dictum of insuring inevitable costs and securitizing necessary assets.

Financial engineering for funding drug research

2019-05-10T18:46:03.029Z · score: 10 (4 votes)
Comment by ryan_b on Tales From the American Medical System · 2019-05-10T14:31:02.248Z · score: 4 (4 votes) · LW · GW

1. I really won't mourn when the machines wipe this profession out.

2. I am pretty sure that a Silk Road for medical care would already be a profitable project. My question is: how easy would it be to get basic symptoms|diagnosis|prescription AI software written up without it being easily traced?

Comment by ryan_b on Tales From the American Medical System · 2019-05-10T14:23:55.449Z · score: 2 (4 votes) · LW · GW
What was your goal of the conversation with the nurse in the first place? You need a doctor's prescription for the insulin, so shouldn't you have aimed for talking with the doctor? And if that was your goal, what purpose did it serve to tighten the screws on the nurse? You should have acted like a model patient and calmly requested you speak with the doctor

The part you are looking for is here:

My friend explains again that he does not have the time to see any doctor the next day, nor can one find a doctor on one day’s notice in reasonable fashion. And that he has already made an appointment, and needs insulin to live. And would like to speak with the doctor.

Then after being ignored several times, social media is brought up:

My friend says that if the doctor does not give him access to life saving medicine and instead leaves him to die, he will post about it on social media.
The nurse now decides, for the first time in the conversation, that my friend should perhaps talk to his doctor.

The nurse was not in any sense the victim until the doctor threw them under the bus. They refused a refill of a prescription, and also refused access to the person with the authority to grant one.

Comment by ryan_b on What Botswana Can Teach Us About Political Stability · 2019-05-09T21:38:33.125Z · score: 14 (7 votes) · LW · GW

I like this one the best of your posts so far, chiefly because it was anchored with an example, and simultaneously illustrated the important variable (succession of power) and how it is absent from the usual analysis.

The frame it presents is also an extremely satisfying explanation of American success. In this view the Constitution is primarily a blunt instrument for preventing problems which afflicted European powers up to that time: a Presidency to stop wars of succession; a separation of Church and State to prevent wars of religion; a democracy to provide an outlet for the public other than rebellion. And indeed the US has had much less of these problems than any comparable power, allowing maximum capitalization on the available land and natural resources.

Comment by ryan_b on Towards optimal play as Villager in a mixed game · 2019-05-09T17:35:16.125Z · score: 12 (3 votes) · LW · GW

Not directly relevant to this post, but following through the what social skills feel like from the inside link:

He responded that she should really be persuaded by what he'd already done – that she should do things his way rather than the other way around because he has better social skills.

I'm not fully aware of the context, but in every context I do have experience of, this is considered a hideous faux pas. I strongly expect that this fellow has the worse social skills of the two of them.

This is an example of a rule that is pretty consistent across domains: if someone feels the need to state their status, we should infer that is not their real status. Few songs about being #1 are written by the people who are actually #1, any man who must say 'I am the King' is no true king, etc. Consider how ridiculous it would sound if in programming someone were to say "This is not a bug and we shouldn't change it because I am a better programmer."

Comment by ryan_b on [deleted post] 2019-05-06T15:03:06.339Z

Some related posts:

Comment by ryan_b on Authoritarian Empiricism · 2019-05-03T20:47:01.459Z · score: 11 (3 votes) · LW · GW
Which, like, where am I gonna find hard data on the incidence of coups via unreasonably high levels of coordination seizing control of the state's info-processing apparatus (thus causing the records to misreport reality as a side effect)?

You may be interested in the National Salvation Front. With only a little simplification: when Ceausescu was overthrown, six people showed up to the state-controlled media center and announced the creation of the NSF and that they were now in charge. Four days later they were the government.

Comment by ryan_b on [deleted post] 2019-05-03T18:09:46.853Z

Prophecy is a narrative prediction

In the monotheist traditions, prophets are given specific instructions from God which they must disseminate. In smaller, local traditions, prophets are skilled in interpreting signs from the gods, such as dreams, the flights of birds, or the reading of entrails/ashes/bones.

Consider instead the use of narrative in structuring and communicating a prediction (at various levels of detail). Even in the case of good predictions using state-of-the-art methods, people often ignore them or fail to account for them properly. The question becomes how to get people who are not intimate with the prediction methods, or who do not trust the authority, to act as though it were true.

See also: the self-fulfilling prophecy, where the contents of the prophecy drive people to act in such a way as to cause it to come true. This is the baseline model for start-ups: a good enough story about success causes people to expect more success, which is the mechanism by which start-ups are judged to succeed. By contrast, a popular trick in ancient myths is a bad prophecy which people cause by trying to avoid it, e.g. telling the king one of his grandchildren will supplant him, so the king tries to have them all drowned, but one is smuggled away into the lands of the king's enemies and returns at the head of a large army 18 years later. Opposite the first example would sit something like propaganda distributed by invading armies, whereby they claim opposing them is hopeless and try to persuade enough people of this that the actual defense is compromised.

It seems like the appropriate cycle would be: 1. state-of-the-art prediction methods to estimate the future; 2. an analysis of how the story might affect the prediction under various scales of adoption, namely if everyone acting as though it were true changed the outcome; 3. build a story according to the desired outcome in light of 2.

Divination is pseudo-RNG | a gut-check.

Comment by ryan_b on Has government or industry had greater past success in maintaining really powerful technological secrets? · 2019-05-02T15:14:30.485Z · score: 5 (3 votes) · LW · GW

Following on assumption #1, it feels worth addressing the question of incentives. For example, a corporation only has a positive incentive to invest in security relative to the profits it expects from the secret or secrets in question. Further, corporations always have an incentive to cut costs, and security is a notorious target because its relationship to profits is poorly understood, and that relationship is how the judgments are made.

By contrast, the government tends to have security protocols first and then decide what to protect with them later. The United States is notorious for classifying huge amounts of even mundane information; security protocols last for longer than the average company exists (~25 years is common, frequently longer). There is a trend to overprotect secrets, regardless of power.

Because these incentives are different, it might be worthwhile to break the question up along a few different criteria. For example, suppose we compared government protection of important military secrets with something of similar import to a corporation, like trade secrets of their core product. Alternatively, we could break the question down by method and ask how each group has secured their technological secrets, and then compare between methods. This wouldn't address the question of "is the risk greater if it is Google or DARPA who cracks AGI first" but it would help us more accurately assess such risks and perhaps help with safety-related recommendations.

Comment by ryan_b on Has government or industry had greater past success in maintaining really powerful technological secrets? · 2019-05-02T14:34:22.418Z · score: 12 (3 votes) · LW · GW

My naive expectation is that government has been more successful. This expectation rests on three things:

1. Industry is only interested in commercially relevant secrets. Government is interested in commercially relevant secrets, and also a variety of non-commercial secrets like those with military applications. Therefore a government is more likely to try to keep any random technological secret than a company will, because many of them are not commercially viable.

2. Historically, powerful technological secrets have been developed explicitly under government authority. In the United States example, these have been government laboratories or heavily regulated companies who yield the secrets to the government and don't share them with the industry. Comparatively few such secrets are developed under the auspices of the private sector alone (unless they have been much more successful in keeping them secret than I expect).

3. Governments usually have capabilities that industries lack, like powers of investigation and violence. They can and do routinely use these capabilities in the protection of secrets. It is rare for a commercial entity to have anything like that capacity, and even if they do there is no presumption of legitimacy the way there is for governments.

So the government is interested in more kinds of powerful technological secrets, and originates most of them, while having and using additional tools for keeping them secret.

Comment by ryan_b on The Amish, and Strategic Norms around Technology · 2019-05-01T19:16:06.449Z · score: 6 (3 votes) · LW · GW
But in the context of "strategic norms of technology", it need not be. The important bit is to add friction to transportation and communication.

Returning to this post, I feel like this is the real core of the value here.

The add-friction strategy looks like it would work on arbitrary combinations of choices. Given choices A and B, if we rationally prefer A but frequently choose B, we add friction to B until we begin to choose A consistently. By adding different amounts of friction, we can arbitrarily sort A, B, C, etc.
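As a toy illustration of the sorting claim (my own sketch, with made-up payoff numbers, not anything from the post): model choice as a noisy logit over payoff minus friction. Adding enough friction to the tempting option flips which one gets chosen most of the time.

```python
import math

def choice_probs(payoffs, frictions, temp=1.0):
    """Toy logit-choice model: friction subtracts from an option's payoff."""
    utils = {k: payoffs[k] - frictions.get(k, 0) for k in payoffs}
    z = sum(math.exp(u / temp) for u in utils.values())
    return {k: math.exp(u / temp) / z for k, u in utils.items()}

payoffs = {"A": 1.0, "B": 1.5}           # B is more tempting by default
print(choice_probs(payoffs, {}))          # B chosen most often
print(choice_probs(payoffs, {"B": 2.0}))  # friction on B flips the ordering
```

The temperature parameter stands in for how noisy our choices are; the same mechanism handles any number of options, which is the "arbitrarily sort A, B, C" part.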

This is basically Beware Trivial Inconveniences applied systematically to achieve the desired norms.

Comment by ryan_b on Open Thread May 2019 · 2019-05-01T18:11:50.444Z · score: 2 (1 votes) · LW · GW

A mechanism for comments in particular would be valuable - I feel like this is where the best criticism lies, and it is very rarely captured in a larger post.

Also it seems difficult to trace the genesis of an idea; comments are a common inflection point.

Comment by ryan_b on Open Thread May 2019 · 2019-05-01T15:45:23.641Z · score: 16 (5 votes) · LW · GW

It so happens that I just yesterday realized that you can put anyone's posts into a Sequence, and that further you can leave them in Draft form forever.

I am going to use this trick for posts I want to refer back to frequently, as a kind of bookmark feature within the site.

Open Thread May 2019

2019-05-01T15:43:23.982Z · score: 8 (2 votes)
Comment by ryan_b on "Everything is Correlated": An Anthology of the Psychology Debate · 2019-04-30T14:56:12.065Z · score: 3 (2 votes) · LW · GW

Does anyone know if private or corporate research has abandoned NHST and adopted better alternatives?

My suspicion is that the answer is no, because even companies that depend heavily on the correctness of their R&D output (like 3M or Dow Chemical, or marketing measurements at Amazon and Facebook) mostly have the problem that people chosen to do research are chosen by people who don't know about things like p-hacking. Even if they are diligent and careful business people, I expect a modest epistemology to conclude "do what Nature does" and stick with the current methods.

On the other hand, it seems like skipping the analysis part completely by tightly coupling research outputs to products/practices is a more common strategy, particularly for people like Amazon and Facebook where it can be very rapidly iterated. Yet it really seems like there is an opportunity to consistently win beyond what the market is doing here.

This is causing me to wonder how exactly to go about betting on an institution that places its bets on better analysis. I feel like in order to be sure these methods are really being put to work, a new institution will have to be built for the purpose. It also seems like some careful consideration needs to be given to which market the institution will operate in, because it should be somewhere the edge provided by better analysis has the greatest impact.

One step beyond this: is inducing mimicry of good practices via effective competition something that has been analyzed in an EA context?

Comment by ryan_b on Open Thread April 2019 · 2019-04-29T21:24:12.636Z · score: 6 (3 votes) · LW · GW

Over at 80,000 Hours they have an interview with Mark Lutter about charter cities. I think they are a cool idea, but my estimation of the utility of Lutter's organization was dealt a bitter blow with this line:

Because while we are the NGO that’s presenting directly to the Zambian government, a lot of the heavy lifting, they’re telling us who to talk to. I’m not gonna figure out Zambian politics. That’s really complicated, but they understand it.

They want to build cities, for the purpose of better governance, but plan A is to throw up their hands at local politics. I strongly feel like this is doing it wrong, in exactly the same way the US military failed to co-opt tribal leadership in Afghanistan (because they assumed the Pashtuns were basically Arabs) and the Roman failures to manage diplomacy on the frontier (because they couldn't tell the difference between a village chief and a king).

Later in the interview he mentions Brasilia specifically as an example of cities being built, which many will recognize as one of the core cases of failure in Seeing Like a State. I now fear the whole experiment will basically just be scientific forestry but for businesses.

Comment by ryan_b on Buying Value, not Price · 2019-04-29T16:48:50.647Z · score: 2 (1 votes) · LW · GW

This is a slightly tangential question, but is value generally accepted as being inclusive of risk?

Comment by ryan_b on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T16:02:08.510Z · score: 4 (3 votes) · LW · GW
I think we’re currently in an era of unusually large amounts of free speech that elites are starting to get spooked by and defend against.

I suspect this is explained sufficiently just by unusually large amounts of free speech. Speech is nearly free, nearly instant, and half the population of the planet now has global reach. I think of this as communication pollution.

Comment by ryan_b on The Forces of Blandness and the Disagreeable Majority · 2019-04-29T15:46:07.795Z · score: 3 (2 votes) · LW · GW

Following up on the Renee diResta piece, the DARPA program mentioned is Social Media in Strategic Communication. The manager of that program was Rand Waltzman, who currently works at the RAND Corporation. I think he makes for a much, much better source of information about the program and the role of propaganda management in government and the military.

A short summary of the program he wrote is here.

He uses the term cognitive security, and wrote a proposal for DoD funding of a center for it here.

He gave testimony to Congress on both the program and the policy proposal, found here.

Comment by ryan_b on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-26T21:15:12.558Z · score: 11 (5 votes) · LW · GW
This is indeed a bit sad and worrying for human-human communication.

Is it newly sad and worrying, though?

By contrast, I find it reassuring when someone explicitly notes the goal, and the gap between here and that goal, because we have rediscovered the motivation for the community. 10 years deep, and still on track.

Suck it, value drift!

Comment by ryan_b on Asymmetric Justice · 2019-04-26T17:44:26.121Z · score: 14 (5 votes) · LW · GW

I cannot speak for Zvi, but I suggest that the new thing is communication pollution.

Reality is far away and expensive. Signs are immediate and basically free. I intuitively suspect the gap is so huge that it is cheaper and easier to do a kind of sign-hopping, like frequency hopping, in lieu of working on or confronting the reality of the matter directly.

To provide more intuition about what I mean, compare communication costs to the falling costs of light over time. When our only lights were firewood it cost a significant fraction of the time of illumination in labor, for gathering and chopping wood. Now light is so ubiquitous that we turn them on with virtually no thought, and light pollution is a thing.

Comment by ryan_b on Asymmetric Justice · 2019-04-26T15:05:14.755Z · score: 2 (1 votes) · LW · GW
To respond directly, one who takes on a share of tail risk needs to enjoy a share of the generic upside, so the carpenter would get a small equity stake in the house if this was a non-trivial risk.

This helps a lot - I think that more explicit emphasis on risk and reward needing to be symmetric in both type and shape in addition to magnitude would help a lot.

Edit: would help a lot for the symmetric justice argument, I should have said. Although a casual introspective review of my conversations about risk says it would be a good idea for all such discussions. I will develop a habit of being explicit about the type and shape (which is to say distribution) of risks moving forward.

Comment by ryan_b on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:38:34.266Z · score: 8 (3 votes) · LW · GW

If there's a lot of coordination among AI, even if only through transactions, I feel like this implies we would need to add "resources which might be valuable to other AIs" to the list of things we can expect any given AI to instrumentally pursue.

Comment by ryan_b on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-25T17:30:59.214Z · score: 5 (3 votes) · LW · GW

It seems to me that computers don't suffer from most of the constraints humans do. For example, AI can expose its source code and its error-less memory. Humans have no such option, and our very best approximations are made of stories and error-prone memory.

They can provide guarantees which humans cannot, simulate one another within precise boundaries in a way humans cannot, calculate risk and confidence levels in a way humans cannot, communicate their preferences precisely in a way humans cannot. All of this seems to point in the direction of increased clarity and accuracy of trust.

On the other hand, I see no reason to believe AI will have the strong bias in favor of coordination or trust that we have, so it is possible that clear and accurate trust levels will make coordination a rare event. That seems off to me though, because it feels like saying they would be better off working alone in a world filled with potential competitors. That statement flatly disagrees with my reading of history.

Comment by ryan_b on Asymmetric Justice · 2019-04-25T16:58:01.760Z · score: 2 (1 votes) · LW · GW
If the carpenter’s son is executed when the house they built falls down and kills someone’s son, as in the Code of Hammurabi, well, that’s one way to ban inexpensive housing.

I thought the bridge example captured the problem of price very well, but this one seems different to me because it seems like it effectively advocates for houses falling down on people. The Code of Hammurabi is famously and literally symmetric, a strong example of lex talionis. If killing someone's son does not cause the carpenter to lose his, what does symmetric justice suggest?

Comment by ryan_b on The Principle of Predicted Improvement · 2019-04-24T21:00:16.804Z · score: 5 (3 votes) · LW · GW

I think this is very well done. The explanation is sufficiently clear that even I, the non-formal-math person, can follow the logic.

Comment by ryan_b on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-24T19:25:04.297Z · score: 4 (2 votes) · LW · GW

So this runs the risk of being tangential, but I generally view straight lines in graphs with acute suspicion. This is not the usual expectation: people expect things to keep changing the same way they have been, so they predict a straight line; we have lots of straight lines which come out of diligent aggregation of data like this.

My thinking shifted when I did an electromagnetic theory course for antennas, which contra the rest of engineering school was mostly about Maxwell's Equations and how to derive them. We relied a lot on the linearity property for those equations, and I was ceaselessly impressed by the stupendous power this gave us.

An unreasonable amount of power. Linearity yielded so much power that when something doesn't work well, my first guess is now that linearity is missing. Linearity gives us electricity and computers, precision and control. The impression I got was that anything in a graph that looks like a line isn't really a line, but is actually an approximate sum of different curves.

So this is now my prior. Granted, this basically punts the question of straight lines on graphs to 'why do different curves seem to sum to approximately straight lines so often' so it doesn't get me much. My working guess is something like 'because we expect straight lines, any more growth than that probably gets left on the table.'

Comment by ryan_b on On the Nature of Programming Languages · 2019-04-24T14:53:23.847Z · score: 3 (2 votes) · LW · GW

The usual example here is memory control. The point of the higher-level languages is to abstract away the details of memory and registers, so there is no malloc/free equivalent when writing in them; for this purpose they use garbage collection.

Of course, eventually people found a need for addressing these kinds of problems, and so features to allow for it were added later. C reigns supreme in embedded applications because of its precise memory and I/O capabilities, but there is now work on embedded Haskell and embedded LISP. Note, though, that those efforts revolve around special compilers and strategies for keeping the automatic garbage collection from blowing everything up, whereas with C you mostly just write regular C. Interrupts are a similar story.

Comment by ryan_b on Degree of duplication and coordination in projects that examine computing prices, AI progress, and related topics? · 2019-04-23T17:01:12.269Z · score: 14 (5 votes) · LW · GW

I propose that the motivation for all of these projects is not to find the answer, but rather to build the intuitions of the project members. Comparing the effects on intuition of reading research versus performing it, I strongly expect performing research to have the greater effect.

Because of this, I expect that a significant chunk of all the people who are working in any capacity on the AI risk problem will take a direct shot at similar projects themselves, even if they don't write it up. I would also be surprised to find an org without any such people in it.

Comment by ryan_b on 1960: The Year The Singularity Was Cancelled · 2019-04-23T15:07:39.551Z · score: 4 (2 votes) · LW · GW

This seems highly plausible to me. It would be interesting to see something that more closely tracks different kinds of innovation - for example, how do hygiene and vaccinations (which prevent deaths) compare to domesticated crops and irrigation (which increase productivity directly) in terms of population growth?

Faster growth rates means more money means more AIs researching new technology means even faster growth rates, and so on to infinity.

My prior is that this will not require anywhere near human-level intelligence. I firmly expect this can be accomplished with the kind of AI we already possess, in tandem with certain imminent kinds of automation.

Comment by ryan_b on 1960: The Year The Singularity Was Cancelled · 2019-04-23T14:59:41.723Z · score: 6 (3 votes) · LW · GW

If 1 in 1,000,000 people is an irrepressible genius and produces a technological invention, then we should see as many technological inventions as there are millions of people.

Alternatively, each person has some chance of making such an invention, and the more people there are, the more chances there are for an invention to happen.

If the question is 'what is the model of innovation that justifies the assumption' then I don't know, but I would guess some variant of the Great Men theory of history. We might model it as an IQ distribution.

Comment by ryan_b on On the Nature of Programming Languages · 2019-04-22T18:24:09.282Z · score: 4 (3 votes) · LW · GW

On the other-other hand, an example was staring me in the face that points more closely to your old intuitions: I just started reading The Structure and Interpretation of Classical Mechanics, which is the textbook used for classical mechanics at MIT. Of particular note is that the book uses Scheme, a LISP dialect, in order to enforce clarity and correctness of understanding of mechanics. The programming language is only covered in the appendix; they spend an hour or two on it in the course.

The goal here is to raise the standard of understanding the world to 'can you explain it to the computer.'

Comment by ryan_b on Helen Toner on China, CSET, and AI · 2019-04-22T15:59:28.069Z · score: 9 (4 votes) · LW · GW

Helen's comments on the assumed superiority of China in gathering data about people brought to mind the recent disagreement between Rich Sutton and Max Welling.

Sutton recently argued that the bitter lesson of AI research is that the methods which best leverage computation are the most effective. Welling responds that data is similarly important, particularly in domains which are not well defined.

This causes me to suspect that the US and China directions for AI research will significantly diverge in the medium term.

Comment by ryan_b on On the Nature of Programming Languages · 2019-04-22T15:19:30.718Z · score: 12 (6 votes) · LW · GW

My intuition is strongly opposite yours of ten years ago.

  • For example, there are Domain Specific Languages, which are designed exactly for one problem domain.
  • C, the most widespread general-purpose programming language, does things that are extremely difficult or impossible in highly abstract languages like Haskell or LISP, which doesn't seem to match the notion of all three being a helpful way to think about the world.
  • Most of what we wind up doing with programming languages is building software tools. We prefer programs to be written such that the thinking is clear and correct, but this seems to me motivated more by convenience than anything else, and it rarely turns out that way besides.

I would go as far as to say that the case of 'our imperfect brains dealing with a complex world' is in fact a series of specific sub-problems, and we build tools for solving them on that basis.

On the other hand, it feels like there is a large influence on programming languages that isn't well captured by the tool-for-problem or crutch-for-psychology dichotomy: working with other people. Consider the object-oriented languages, like Java. For all that an object is a convenient way to represent the world, and for all that it is meant to provide abstractions like inheritance, what actually seems to have driven the popularity of object orientation is that it provides a way for the next programmer not to know exactly what is happening in the code, but instead to take the current crop of objects as given and then do whatever additional thing they need done.

Should we consider a group of people separated in time, working on the same problem, to be an independent problem? Or should we consider working with people-in-the-future something we are psychologically bad at, and in need of a better way to organize our thinking? While the former seems more reasonable to me, I don't actually know the answer here. One way to tell might be if the people who wrote Java said specifically somewhere that they wanted a language that would make it easier for multiple people to write large programs together over time. Another way might be if everyone who learned Java chose it because they liked not having to worry much about what the last guy did, so long as the objects work.

Comment by ryan_b on Human performance, psychometry, and baseball statistics · 2019-04-19T14:33:57.232Z · score: 2 (1 votes) · LW · GW

This is very old, but if I am eyeballing the timeline correctly we should be approaching the point where you are deciding whether to cut your losses or endorse the lessons. So if I may, how did it go?

Comment by ryan_b on Could waste heat become an environment problem in the future (centuries)? · 2019-04-17T20:32:41.720Z · score: 2 (1 votes) · LW · GW

There's a post about this at the blog DoTheMath, which calculates we boil ourselves with waste heat in ~400 years, assuming GDP doubles every 100 years and per capita energy consumption increases at the same rate it has been for the previous ~400 years.

The usual economic retort is that the economy could come to look very different from the one we are used to, and decouple from energy consumption. But the assumption about waste heat is what is doing the work here, and we have recently developed thermal transistors, including ones designed out of quantum objects. It also turns out we might be able to beat the Planck limit in the far field. Which is to say, we can build heat computers, and then waste heat could be converted into computation.

That doesn't solve the problem of too much energy use being bad, but if waste heat is computation then we can hit peak (safe) output, stay there, and still add value.

Comment by ryan_b on StrongerByScience: a rational strength training website · 2019-04-17T19:20:21.746Z · score: 2 (1 votes) · LW · GW

Har! I thought that was just a titling convention we'd adopted. Oops!

StrongerByScience: a rational strength training website

2019-04-17T18:12:47.481Z · score: 15 (7 votes)

Machine Pastoralism

2019-04-03T16:04:02.450Z · score: 12 (7 votes)

Open Thread March 2019

2019-03-07T18:26:02.976Z · score: 10 (4 votes)

Open Thread February 2019

2019-02-07T18:00:45.772Z · score: 20 (7 votes)

Towards equilibria-breaking methods

2019-01-29T16:19:57.564Z · score: 23 (7 votes)

How could shares in a megaproject return value to shareholders?

2019-01-18T18:36:34.916Z · score: 18 (4 votes)

Buy shares in a megaproject

2019-01-16T16:18:50.177Z · score: 15 (6 votes)

Megaproject management

2019-01-11T17:08:37.308Z · score: 57 (21 votes)

Towards no-math, graphical instructions for prediction markets

2019-01-04T16:39:58.479Z · score: 30 (13 votes)

Strategy is the Deconfusion of Action

2019-01-02T20:56:28.124Z · score: 73 (23 votes)

Systems Engineering and the META Program

2018-12-20T20:19:25.819Z · score: 31 (11 votes)

Is cognitive load a factor in community decline?

2018-12-07T15:45:20.605Z · score: 20 (7 votes)

Genetically Modified Humans Born (Allegedly)

2018-11-28T16:14:05.477Z · score: 30 (9 votes)

Real-time hiring with prediction markets

2018-11-09T22:10:18.576Z · score: 19 (5 votes)

Update the best textbooks on every subject list

2018-11-08T20:54:35.300Z · score: 78 (28 votes)

An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics

2018-10-30T18:36:14.159Z · score: 30 (6 votes)

Why don’t we treat geniuses like professional athletes?

2018-10-11T15:37:33.688Z · score: 20 (16 votes)

Thinkerly: Grammarly for writing good thoughts

2018-10-11T14:57:04.571Z · score: 6 (6 votes)

Simple Metaphor About Compressed Sensing

2018-07-17T15:47:17.909Z · score: 8 (7 votes)

Book Review: Why Honor Matters

2018-06-25T20:53:48.671Z · score: 31 (13 votes)

Does anyone use advanced media projects?

2018-06-20T23:33:45.405Z · score: 45 (14 votes)

An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes

2018-04-19T17:30:39.893Z · score: 37 (8 votes)

Death in Groups II

2018-04-13T18:12:30.427Z · score: 32 (7 votes)

Death in Groups

2018-04-05T00:45:24.990Z · score: 47 (18 votes)

Ancient Social Patterns: Comitatus

2018-03-05T18:28:35.765Z · score: 20 (7 votes)

Book Review - Probability and Finance: It's Only a Game!

2018-01-23T18:52:23.602Z · score: 18 (9 votes)

Conversational Presentation of Why Automation is Different This Time

2018-01-17T22:11:32.083Z · score: 70 (29 votes)

Arbitrary Math Questions

2017-11-21T01:18:47.430Z · score: 8 (4 votes)

Set, Game, Match

2017-11-09T23:06:53.672Z · score: 5 (2 votes)

Reading Papers in Undergrad

2017-11-09T19:24:13.044Z · score: 42 (14 votes)