
comment by habryka (habryka4) · 2018-05-18T21:29:00.129Z · LW(p) · GW(p)

Moved back to your drafts, given the title.

Replies from: ryan_b
comment by ryan_b · 2018-05-18T21:35:09.758Z · LW(p) · GW(p)

Much appreciated!

comment by ryan_b · 2019-08-29T15:24:43.091Z · LW(p) · GW(p)

For the Book Review: Reframing Superintelligence (SSC) [LW · GW] linkpost:

There seems to be some missing linkage between what a computer knows and what it can do. I feel like there is some notion of action that is missing: how the heck does an AI of any given sophistication add new actions to its repertoire? This can't happen in any software we currently have - even the ability to categorize or define actions doesn't imply the ability to create new ones.

Logical Actions are Optimization Channels: intuition from Information Theory - a given action is like a channel, and the message is optimization of the environment vis-a-vis goals. Logical actions are not the same as actual actions: for example, it is obvious to humans that we can look at what a computer actually does to get information about its intentions, so having drones steal candy from babies can turn us against it regardless of what it displays on the monitor. But a logical action of 'Signal Good Intentions' encompasses both what the monitor displays and how humans perceive the drone activity. Further, we can look at how dividing bandwidth up into multiple channels impacts the efficiency of transmitting a message as an intuition for how more logical actions increase the capability of the AI.
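
To make the channel intuition concrete, here is a minimal sketch assuming the standard Shannon-Hartley capacity formula (the numbers and the even power split are my own illustrative choices, not anything from the post):

```python
import math

def capacity(bandwidth_hz, signal_w, noise_density_w_per_hz):
    """Shannon-Hartley capacity of a single channel, in bits per second."""
    noise_w = noise_density_w_per_hz * bandwidth_hz
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

B, P, N0 = 1e6, 1.0, 1e-9  # made-up bandwidth, power, and noise density

# One wide channel vs. the same budget split evenly across n channels.
for n in (1, 2, 4, 8):
    total = n * capacity(B / n, P / n, N0)
    print(n, f"{total:.0f} bits/s")
```

Under an even split the totals come out identical, which suggests the force of the analogy is less about raw capacity and more about the ability to carry independent messages - independent optimizations - at once.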

This seems to be orthogonal to the question of agency - even an AI with many logical actions that it optimizes won't generate new ones unless one of those logical actions is 'search action space for new actions'. This makes it clear that Tool AIs with a large action set will be strictly more powerful than Agent AIs that start only with 'search action space', at least up to a certain point.


Replies from: ryan_b, ryan_b, ryan_b
comment by ryan_b · 2019-08-29T19:55:17.736Z · LW(p) · GW(p)

3. Naively, actions feel like they require causal reasoning, and causal reasoning of any kind seems to require the ability to reason about two parts of the environment. One of these parts can be you (or the AI).

But I am not sure this is the case. Strong correlation seems to be good enough for a human brain - we do all kinds of actions without any understanding of what we are doing or why. This can go as far as provoking conscious confusion during the action. Based on this lower standard, what correlation would be needed?

Boundaries of some kind, because we need some way to localize what we are doing and looking at; strong correlation, chiefly as a matter of efficiency. We want a way to describe a correlation such that we can chain it with other correlations, and then eventually bundle them together as an action. Then doing new actions is a matter of chaining the correlations back to something we can currently do.

Replies from: ryan_b
comment by ryan_b · 2019-08-29T20:00:09.762Z · LW(p) · GW(p)

I feel like a Rube-Goldberg device would be a good intuition pump here. How can we describe a Rube-Goldberg device in terms of correlations? What is a good way to break it into chunks, and then also a good way to connect those chunks? Since they are usually built of simple machines, everything is mathematically tractable - we have a good grip on those.
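
One toy way to do this (my construction; the transfer function and noise level are arbitrary): simulate a chain of simple machines with a little noise at each stage, then measure how strongly each stage correlates with the final outcome.

```python
import random
import statistics

def stage(x, noise=0.1):
    """One simple machine: a deterministic transfer function plus noise."""
    return 2.0 * x + random.gauss(0, noise)

def run_device(n_stages=5):
    """Run the whole device once, recording the state at every stage."""
    states = [random.random()]  # the initial push
    for _ in range(n_stages):
        states.append(stage(states[-1]))
    return states

runs = [run_device() for _ in range(10_000)]
final = [r[-1] for r in runs]
for i in range(len(runs[0])):
    r_i = statistics.correlation([r[i] for r in runs], final)
    print(f"stage {i}: correlation with final outcome = {r_i:.3f}")
```

In this linear chain the chunks connect exactly as hoped: the end-to-end correlation is just the product of the stage-to-stage ones, so a new action can be validated by chaining local correlations back to something already doable.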

comment by ryan_b · 2019-08-29T18:49:53.202Z · LW(p) · GW(p)

2. Is thinking about actions just rephrasing the agent-environment question? It feels like the answer is no, because it isn't as though being able to specify the relationship between the agent and the environment changes the need to compute the specific details of any particular action.

But it might be impossible to specify an action exactly without being able to specify the agent-environment relationship exactly. Could it be (or is it) stated implicitly?

Replies from: ryan_b
comment by ryan_b · 2019-08-29T19:28:39.320Z · LW(p) · GW(p)

Actions are not just Embedded Agency in a different guise. From the Full-Text Version it looks to me like what actions are and how to discover them is abstracted away, which makes sense in the context of that project.

It appears most relevant to problems associated with multi-level models.

comment by ryan_b · 2019-08-29T18:42:33.809Z · LW(p) · GW(p)

1. I am deeply confused by this.

The older conversations about Tool AI seemed to focus on the difference between an Oracle that answers questions and one that does things. I feel like this distinction is bigger, and different in kind, from how it was made out to be, because doing things is really hard. It feels like the paradox of sensing being complicated should go two ways.

Checking the Wikipedia page for Moravec's Paradox, "sensorimotor" is how they describe it, so both sensing and motor skills (inputs and outputs) are covered. My intuition fairly screams that this should generalize to any other environment-affecting action. So:

  • The more inputs an AI starts with, the easier it is to recognize other inputs/outputs.
  • The more outputs an AI starts with, the easier it is to add other inputs/outputs.
  • This still doesn't identify what causes the machine to try to affect the world at all.

comment by ryan_b · 2019-08-27T20:58:15.517Z · LW(p) · GW(p)

From Where are people thinking and talking about global coordination for AI safety? [LW(p) · GW(p)]

3. When humans made advances in coordination ability in the past, how was that accomplished? What are the best places to apply leverage today?

Chiefly because groups that were not sufficiently coordinated were destroyed, or absorbed by competing groups.

Replies from: ryan_b
comment by ryan_b · 2019-08-27T21:47:54.332Z · LW(p) · GW(p)

4. Information technology has massively increased certain kinds of coordination (e.g., email, eBay, Facebook, Uber), but at the international relations level, IT seems to have made very little impact. Why?

I note the coordination is entirely at a lower level than those companies: mostly individuals, and some small groups, are using these services for coordination. It seems like coordination innovations aren't bottom-up, but rather top-down (even if the IT examples are mostly opt-in). This seems to match other large coordination improvements, like empire, monotheism, or corporations. There is no higher level of abstraction than governments from which to improve international relations, it seems to me.

Quite separately, we could ask: what are the specific challenges in international relations that IT could address? The problems mostly revolve around questions of trust, questions of the basic competence of human agents (diplomats, ambassadors, heads of state, etc.), and fundamental conflicts of interest. None of these are really addressable with off-the-shelf IT solutions.

That being said, it's also clear that Facebook and Uber aren't even trying to target problems related to international relations. We know contracting with multiple governments is achievable, because companies like Google, Microsoft, and Palantir all manage it, selling IT for intelligence purposes. Dominic Cummings has a blog post, High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’, that speculates about how international relations could be improved by making the stupendous complexity of the information at work more readily available to decision makers, both for educational purposes and in real time. Maybe there would be an opportunity for a Situation Room Industries, or similar.


comment by ryan_b · 2019-08-13T20:19:34.373Z · LW(p) · GW(p)

Rough draft for Scott's Secular Cycles post:


This doesn't qualify as criticism per se, but might offer some help for coloring in the edges. The only real suspicion I have about Turchin's work is that it follows the traditional model of only looking at agrarian empires, even though better information is available now outside of this traditional focus.

  • Significant change in the understanding of Mongols and other nomadic empires
    • From Needy Nomad (of material goods for survival) to Tradey Nomad (of luxury goods for maintaining their social organization), from Thomas Barfield in The Perilous Frontier.
      • Under this lens, a large unified agrarian state provides enough luxury trade and raiding for large nomadic confederacies to form.
    • Beckwith goes further in Empires of the Silk Road (into controversy), arguing that nomads were the drivers of Eurasian commerce, on the basis of records detailing huge importations of finished goods, including things like iron weapons and armor, into China. Further, he argues for an inverse relationship between the agrarian states and the nomad confederacies: noting that the confederacies grew larger first, he posits that the formation of a large confederacy creates a kind of Silk Road free-trade zone, which generates enough surplus wealth in the agrarian kingdoms to fund wars of unification.
    • A little detail about the Han-Xiongnu Wars provides some context about a stupendous crisis that might devour a golden age.

comment by ryan_b · 2019-07-23T20:44:58.016Z · LW(p) · GW(p)

In megaproject management, or even for multi-stakeholder questions more generally, I wonder about the utility of doing something like GitHub issue tracking, or even using a full-blown CRM tool, to help manage the different stakeholders.
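
A minimal sketch of the data model such a tool would track (field names are hypothetical; this is not any real GitHub or CRM schema):

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderIssue:
    """One open question, commitment, or blocker, owned by a stakeholder."""
    stakeholder: str                        # e.g. "city council", "contractor"
    title: str
    status: str = "open"                    # open / blocked / resolved
    depends_on: list[str] = field(default_factory=list)

issues = [
    StakeholderIssue("city council", "permit approval"),
    StakeholderIssue("contractor", "pour foundation",
                     depends_on=["permit approval"]),
]

# Surface cross-stakeholder blockers, the way an issue tracker surfaces them.
open_titles = {i.title for i in issues if i.status != "resolved"}
blocked = [i.title for i in issues if set(i.depends_on) & open_titles]
print(blocked)  # -> ['pour foundation']
```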

comment by ryan_b · 2019-05-22T15:46:39.832Z · LW(p) · GW(p)

Strategy is the search for victory.

Suppose we take search completely literally.

1. We have a current environment.

2. We need to find as many future branches in which we are victorious as possible.

3. We try to preserve as many victorious branches as possible, and to screen off as many defeat branches as possible.

4. Once victory becomes likely, we can begin to discriminate between better or worse victories.
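
Here is a toy sketch of steps 2 and 3 (entirely my construction, assuming a small finite game tree with labeled terminal states):

```python
def terminal_states(state, depth, moves):
    """Step 2: enumerate every future branch reachable from `state`."""
    if depth == 0:
        yield state
        return
    for m in moves:
        yield from terminal_states(state + (m,), depth - 1, moves)

def victory(state):
    """Toy win condition: more favorable events than unfavorable ones."""
    return state.count("good") > state.count("bad")

def best_intervention(state, depth, moves):
    """Step 3: pick the move that preserves the most victorious branches."""
    def preserved(m):
        branches = terminal_states(state + (m,), depth - 1, moves)
        return sum(victory(b) for b in branches)
    return max(moves, key=preserved)

print(best_intervention((), depth=4, moves=("good", "bad")))  # -> 'good'
```

Step 4 would then swap victory's boolean out for a graded score, once enough winning branches survive to be worth ranking.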

The strategy itself is essentially the rule we use for making interventions: the causal theory of success.

We need a reference class of victories, on which to base our theory of success.

We need a reference class of defeats, for the purposes of murphyjitsu. This seems to be unusually important, because it looks kind of like avoiding errors is the more important feature, the closer we get to total mastery of the environment. I think this is largely captured by things like doctrine and training, but we haven't done a good job of capturing it in terms of decisions.

Related to: Macroscopic Predictions [LW · GW]. This is kind of like using Gibbs Rule at the level of "interventions we can make" to predict victory.

comment by ryan_b · 2019-05-20T21:08:57.038Z · LW(p) · GW(p)

Mutual Information

I suspect humans have an arbitrary preference for mutual information. This pattern is well-matched by preference for kin, and also any other in-group; for shared language; for shared experiences.

Actions as mutual information generators

It occurs to me that doing things together generates a tremendous amount of mutual information. Same place, same time, same event, same sensory stimuli, same social context. In the case of things like rituals, we can sacrifice being in the same place while keeping all the other information the same; in the case of traditions like a rite of passage we can sacrifice being in the same time while keeping all the other information the same, which allows for mutual information, at a high level of resolution, with people who are dead and people who are yet to be.

I further suspect that the intensity of an experience weighs more; how exactly isn't clear, because an intense event doesn't necessarily contain more information than a boring one. I wonder if it is because intense experiences leave more vivid memories, and so after a period of time they will have more information relative to other experiences from that time or before.
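
A toy quantification of "doing things together generates mutual information" (a sketch under my own assumptions; the four events and the plug-in estimator are arbitrary choices):

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from observed (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

events = ["feast", "storm", "battle", "quiet day"]
# Apart: two people experience independent events.
apart = [(random.choice(events), random.choice(events)) for _ in range(5000)]
# Together: both experience the same event (same place, same time).
together = [(e, e) for e in (random.choice(events) for _ in range(5000))]

print(f"apart:    {mutual_information(apart):.2f} bits")     # ~0
print(f"together: {mutual_information(together):.2f} bits")  # ~2, log2(4)
```

On this framing, rituals and rites of passage look like ways of pinning most coordinates of the joint experience while letting one (place, or time) vary.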

Replies from: ryan_b, ryan_b, ryan_b, ryan_b
comment by ryan_b · 2019-06-10T14:58:48.509Z · LW(p) · GW(p)

This might be more fundamental than I initially thought. If people have a preference for shared information, this provides an upward pressure on communication in general - there is now a reward for telling people things just because, and also a reward for listening to people tell you things. This is entirely separate from any instrumental value the shared information may have.

comment by ryan_b · 2019-06-10T14:57:01.956Z · LW(p) · GW(p)

I suspect that a lot of traditional things do double-duty, being instrumentally valuable and encouraging mutual information. Material example: the decorations on otherwise mundane items, like weapons or cauldrons.

It occurs to me that any given practice will probably be selected for resilience more than it will be for efficiency - efficiency is a threshold that must be met, but beyond that threshold, maximizing the likelihood that the minimum will be met seems more likely to propagate.
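
A toy version of the threshold claim (my numbers; treating a practice's per-generation yield as a Gaussian draw is purely for illustration):

```python
import random

def meets_minimum(mean, sd, threshold=1.0, generations=10_000):
    """Fraction of generations in which a practice clears the threshold."""
    hits = sum(random.gauss(mean, sd) >= threshold for _ in range(generations))
    return hits / generations

print(meets_minimum(1.5, 1.0))  # efficient but erratic: ~0.69
print(meets_minimum(1.2, 0.1))  # modest but reliable:   ~0.98
```

The practice with the lower average yield clears the minimum far more often, which is the propagation pressure described above.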

If people have an intrinsic desire to share information, then it seems likely that a practice for which sharing information is more common is more likely to persist. Hence the prevalence of so many multi-step processes; every additional step is another opportunity to share information (teach, tell where resources are, etc).


comment by ryan_b · 2019-05-31T17:32:14.191Z · LW(p) · GW(p)

From Introducing: Asabiyah, which itself summarizes one concept from Ibn Khaldun's Muqaddimah:

Thus Khaldun notes:
“The consequences of common descent, though natural, still are something imaginary. The real thing to bring about the feeling of close contact is social intercourse, friendly association, long familiarity, and the companionship that results from growing up together, having the same wet nurse, and sharing the other circumstances of life and death. If close contact is established in such a manner, the result will be affection and cooperation.”

And later:

In Ibn Khaldun’s thought, conquest itself seems to be the driving force behind the consolidation of two asabiyah into one. Once a weaker tribal group is defeated, its leaders removed and men of valor killed, pacified, or subsumed under a new organization so utterly that the ‘tit for tat’ vengeance schemes so common to nomadic society (which Ibn Khaldun sees as the root cause of war) are no longer possible, then their asabiyah can be swallowed up in the larger group’s. What is key here is that the other groups – after their initial defeat – are not coerced into having the same feeling of asabiyah as the main group. Asabiyah that must be coerced is not asabiyah at all (this is a theme Ibn Khaldun touches on often and we will return to it in more detail when we talk about why asabiyah declines in civilized states). Instead, those who have been allowed to join the conquering host slowly start to feel its asabiyah be subsumed as the two groups “enter into close contact,” sharing the same trials, foods, circumstances, and becoming acquainted with the others' customs, but just as importantly, sharing the same set of incentives. Once the losers are forced together with the winners, defeat for the main clan is defeat for all; glory for the main clan is glory for all; booty gained by the main clan’s conquests becomes booty to be shared with all. Once people from a subordinate group begin to feel like the rise and fall of their own fortunes is inextricably linked to the fate of the group that overpowered them then they become willing to sacrifice and die for the sake of this group, for it has become their group.

Shared experiences create a lot of mutual information, and enough of it builds the fearsome bonds which populate our legends and histories.

comment by ryan_b · 2019-05-31T15:59:25.961Z · LW(p) · GW(p)

Seemingly altruistic actions such as creating art and music may qualify. From Sexual Selection Through Mate Choice Does Not Explain the Evolution of Art and Music:

Miller, however, criticizes the idea that “art conveys cultural values and socializes the young,” writing that,
"The view that art conveys cultural values and socializes the young seems plausible at first glance. It could be called the propaganda theory of art. The trouble with propaganda is that it is usually produced only by large institutions that can pay propagandists. In small prehistoric bands, who would have any incentive to spend the time and energy producing group propaganda? It would be an altruistic act in the technical biological sense: a behavior with high costs to the individual and diffuse benefits to the group. Such altruism is not usually favored by evolution."
The answer to Miller’s question—who produces the propaganda?—is quite clear in the ethnographic data: the old men do.

Altruism is not usually favored by evolution, but if the same mechanism by which we prefer to spread our genetic information recognizes other kinds of information, then it would not feel altruistic from the inside. Rather, making songs and having other people sing them would be its own reward in precisely the same way that having children is.

Replies from: ryan_b
comment by ryan_b · 2019-05-31T17:16:10.785Z · LW(p) · GW(p)

More on semi-altruism: Darwin's Unfinished Symphony argues that what sets humans apart is our ability to reliably teach (review here).

If we consistently teach well, it seems to me it must consistently yield rewards for the teachers. As with the argument for storytellers in the example above, this may be in the form of benefits from social connections. But, because we know time discounting [LW · GW] is a thing and it takes time to teach people something, it seems likely to me that there is an intrinsic reward for teaching. A reasonable way to describe teaching would be mutualizing information.

comment by ryan_b · 2019-05-14T16:32:45.422Z · LW(p) · GW(p)

While reading Meaningness, a clearer description of the kinds of things I want in thinking emerged.

  • I want an honor culture, generalized to include questions of fact.
  • It seems to me that anything which we could interact with but do not describe is completely constrained to System 1 thinking. In order to reason explicitly, which is to say use System 2, we need an explicit description.
  • The things we do a really crappy job reasoning about, but which are of the highest importance, are people and groups. By "people" I mean specifically ourselves: we need to have a description of ourselves. We also need to have a description of groups. With these two descriptions we can reason explicitly about our membership in a group: whether to join, whether to leave, how to succeed within one, how to improve it, etc.
  • I strongly suspect the "center of gravity" for civilization is found within groups.
  • Specifically, the kind of group I am concerned with is the unit of action.

comment by ryan_b · 2019-05-03T18:09:46.853Z · LW(p) · GW(p)

Prophecy is a narrative prediction

In the monotheist traditions, prophets are given specific instructions from God which they must disseminate. In smaller, local traditions, prophets are skilled in interpreting signs from the gods, such as dreams, the flights of birds, or the reading of entrails/ashes/bones.

Consider instead the use of narrative in structuring and communicating a prediction (at various levels of detail). Even in the case of good predictions using state-of-the-art methods, people often ignore them or fail to account for them properly. The question becomes how to get people who are not intimate with the prediction methods, or who do not trust the authority, to act as though it were true.

See also: self-fulfilling prophecy, where the contents of the prophecy drive people to act in such a way as to cause it to come true. This is the baseline model for start-ups: a good enough story about success causes people to expect more success, which is the mechanism by which start-ups are judged to succeed. By contrast, a popular trick in ancient myths is a bad prophecy which people cause by trying to avoid it, e.g., telling the king one of his grandchildren will supplant him, so the king tries to have them all drowned, but one is smuggled away into the lands of the king's enemies and then returns at the head of a large army 18 years later. Opposite the first example would sit something like propaganda distributed by invading armies, whereby they claim opposing them is hopeless and try to persuade enough people of this that the actual defense is compromised.

It seems like the appropriate cycle would be: 1. state-of-the-art prediction methods to estimate the future; 2. an analysis of how the story might affect the prediction under various scales of adoption, namely if everyone acting as though it were true changed the outcome; 3. build a story according to the desired outcome in light of 2.
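
Step 2 of that cycle can be made concrete with a toy fixed-point model (the functional form and all numbers are my own assumptions):

```python
def outcome_probability(acting_fraction, base=0.2, boost=0.6):
    """Toy model: success odds rise with the fraction acting as if true."""
    return base + boost * acting_fraction

def settled_prediction(story_adoption, iters=100):
    """Iterate until the published prediction and the outcome it causes agree."""
    p = 0.5  # initial published prediction
    for _ in range(iters):
        p = outcome_probability(story_adoption * p)
    return p

for adoption in (0.0, 0.5, 1.0):
    print(f"adoption={adoption}: stable prediction = {settled_prediction(adoption):.3f}")
```

The wider the story's adoption, the more the honest prediction itself shifts, which is exactly what step 2 has to account for before step 3 builds the story.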

Divination is pseudo-RNG | a gut-check.

Replies from: ryan_b
comment by ryan_b · 2019-05-06T15:03:06.339Z · LW(p) · GW(p)

Some related posts: