Posts

What currents of thought on LessWrong do you want to see distilled? 2021-01-08T21:43:33.464Z
The National Defense Authorization Act Contains AI Provisions 2021-01-05T15:51:28.329Z
The Best Visualizations on Every Subject 2020-12-21T22:51:54.665Z
ryan_b's Shortform 2020-02-06T17:56:33.066Z
Open & Welcome Thread - February 2020 2020-02-04T20:49:54.924Z
Funding Long Shots 2020-01-28T22:07:16.235Z
We need to revisit AI rewriting its source code 2019-12-27T18:27:55.315Z
Units of Action 2019-11-07T17:47:13.141Z
Natural laws should be explicit constraints on strategy space 2019-08-13T20:22:47.933Z
Offering public comment in the Federal rulemaking process 2019-07-15T20:31:39.182Z
Outline of NIST draft plan for AI standards 2019-07-09T17:30:45.721Z
NIST: draft plan for AI standards development 2019-07-08T14:13:09.314Z
Open Thread July 2019 2019-07-03T15:07:40.991Z
Systems Engineering Advancement Research Initiative 2019-06-28T17:57:54.606Z
Financial engineering for funding drug research 2019-05-10T18:46:03.029Z
Open Thread May 2019 2019-05-01T15:43:23.982Z
StrongerByScience: a rational strength training website 2019-04-17T18:12:47.481Z
Machine Pastoralism 2019-04-03T16:04:02.450Z
Open Thread March 2019 2019-03-07T18:26:02.976Z
Open Thread February 2019 2019-02-07T18:00:45.772Z
Towards equilibria-breaking methods 2019-01-29T16:19:57.564Z
How could shares in a megaproject return value to shareholders? 2019-01-18T18:36:34.916Z
Buy shares in a megaproject 2019-01-16T16:18:50.177Z
Megaproject management 2019-01-11T17:08:37.308Z
Towards no-math, graphical instructions for prediction markets 2019-01-04T16:39:58.479Z
Strategy is the Deconfusion of Action 2019-01-02T20:56:28.124Z
Systems Engineering and the META Program 2018-12-20T20:19:25.819Z
Is cognitive load a factor in community decline? 2018-12-07T15:45:20.605Z
Genetically Modified Humans Born (Allegedly) 2018-11-28T16:14:05.477Z
Real-time hiring with prediction markets 2018-11-09T22:10:18.576Z
Update the best textbooks on every subject list 2018-11-08T20:54:35.300Z
An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics 2018-10-30T18:36:14.159Z
Why don’t we treat geniuses like professional athletes? 2018-10-11T15:37:33.688Z
Thinkerly: Grammarly for writing good thoughts 2018-10-11T14:57:04.571Z
Simple Metaphor About Compressed Sensing 2018-07-17T15:47:17.909Z
Book Review: Why Honor Matters 2018-06-25T20:53:48.671Z
Does anyone use advanced media projects? 2018-06-20T23:33:45.405Z
An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes 2018-04-19T17:30:39.893Z
Death in Groups II 2018-04-13T18:12:30.427Z
Death in Groups 2018-04-05T00:45:24.990Z
Ancient Social Patterns: Comitatus 2018-03-05T18:28:35.765Z
Book Review - Probability and Finance: It's Only a Game! 2018-01-23T18:52:23.602Z
Conversational Presentation of Why Automation is Different This Time 2018-01-17T22:11:32.083Z
Arbitrary Math Questions 2017-11-21T01:18:47.430Z
Set, Game, Match 2017-11-09T23:06:53.672Z
Reading Papers in Undergrad 2017-11-09T19:24:13.044Z

Comments

Comment by ryan_b on Fire Law Incentives · 2021-07-23T17:27:41.605Z · LW · GW

Does it count for this purpose when the fire vigilante is actually a fireman who just wants the overtime? It is physically the same, but there are broken incentives at work.

Comment by ryan_b on AlphaFold 2 paper released: "Highly accurate protein structure prediction with AlphaFold", Jumper et al 2021 · 2021-07-15T19:33:29.365Z · LW · GW

Here we provide the first computational method that can regularly predict protein structures with atomic accuracy even where no similar structure is known.

Holy crap. I confess this one catches me by surprise; within my hopes, but beyond my expectations.

Comment by ryan_b on Lafayette: traffic vessels · 2021-07-15T01:44:12.382Z · LW · GW

I grok this feeling.

I have found similar veins of thought in discussions about walkable cities, and occasionally in arguments over architecture. Stuff like the difference between our homes being cocoons or places of visitation; how ruthlessly utilitarian structures make humans feel vulnerable; even details like laying out a party such that people have to move around to do different things, forcing them to interact.

I don't recall whether it was at Bell Labs or at Princeton, but I recall someone thought the single long hallway to the cafeteria was an important feature of the work space because it was the only time you bumped into everyone reliably.

Looking at these examples we have micro (homes and hallways at the office) and macro (cities), but I do feel like small towns are a neglected middle.

Comment by ryan_b on Relentlessness · 2021-07-08T15:48:59.653Z · LW · GW

I wonder if it is a question of dimensionality somehow. When a person is immersed in a language, they are being bombarded with all of the language's features repeatedly and more or less constantly. This means any language task they can succeed at, they will; then they can expand from these islands of competence outward.

Math class is a strictly sequential and linear endeavor. Normally we get exactly one sequence of problems, which has exactly one ordering, and once a type of problem is past it will never appear again. There is no flexibility at all in how to approach learning math in a class, unless initiative is undertaken by the student completely independent of the instruction.

I learned much more about math once I abandoned the math class approach; I did more immersive things like reading history about math concepts, and different applications, and explanations for why things are wrong.

Semi-separately, I also consider the issue of feedback loops. While talking to and wrangling my kid, feedback is mostly instantaneous. Feedback in math class is usually delayed by at least a day, and is sparse to boot, taking the form of a binary correct-or-incorrect result. Reflecting on it, the sparseness complaint is also an issue of dimensionality of a sort.

Comment by ryan_b on The topic is not the content · 2021-07-08T13:26:58.843Z · LW · GW

It feels to me like the meaning element makes an excellent third side of a triangle. Contrasting the content between cases where the results are meaningful to us versus where they are not would be pretty useful information.

I am reminded of a presentation I saw (YouTube? TED?) where a researcher was talking about a series of experiments they had done where the task was to assemble as many toy robots or legos or something as possible. The point was that it was trivial, and any functional adult could do it, and probably efficiently if they were motivated. The key was that in the control group they just put the completed toys away in a bin under the table, but in the experimental group they pulled them back apart right in front of their eyes, and then dumped the pieces in the bin under the table. The group with their work being undone before their eyes consistently produced less in the allotted time, even though everyone in both groups knew that the task was strictly meaningless.

I feel like the same sort of mechanism will affect the content elements of this idea, and that the same mechanism should work in reverse as the perceived meaning of the work increases. Probably worth noting that false meaning that is easy to perceive will also be effective under this model, which explains a lot about some of startup culture's peccadilloes.

Comment by ryan_b on Why did we wait so long for the threshing machine? · 2021-07-07T20:40:57.128Z · LW · GW

Re-reading this, the "Manufacturing Systems" explanation seems to agree with one of the stories of economic development in poorer countries. Quoting two relevant sections from Scott Alexander's review of How Asia Works:

Imagine having to start your own car company in Zimbabwe. Your past experience is "peasant farmer". You have no idea how to make cars. The local financial system can muster up only a few million dollars in seed funding, and the local manufacturing expertise is limited to a handful of engineers who have just returned from foreign universities. Maybe if you're very lucky you can eventually succeed at making cars that run at all. But there's no way you'll be able to outcompete Ford, Toyota, and Tesla. All these companies have billions of dollars and some of the smartest people in the world working for them, plus decades of practice and lots of proprietary technology. Your cars will inevitably be worse and more expensive than theirs.

. . .

Aren't there good free-market arguments against tariffs and government intervention in the economy? The key counterargument is that developing country industries aren't just about profit. They're about learning. The benefits of a developing-country industry go partly to the owners/investors, but mostly to the country itself, in the sense of gaining technology / expertise / capacity. It's almost always more profitable in the short run for developing-world capitalists to start another banana plantation, or speculate on real estate, or open a casino. But a country that invests mostly in banana plantations will still be a banana republic fifty years later, whereas a country that invests mostly in car companies will become South Korea.

In terms of doing the thing, the only meaningful difference between someone in an undeveloped country now and the people in 1800 is their complete certainty that the thing can be done. For that matter, the same rules apply to me for anything I try to make in my garage.

The ease with which the idea applies across scale and distance is a point in its favor, to me.

Comment by ryan_b on How can there be a godless moral world ? · 2021-06-25T20:52:01.853Z · LW · GW

I don't want to scare off any theists who are willing to try the scout mindset

In support of the mod position, this is one of the motivations for it in the first place. The tone of the community was actively hostile to theists, and we reasonably predicted that demanding people abandon their identities as the first step in community engagement would have an extremely low success rate. I can't speak with authority, but I feel like at least a few of the mods personally knew theists who struggled with that exact issue, which informed their position.

I also have the impression, which may be mistaken, that people following your path was a hoped-for outcome and that this would be aided by our enforcement of a taboo. It feels to me like the separate magisterium is the natural consequence of our current state of affairs.

Comment by ryan_b on How can there be a godless moral world ? · 2021-06-23T22:52:07.991Z · LW · GW

I should taboo the word "morality" in my upcoming post

No need to be hasty; you have merely stepped squarely into one of the community's bugaboos.

A little historical context: this place used to be aggressively atheist, so much so that only a handful of people of faith hung around. In time we determined this to be counter-productive, because these are rarely the kinds of conversations that change anyone's mind, and it distracted from our true purpose of developing better ways of thinking. As a consequence pretty much anything about God became taboo, and the subject of moderation: the official position was (and mostly still is?) that even though it is a conversation worth having there are lots of places to have those conversations and this is not one of them.

Absent the legacy of the internet atheism wars, things settled down pretty well. Indeed you will find quite a bit of content on some key subjects: 

  • community: as a general matter for human welfare, and the welfare of this place in particular
  • morality/ethics: for people, for AGI, for groups
  • religion: for the effects on the previous two things, and naturally the legacy atheist stuff

It's normal for people to make reference to biblical stories or rabbinical argumentation for germane examples and jokes alike.

You'll also find quite a bit of moral discussion over at our sibling website, the Effective Altruism Forum. There's a lot of interrogation of questions like the moral value of animals, and of far-future humans. Some of this tackles problems like where the morality comes from, and these are good subjects to peruse because in order to argue for something outside the norm you need to establish how the normal works, so it can be extended.

You might be better served by tabooing God, so people stay focused on the arguments. If you were to try and get the same reasoning you are using now, but without terminating at God specifically, what do you think it would look like?

Comment by ryan_b on How can there be a godless moral world ? · 2021-06-21T16:32:18.216Z · LW · GW

Welcome to LessWrong! I see you have chosen trial by metaphorical fire.

I'm interested in learning more about your thoughts surrounding the question. The other answers have asked about the moral world phraseology but I'd like to be a bit more specific; would you say that:

  • Morality is a thing the world has, like the electromagnetic spectrum or gravity
  • Morality is a thing a person has, like being able to smell, or maybe like fear
  • Morality is a thing a group of people have, like relationships or clubs

I see elsewhere that you mentioned not knowing why murder is immoral, but then being able to tell murder is bad because of feelings. What do you think these feelings might be, and do you see them as related to the morality question at all? What about the difference between things we might guess the killer to feel (like guilt, remorse, or compunction) with the things the loved ones of the victim might feel (grief, sorrow, or wrath)?

What are your thoughts on Adam, Eve, and the Tree of Knowledge of Good and Evil?

Comment by ryan_b on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-14T16:45:24.794Z · LW · GW

I would agree; anything that cuts months and potentially hundreds of people makes it easier for new entrants. Further, the trend appears strongly in the direction of outsourcing, as even Intel will now build others' designs. I see no reason why this could not be done on a contracting basis as well. The primary obstacle is the low appetite for the private sector to make large investments in physical things. Intel and TSMC's new investments are largely defense motivated.

I agree that this particular sort of chip optimization is suited more for narrow AI than AGI; my claim is rather that anything which employs narrow AI is more vulnerable to AGI takeover. It seems likely to me that AGI would have an interest in production of processing power, so it seems like automating the steps is lowering the threshold.

I also consider that this kind of development is exactly what the CAIS model predicts. If CAIS is a system of narrow AIs, including coordinator/management AIs, what stops a misaligned or malevolent coordinator AI from interacting with already-existing narrow AIs? The malevolent case could be as straightforward as an ML redux of Stuxnet.

All of this rests pretty heavily on the crux that once one AI runs a task, it is easy to replace it with another AI; if this effect is weak, or I am completely wrong and it is in fact harder, then the chain of logic falls apart.

I see this as analogous to the points you made in the embodied intellectual property post comments: what we think we are doing is making more efficient use of resources, but what we are actually doing is engaging in a tradeoff of gaining time and money in exchange for living with a more opaque method of controlling the work. Within this more opaque method, additional risks lie.  A more specific analogy to the Portuguese sailing technology commentary in the Conquistadors post feels achievable, but it isn't coming together for me yet.

Comment by ryan_b on Answering questions honestly given world-model mismatches · 2021-06-13T21:12:17.882Z · LW · GW

Woo for in-the-weeds thinking being documented!

Comment by ryan_b on ML is now automating parts of chip R&D. How big a deal is this? · 2021-06-13T21:05:09.783Z · LW · GW

In the short term: moderately big deal. The chip industry is currently in rather a lot of flux: Intel was supplanted as leader in transistor size by TSMC; Apple is running with their own chip designs; China's monopoly on rare-earth mineral processing has come under scrutiny again. This has provoked a boom in new development. Even a small improvement in the design and manufacture of these facilities weighs a lot; because the chip industry is so important and so centralized, moderately big deal is essentially the floor for any actual development within it.

In the long term: big deal. This is not an opinion shared by anyone else as far as I can tell, but it feels very clear to me that "people use ML for this application" is the threshold at which the hardware overhang is almost immediately accessible to AGI. At that point the AGI is literally an upgrade operation, as opposed to having to go through the entire process of converting a workflow to something an AI of any type at all can work on. To be more concrete: I expect any kind of AI-driven takeover to control all of the currently-uses-ML industries before taking over any that do not; and I expect that within currently-uses-ML industries the order will be determined largely by how saturated they are with tools of that kind.

Comment by ryan_b on Rogue AGI Embodies Valuable Intellectual Property · 2021-06-13T18:56:09.354Z · LW · GW

I still love the conquistador post, and it was good to read through it again. I agree strongly that direct framings like "more resources" or "more power" are wrong. I feel like we would make more progress if we understood why they were wrong; especially if we could establish that they are wrong on their own merits. I have two intuitive arguments in this direction:

I am strongly convinced that framings like resources, money, or utilons are intrinsically wrong. When people talk in these terms they always adopt the convention common to economics and decision theory where values are all positive. The trouble is that this is just a convention; its purpose is ease of computation and simplicity of comparison. This in turn means that thinking about resources in terms of more-or-less has no connection whatever to the object level. We are accidentally concealing the dimensionality of the problem from ourselves.

I am also strongly convinced that our tendency to reason about static situations is a problem. This is not so much intrinsically wrong as it is premature; reasoning about a critical positioning in a game like Chess or Go makes sense because we have a good understanding of the game. But we do not have a good understanding of the superintelligence-acting-in-the-world game, so when we do this it feels like we are accidentally substituting intuitions from unintended areas.

On the flip side of the coin, these are totally natural and utterly ubiquitous tendencies, even in scholarly communities; I don't have a ready-made solution for either one. It is also clearly not a problem of which the community is completely unaware; I interpret the strong thread of causality investigation early on as being centered squarely on the same concerns I have with these kinds of arguments.

In terms of successes similar to what I want, I point to the shift from Prisoner's Dilemma to Stag Hunt when people are talking game theory intuition. I also feel like the new technical formulation of power does a really good job of abstracting away things like resources while recapturing some dimensionality and dynamism when talking about power. I also think that we could do things like try to improve the resources argument; for example the idea that private sector IP is a useful indicator of AGI suggested in the OP is a pretty clever notion I had not considered, so it's not like resources are actually irrelevant.

Comment by ryan_b on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-12T21:49:10.154Z · LW · GW

I strongly support this line of investigation.

That being said, I propose that avoiding multi/multi dynamics is the default for pretty much all research - by way of example consider the ratio of Prisoner's Dilemma results to Stag Hunt results in game theory. We see this even in fields where there is consensus on the individual case, and we don't have that yet.

The generic explanation of "it is harder so fewer people do it" appears sufficient here.

Comment by ryan_b on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-12T21:29:29.711Z · LW · GW

Mistake Theorists should also be systematically biased against the possibility of things like power dynamics being genuinely significant

This surprises me, because it feels like the whole pitch for things like paperclip maximizers is a huge power imbalance where the more powerful party is making a mistake.

Comment by ryan_b on Rogue AGI Embodies Valuable Intellectual Property · 2021-06-11T19:00:04.183Z · LW · GW

This point feels related to the AlphaGo behavior everyone puzzled over early where it would consistently win by very few points.

I have this head-chunked as approximately undercutting the opponent until they hit the cost of victory.

Comment by ryan_b on How concerned are you about LW reputation management? · 2021-05-21T16:47:45.512Z · LW · GW

Relax, you did great!

I encourage you to post the quarter-baked ones sooner rather than later.

  • The community hasn't rejected something unless you land in the negatives, which is pretty rare.
  • I've never been anyplace better at rewarding clear expressions of where thoughts are incomplete. Further, if you write incomplete thoughts accompanied by questions, there are often comments containing helpful further reading, even on issues not core to the community's interests.
  • I do not have advanced training in formalism or anything, so earlier posts which spend more effort describing the intuitions and false starts are the most valuable stage for me to consume in the intellectual pipeline.
  • Think of quarter-baked like a cheap test: if it gets some votes anyway, people are interested. Then the next post will be better, and most of us like ideas being carried farther along in development. I'm pretty sure this is an aesthetic thing entirely separate from the idea itself.

Comment by ryan_b on ryan_b's Shortform · 2021-05-13T15:23:53.049Z · LW · GW

This appears to be the strategy the organizations dedicated to darkening are pursuing.

Popular example: many streetlights throw light directly into the sky or sideways. They try to advocate for choosing the more expensive streetlights that effectively direct the light to the walking areas people actually use. The net result is usually a better-illuminated sidewalk.

I've seen research and demo houses that employ the different-light-over-the-course-of-the-day approach, but I have not seen anything about trying to get developers to offer it. In this way it falls into the same gap as any given environmental improvement in home/apartment building: people would probably buy it if it were available, but they don't have the option to select it because it isn't; there's no real way to express demand to developers short of choosing to do independent design/builds.

I feel like this should also be handled by a contractor, but there's not much in the way of 'lighting contractors' to appeal to; it seems like an electrician would have to shift their focus, or an interior decorator would have to expand into more contractor-y work.

Comment by ryan_b on Could MMRPGs be used to test economic theories? · 2021-05-12T17:50:27.968Z · LW · GW

The significance of value destruction in Eve is that it provides continuous opportunity for players to produce value.

This is similar to selling gear to NPC merchants in most games; selling stuff to the NPCs destroys the thing, and new things similar to it are harvested from the environment. Since this is the default, and is often relatively symmetric, there's only as much opportunity for players to produce value as there is value routinely destroyed by the players.

This is surely just a heuristic in most cases; games from Diablo to D&D have long suffered from a gold inflation problem because no one troubled to balance this.

Comment by ryan_b on ryan_b's Shortform · 2021-05-12T15:03:01.077Z · LW · GW

Communication Pollution

I think social media has brought the development of communication to a point where we can now speak of communication pollution. I model this on light pollution:

In short, communication is so convenient there is no incentive to make it good. Since there is no incentive to make it good, the bad (by which I mean low-quality) radically outnumbers the good, and eats up the available attention-bandwidth.

Comment by ryan_b on Covid 5/6: Vaccine Patent Suspension · 2021-05-07T17:03:10.521Z · LW · GW

A little more context on the Army vaccine reticence, based on an enlisted experience 2007-2012:

The base level of trust among soldiers is much lower than among civilians with respect to vaccines. There are a few reasons for this:

  • Huge victories like smallpox are not factors in our thinking, because we still get the smallpox vaccine. This is because it still exists in weapons stockpiles.
  • The military in general and Army in particular are super awful about messaging. Partially this is a matter of institutional ignorance, but mostly this is a matter of the communication arms being staffed by people who are chosen for the relevant training largely by lottery among the least qualified people available, and then assigned also largely by lottery. Most jobs that aren't combat-related are like this.
  • They badly bungled their last major vaccine push, the Anthrax Vaccine Immunization Program.

I was there through the tail end of the Anthrax fiasco. For clarity, I took the vaccine without objection, but it was clearly being managed poorly; they went light on medical justification and heavy on threats and non-judicial punishment, issuing blanket denials of any inconvenient facts while being unable to articulate anything positive. I have no source for this, but the rumors going around at the time said that 4,000 guys in the first wave got erectile dysfunction, which couldn't have been more effective at terrifying a bunch of 18-25 hard-living dudes if it had been designed by a civilization with far more knowledge of group behavior than we now possess.

Comment by ryan_b on Could MMRPGs be used to test economic theories? · 2021-05-06T16:15:40.557Z · LW · GW

I second the Eve Online suggestion as an example of what you are looking for. Relevant features of the game are: 

  • Wealth is the measure of player progress, and the purpose of player activity. This is because experience accumulates with time rather than kills or quests, and you simply buy the necessary skills when you have the prerequisites. Further, if your ship is destroyed it is gone - you have to buy a new one, though wrecks can be salvaged for parts.
  • Players participate in almost every level of the economy: they manufacture most of the ships and many of the weapons; they extract most of the raw materials used to build those ships; etc.
  • There is an explicit sector of traders, of both the merchant and speculative variety. The game has explicit transaction costs, and the skills in this area are centered around reducing those transaction costs.
  • The game deals with hard-to-reconcile social values which compete with economic ones (PvP is a wealth-destroying activity, and is called "content" because it is usually more entertaining than mining space rocks).
  • Players responding to economic incentives has several times unbalanced the game in such a way the game company needs to intervene. For example the most recent changes are being imposed to make raw materials more rare, because player mining got so efficient that ship prices plummeted across the board, resulting in wars among player alliances lasting forever because no one would ever run out of hulls.
  • The company, CCP, keeps at least one full time economist on staff, whose job is to help model the economy of the game.

Based on this example, the things a game needs to be able to show complex economic behavior are:

  • Players need to be able to both add and destroy value, which is to say there needs to be player driven production as well as material losses.
  • Players need to be able to exchange value, which means being able to buy and sell almost all things within the game.
  • The parts of the game need to interact, so that changes in value in one segment of the game can affect completely separate parts of the game.

Comment by ryan_b on [link] If something seems unusually hard for you, see if you're missing a minor insight · 2021-05-06T14:27:51.041Z · LW · GW

  1. Ensure your mouth is not dry, so the pills do not stick. If dry, drink water first.
  2. Get the pills as far back in the mouth as you can, so you don't have to manipulate them with your tongue as much. The optimal outcome is to put the pills between the swell of the tongue and where the gag reflex triggers, to enable swallowing with just the moisture present in your mouth.
  3. Have a glass of water handy. You can take a large sip, ensure it picks up the pills, and then swallow the water rather than the pills; the pills are just carried down with it.
  4. In case of a pronounced gag reflex that prevents putting the pills in the back of the mouth, they can be placed under the tongue to avoid tasting them, and then deploy the water trick. Take as much water as needed.

Comment by ryan_b on Facebook is Simulacra Level 3, Andreessen is Level 4 · 2021-04-30T15:14:08.078Z · LW · GW

That's an interesting point; I wonder if there is a broader correlation between higher simulacra levels and narrowing options like this.

Intuitively it feels like the opposite should be the case; I had vaguely felt like the point of going up a level was to get more options. But then, that doesn't make any acknowledgement of the object level options.

Comment by ryan_b on Facebook is Simulacra Level 3, Andreessen is Level 4 · 2021-04-28T19:05:25.207Z · LW · GW

It looks to me like Zuckerberg has the better answer for this by far. Andreessen's method seemingly gives no thought to outcomes (based on the segment); he's just spending however much of a nominally-infinite resource he needs to get through the issue at hand.

By contrast, Zuckerberg's method is trying to close the loop, by which I mean acknowledge the social reality and try to bring it back to its consequences in object-level reality.

It feels like building stable social-object reality loops is the winning play for groups.

Comment by ryan_b on How to Play a Support Role in Research Conversations · 2021-04-28T18:53:30.405Z · LW · GW

Thinking of myself as an upgraded rubber duck is an absolute stitch!

Comment by ryan_b on Why don't we vaccinate people against smallpox any more? · 2021-04-21T00:51:15.225Z · LW · GW

Because the only full-blown cases of smallpox came from people who developed it as a consequence of the vaccine. At least for a period after 1980, there was a standing reward of $10,000 for any naturally occurring case of smallpox.

In terms of how the world would respond: strategic reserves of the smallpox vaccine are kept in various places; it is still provided to the US military due to its presence in weapons stockpiles, and defense prioritization and funding would be put to work in the event of a suspected weapons-grade outbreak.

Comment by ryan_b on Another (outer) alignment failure story · 2021-04-20T18:30:00.140Z · LW · GW

I hugely appreciate story posts, and I think the meta/story/variation stack is an excellent way to organize it. I would be ecstatic if this were adopted as one of the norms for engaging with this level of problem.

Comment by ryan_b on Covid 4/15: Are We Seriously Doing This Again · 2021-04-15T21:17:58.292Z · LW · GW

I hope that, eventually, in the distant future, someone holds someone else accountable.

If the 21st century of American history has a tagline, this will be it.

Comment by ryan_b on The irrelevance of test scores is greatly exaggerated · 2021-04-15T20:47:20.053Z · LW · GW

I gotta say, I never get tired of epistemic walkthroughs of peer-reviewed papers. Upvote for you!

Comment by ryan_b on A New Center? [Politics] [Wishful Thinking] · 2021-04-15T18:39:09.494Z · LW · GW

If you were to go to the national level, absolutely. But I expect that a local-level experiment could be done entirely part-time on a volunteer basis. I expect this because the local-level major party apparatus is usually a part-time volunteer operation. Further, the threshold for success is much, much lower: you can achieve kingmaker status in a lot of locales by forging a bloc of a score of votes.

Comment by ryan_b on How & when to write a business plan · 2021-04-15T16:26:22.187Z · LW · GW

This and the previous post put much more emphasis on the plan than other entrepreneurial advocacy I have read, but I deeply appreciate the complete lack of emphasis on software which otherwise saturates the space.

Comment by ryan_b on Is Rhetoric Worth Learning? · 2021-04-14T22:04:12.270Z · LW · GW

Just over three years after this post was published, I returned to it and switched from a regular upvote to a strong upvote.  The post is well written and engaging, and it appears to me that it continues to be highly relevant.  The proximate cause was the post over at the EA Forum about an EA debate competition; there were a lot of well-articulated and popular concerns about debate as an activity, the best of which, if I understand it correctly, expressed the following true concern:

Methods of communicating that are not truth-seeking compete for our mastery with methods that are. Which is to say, by spending time on symmetric weapons like rhetoric, we are forsaking time spent on advancing the truth.

I think this is a mistake, and that the value of rhetoric spoken-or-written to pursuit of the truth is being neglected. I publicly register my intent to write a post on this.

Comment by ryan_b on What Multipolar Failure Looks Like, and Robust Agent-Agnostic Processes (RAAPs) · 2021-04-14T21:38:58.433Z · LW · GW

I would like to register a preference for resilient over robust. This is because in everyday language robust implies that the thing you are talking about is very hard to change, whereas resilient implies it recovers even when changes are applied. So the outcome is robust, because the processes are resilient.

I also think it would be good to align with the logistics and systems engineering literature suggested in the Google link, but how regular people talk is my true motivation, because I feel like getting this to change will involve talking to a lot of people who are not experts in any technical literature.

Comment by ryan_b on Predictive Coding has been Unified with Backpropagation · 2021-04-14T20:37:08.646Z · LW · GW

I see in the citations for this they already have the Neural ODE paper and libraries.  Which means the whole pipeline also has access to all our DiffEq tricks.

In terms of timelines, this seems bad: unless there are serious flaws with this line of work, we just plugged our very best model of how the only extant general intelligences work into our fastest-growing field of capability.

Comment by ryan_b on A New Center? [Politics] [Wishful Thinking] · 2021-04-14T15:53:27.179Z · LW · GW

Meta level: strong upvote, because I strongly endorse this kind of thinking (actionable-ish, focused on coordination problems); I am also very excited that we are now showing signs of being able to tackle politics reliably without tripping over our traditional taboo.

Object level: I wonder if you'd consider revising your position on the not-a-party point. Referring to your comment else-thread: 

Instead, the proposal is to organize a legible voting bloc. More like "environmentalists" than "the green party".

Environmentalists are a movement, and not an organization; the proposal is for an organization. They are a single-topic group that tackles a narrow range of policies; the proposal shows no intention of isolating itself to a narrow range of policies.

What you have proposed is an organization that will recruit voters and establish internal consensus on a broad range of policies, with the goal of increasing their power as voters; you also intend to compete directly with the two major parties on their values. Finally, there are no environmentalist kingmaker organizations precisely because there are lots of environmental organizations, which means the positions of any individual environmental organization are not particularly meaningful in elections; this means your organization will need to compete with, or co-opt voters from, other organizations with similar values and goals.

I put it to you that the most natural fit for what you are proposing is a new political party which chooses not to put candidates on the ballot.

This is an ingenious strategy, in my view: by not advancing candidates, the organization is liberated from the focus on winning campaigns, and it is the focus on winning campaigns that drives most of the crappy behavior from the major parties.  At the same time, creating a legible bloc of voters does a marvelous job of avoiding direct competition while capitalizing on the short-term incentives direct competition creates.

This looks to me very much like a political party that takes the short-term hit of not directly holding office in exchange for the freedom to place longer-term bets on values and policy overall.  As you observed with third-party viability, winning office is unlikely and so not even trying is not much of a hit, and the potential upside is big.

Comment by ryan_b on A New Center? [Politics] [Wishful Thinking] · 2021-04-14T14:27:13.543Z · LW · GW

The short answer is the same thing that prevents the target audience from joining the reds or the blues and influencing them in the direction they would prefer: too much work.

But based on the idea so far, I claim this is a requirement for effectiveness. In order to get either party to change their behavior, they need to have a good understanding of what this group of swing voters want, and that requires getting an inside view.

It is much, much harder to persuade a group of people than it is to simply tell them what they want to hear.  You will be encouraged to know that this is the formal position of virtually all political operatives, because their unit of planning is an election campaign and research shows that is too short a time to effectively persuade a population of voters.

It would also be super weird if, when targeting disaffected voters in the middle, there were no converts from the disaffected margins of either major party (who presumably would still naturally advocate for the things that drew them to the party in the first place, which is almost the same as a true believer in the party advocating). This too is a desirable outcome.

Comment by ryan_b on People Will Listen · 2021-04-12T15:53:26.017Z · LW · GW

I have no idea if this is the answer, but there's a cluster of investing discussion on the EA side around mission hedging. That may be relevant.

Comment by ryan_b on Generalizing POWER to multi-agent games · 2021-04-08T15:12:11.297Z · LW · GW

I find this line of research tremendously exciting, and look forward to every new post in this vein.

As ever, I favor the ease with which this can be pointed at other problems in addition to AI. It feels intuitively like power-scarcity will allow us to get finely graded quantitative results for all sorts of applications.

Comment by ryan_b on Rationalism before the Sequences · 2021-04-06T20:09:31.700Z · LW · GW

I agree with this comment. There is one point that I think we can extend usefully, which may dissolve the distinction with Homo Novis:

I think there's a key skill a rationalist should attain, which is knowing in which environments you will fail to be rational, and avoiding those environments.

While I agree, I also fully expect the list of environments in which we are able to think clearly should expand over time as the art advances. There are two areas where I think shaping the environment will fail as an alternative strategy: first is that we cannot advance the art's power over a new environment without testing ourselves in that environment; second is that there are tail risks to consider, which is to say we inevitably will have such environments imposed on us at some point. Consider events like car accidents, weather like tornadoes, malevolent action like a robbery, or medical issues like someone else choking or having a seizure.

I strongly expect that the ability to think clearly in extreme environments would have payoffs in less extreme environments. For example, a lot of the stress in a bad situation comes from the worry that it will turn into a worse situation; if we are confident of the ability to make good decisions in the worse situation, we should be less worried in the merely bad one, which should allow for better decisions in the merely bad one, thus making the worse situation less likely, and so on.

Also, consider the case of tail opportunities rather than tail risks; it seems like a clearly good idea to work on extending rationality to extremely good situations that also compromise clear thought. Things like: winning the lottery; getting hit on by someone you thought was out of your league; landing an interview with a much-sought-after investor. In fact I feel like all of the discussion around entrepreneurship falls into this category - the whole pitch is seeking out high-risk/high-reward opportunities. The idea that basic execution becomes harder when the rewards get huge is a common trope, but if we apply the test from the quote it comes back as "avoid environments with huge upside," which clearly doesn't scan (but is itself also a trope).

As a final note - and I emphasize up front I don't know how to square this exactly - I feel like there should be some correspondence between bad environments and bad problems. Consider that one of the motivating problems for our community is X-risk, which is a suite of problems that are by default too huge to wrap our minds around, too horrible to emotionally grapple with, etc. In short, they also meet the criteria for reliably causing rationality to fail, but this motivates us to improve our arts to deal with it. Why should problems be treated in the opposite way as environments?

So I think the Homo Novis distinction comes down to them being in possession of a fully developed art already; we are having to make do with an incomplete one.

For now.

Comment by ryan_b on Logan Strohl on exercise norms · 2021-03-31T14:12:21.552Z · LW · GW

I strongly agree with the claim, even if we differ on the motivations. I cultivate a sense of shame myself.

Come to think of it, I also deploy my sense of shame with respect to exercise. Following on Rob's questions, it could probably be considered private.

Comment by ryan_b on Logan Strohl on exercise norms · 2021-03-30T21:42:00.383Z · LW · GW

Welp, I've clearly botched this, for which I apologize. To start with, I never meant to make any assumptions about what Logan was thinking, but I can clearly see where that was what I communicated despite myself. This was an unforced error on my part.

I can't get the In Defense of Shame post, because I don't have Facebook, but I'd be keen to read it - do you know if it was reposted anywhere else? I was unable to locate it at Agenty Duck or here. However, if it is about the book In Defense of Shame, then I was talking about the first of the two dogmas mentioned (which the authors reject).

What I meant to be talking about was the language drift between the past and present, though I now see Logan wasn't using any more of a standard use of shame than I was. From the Shame Processing link, I see this:

According to me, shame is for keeping your actions in line with what you care about. It happens when you feel motivated to do something that you believe might damage what is valuable (whether or not you actually do the thing).

Shame indicates a particular kind of internal conflict. There's something in favor of the motivation, and something else against it. Both parts are fighting for things that matter to you.

This is very interesting: on the one hand, it is closer to what I mean by guilt than what I mean by shame; on the other hand, it's about reconciling competing priorities, which is supposed to be one of shame's attributes over guilt.

I'm sad about the lack of a social element, but I was sad about that beforehand.

Regarding social and private shame: I think I agree that a utopian society would make use of social shame, but there's a bunch of conditions attached to enable that good use which we now lack. That being said, I'll consider the problem; I have an ongoing related reading list that should let me come to grips with the idea better.

The idea of private shame is interesting; even in the sense that I was using shame, I'm not sure I'd oppose such a notion. It's really the suggestion of exclusively-private shame, or anti-social shame, with which I would quibble.

Comment by ryan_b on Logan Strohl on exercise norms · 2021-03-30T19:26:19.784Z · LW · GW

Oh, the meaning wasn't ambiguous; I understood that exactly to be what Logan meant. What I am saying is that this is completely different from how shame (in the past) has been publicly understood. Shame doesn't have any valuable effects without anyone else knowing; it is dependent on relationships to have value, and mostly concerns the obligations to other people that come with them.

But it does make complete sense to me if shame is being used as a synonym for guilt, which is the norm in the US and especially on the internet.

To elaborate on my claim a bit: I say shame and guilt are different emotions.

  • Guilt is the feeling we have when we do something morally bad, or fail to do something morally good. If we consider that lying is morally bad, and following from the OP that exercise is morally good, then if I lie or fail to exercise I should feel guilty. This is true regardless of what anyone else knows or says.
  • Shame is the feeling of letting people down. It is about reputation and the obligations we have to our people (by which I mean family, friends and community). Shame is what I would feel if I were to be caught in that lie, or if someone I cared about knew I told it. Guilt and shame aren't mutually exclusive: suppose I were a member of a running club, and decided to skip one day and do something else instead - but we see each other as they run by and I am sitting having a beer. Now I feel guilty and ashamed at the same time for the same event: guilt for skipping the run, shame for disappointing my club.

I propose a test: reflect on the last time you did something you felt bad about; then imagine someone important to you, who values things like you do, learning about it. This probably feels worse overall. The question is whether it is the same bad feeling only more intense, or if it is a different bad feeling. If it's a different bad feeling when someone else knows, then I think there is value in distinguishing between guilt and shame.

None of this makes Logan's statement bad or wrong; I quite agree with the intended meaning. I only commented because that particular reason highlighted in that particular circumstance threw into sharp relief this difference between guilt and shame, which is otherwise an idiosyncratic interest of mine.

Comment by ryan_b on Logan Strohl on exercise norms · 2021-03-30T16:12:12.593Z · LW · GW

Not germane to the subject at hand, but this stuck out to me:

I think shame is a beautiful and powerful psychological process that probably ought to be treated as personal and intimate, much like recountings of first lovemakings. Trying to use it as a public tool to make people act how you want them to seems to break it.

This flies directly in the face of the historical record. Until modernity began, shame was explicitly and strategically a public tool almost everywhere.

That being said, I am pretty sure this is a case of using shame and guilt as synonyms. I am struck again, and confused again, by the difference between public and community; shame strongly requires community mechanisms to work, whereas guilt is supposed to work completely independently of them (as an emotion, at least). Neither mechanism would work based on the words of internet strangers, which seems to be the dominant implication of the word "public" now.

Comment by ryan_b on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T22:20:32.621Z · LW · GW

News media broadcast things like polling locations, times, and procedures. They do very little in terms of what candidates stand for which positions in a level of detail sufficient to distinguish primary candidates. By contrast, the parties simply provide a list saying which candidates to vote for in elections where that isn't already clear from the ballot itself.

If California and Washington prohibit distributing literature outside polling places, this effect is definitely less strong; but that is just a weaker push towards partisanship, not a push away from it.

Comment by ryan_b on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T20:42:43.858Z · LW · GW

That finding does not surprise me, because parties are still the primary mobilizers of votes and distributors of voting information. It seems to me we shouldn't expect any countervailing influence against partisanship until one party switches to an election strategy where they focus on expanding the electorate and it pays off.

Comment by ryan_b on Jean Monnet: The Guerilla Bureaucrat · 2021-03-21T20:37:24.212Z · LW · GW

I wonder how this interacts with our crisis mode of governance. I can't speak for the British or French examples, but in the United States at least in the 1800s our concept of crisis was radically more relaxed. For example, in the period leading up to the Civil War, there were a lot of fraudulent elections as a result of things like people from the Missouri Territory coming down as a militia and stuffing ballots in Kansas; for a while Pennsylvania had two legislatures with their own militias who were skirmishing constantly. All of this fell beneath the threshold of something the Federal government saw fit to take a hand in.

At least rhetorically we are prone to treat almost everything as some kind of crisis. I wonder about the degree to which governments operating in the modern media environment are hampered in their ability to recognize a crisis when it is upon them. If crisis recognition is hampered, I expect it to weaken this avenue, which seems to bode strictly ill.

Comment by ryan_b on Process Orientation · 2021-03-21T19:03:04.415Z · LW · GW

I'm going to call the type of management where codifying the steps of a workflow is a goal in and of itself the "procedure" orientation. There are managers at my job who are like this; they do not care in any meaningful way about outcomes, or even whether the procedure is actually achievable, considering the establishment and verification of procedure to be the desired end state. The charitable interpretation is that they are looking for legibility (they need to be able to see what is going on) and consistency (everyone needs to do the same thing).

My first impression is that when shifting from a results orientation or a procedure orientation to a process orientation, the overwhelming factor will be how well the translation from results to decisions goes.

As a note, in my experience people of the procedure orientation refer to themselves as process people. We're going to need a different name in order to keep this from being consistently misunderstood or co-opted by them.

I nominate: decision process orientation

Or simpler: decision orientation

Comment by ryan_b on Politics is way too meta · 2021-03-18T17:00:23.373Z · LW · GW

I don't speak for Rob, but my guess is that your work still qualifies as object level because it is directly about the mechanisms of voting and how those votes are counted. In other words, your work is about voting policy.

Comment by ryan_b on Politics is way too meta · 2021-03-18T16:50:33.315Z · LW · GW

I think it is worth pointing out that "who's winning" is a very badly understood thing in the first place, with very little actual expertise to go around. This is true of pretty much every facet of the system north of policy, and it feels like it gets worse the higher up you go.

Actual political campaigning is the most concrete thing above policy, with a lot of money and a lot of practical experience, and it still has very little expertise to work with. The expertise we have is mostly a matter of heuristics built from statistical regularities. I think it would be fair to categorize failed political strategies as often running afoul of some type of Goodhart; it is common to bet too hard on one or more of these regularities.