Comment by ryan_b on How could shares in a megaproject return value to shareholders? · 2019-01-18T19:39:55.101Z · score: 2 (1 votes) · LW · GW
The organization sponsoring the project is going from the current situation where they can at least lie and pretend that the possibility exists that they'll come in underbudget, to one where they're guaranteed to burn >=100% of the budget if only because they've got to give away the leftovers at the end.

This is a feature, not a bug. We prefer the world where the project never gets funded in the first place to the world where it gets funded based on lies and then runs to 150% of the budget while providing only 50% of the benefit.

Middle managers on the project could now easily benefit financially by shorting the equities and sabotaging the project.

Management can do this now in regular companies, and it isn't a problem: it mostly doesn't occur to them; it damages reputations to appear incompetent; it is also a crime.

Comment by ryan_b on Open Thread January 2019 · 2019-01-18T19:29:20.784Z · score: 2 (1 votes) · LW · GW

From Lewis:

Katsuyama asked him a simple question: Did BATS sell a faster picture of the stock market to high-frequency traders while using a slower picture to price the trades of investors? That is, did it allow high-frequency traders, who knew current market prices, to trade unfairly against investors at old prices? The BATS president said it didn’t, which surprised me. On the other hand, he didn’t look happy to have been asked. Two days later it was clear why: it wasn’t true. The New York attorney general had called the BATS exchange to let them know it was a problem when its president went on TV and got it wrong about this very important aspect of its business. BATS issued a correction and, four months later, parted ways with its president.

Emphasis mine. I interpret Lewis' claim to be that BATS was helping HFTs extract money from non-HFT investors. The benefit to which I referred is the fee BATS was paid for the faster market picture and the increased trading volume generated as a consequence.

More broadly and aside from allegations of specific wrongdoing, the claim is that HFT is just shaving the margins of everyone who makes trades more slowly. This argument makes sense to me; I can see no added value in giving preferential information to one market participant over others. It isn't as though HFT is providing a service by helping disseminate information faster - most of the action taken by regulators on the subject was because of exchanges not informing investors about whatever they were doing. My naive guess is their only real impact on the market is to slightly amplify the noise.

That being said, I can easily imagine HFT competing to a profit margin of zero and thereby solving itself, and I can also imagine that there would be other uses for the technology once other types of algorithms were introduced. I can also imagine that the regulatory burden would be greater than the damage they do so it wouldn't be worth it to ban them.

Which is why IEX was the focus of my interest. They are competing on the basis of countering this specific practice, and they seem to be doing alright.

How could shares in a megaproject return value to shareholders?

2019-01-18T18:36:34.916Z · score: 8 (2 votes)
Comment by ryan_b on Buy shares in a megaproject · 2019-01-17T19:48:25.136Z · score: 2 (1 votes) · LW · GW
A working prediction market makes it much harder to lie to the backers (and harder for proponents to lie to themselves) about probability of success and magnitude of impact.

I agree with this. The reason I do not find it persuasive in the case of megaprojects is that in the current environment backers and proponents are motivated to lie, and the only people who are motivated to find the truth (private creditors) already do the best job of seeking it. As a result, the only group that would listen to the prediction market is also the group which experiences the lowest marginal gain from it. That being said it would still be a good idea to have a prediction market, because even a tiny improvement to the sector would be large in absolute terms.

The prediction market changes the information available to the participants, but does nothing to change their incentives. By contrast, a legal construct you can sell shares of is a restructuring of the incentives at the same time that it changes the information available.

Comment by ryan_b on Buy shares in a megaproject · 2019-01-16T20:41:24.596Z · score: 3 (2 votes) · LW · GW

Lots of megaprojects are profit motivated. Movies, oil & gas investments, mining, etc.

But even so, you are right that it is not obvious, and that is a problem I haven't solved. I had assumed that the same suite of project types would be called for as we currently have for corporations (for-profit, nonprofit, benefit), and that the incentives would shift between them.

There are a bunch of special types of contracts that the government uses for defense which might be useful for inspiration. Both for what to do and what not to do.

I considered prediction markets, but expect them to have approximately zero impact on the outcomes of projects. This is because almost all megaprojects are bad, everyone knows almost all of them are bad, and few people involved with them are trying to behave differently. Another source of information wouldn't change that because they aren't looking for more information; we need to shift the incentives of decision-makers and stakeholders directly.

Comment by ryan_b on Disadvantages of Card Rebalancing · 2019-01-16T17:08:30.377Z · score: 2 (1 votes) · LW · GW

I can see definite advantages to writing these posts and then using them as references for another post generalizing beyond game design. They will just be object-level context references instead of meta-level context references.

In fact, of the two arrangements, I think the object-level references would be more useful to me.

Comment by ryan_b on Open Thread January 2019 · 2019-01-16T16:58:54.334Z · score: 1 (2 votes) · LW · GW
If the biggest players on Wall Street are angry, it's because they'd rather trade with HFT than with Brad Katsuyama.

I'm confused by this. That is almost exactly the claim that Lewis and Katsuyama are making: the large exchanges are preferring HFT. The reason this is a problem is that it unilaterally disadvantages everyone who lacks similar trading speed, like individual investors or retirement funds. The exchanges seem to benefit from the increased volume and from direct payments from the HFT people for routing privileges.

Do you have a better source to recommend for how they operate in the market? I'm happy to dump this guy if I can get more reliable information.

Comment by ryan_b on Open Thread January 2019 · 2019-01-16T16:37:48.193Z · score: 2 (1 votes) · LW · GW

I feel like rejection-with-explanation is still an improvement over the norm.

Maybe it would be worthwhile to pull back and attack, directly and generally, the wrong intuitions Schwarz is using.

Buy shares in a megaproject

2019-01-16T16:18:50.177Z · score: 12 (5 votes)
Comment by ryan_b on Modernization and arms control don’t have to be enemies. · 2019-01-15T16:26:52.738Z · score: 2 (1 votes) · LW · GW

You may be interested in The Great American Gamble: Deterrence Theory and Practice from the Cold War to the Present by Keith Payne. It details the development of the deterrent paradigm with which we are familiar, and describes the differing thoughts of Schelling (who is otherwise popular here) and Herman Kahn. I have started but not finished the book, and it is very interesting.

It relates to your questions because nuclear arms development was driven by the European military situation. Summarizing from the book, the process went like this:

1. The Soviet Union had an overwhelming numerical advantage within easy reach of Western Europe.

2. Countering this advantage conventionally was deemed too expensive for the US and Western Europe, as the US was far away and Europe was rebuilding.

3. Nuclear weapons were stockpiled by the United States in order to retaliate against a conventional Soviet invasion. This was cheap enough to accomplish.

4. The Soviets developed and stockpiled nuclear weapons to deter any such retaliation.

5. The American ICBM program enabled first-strike capability, which would pre-empt a successful ground invasion.

6. The Soviet missile program also enabled first-strike capability, to deter such pre-emption.

7. Both sides developed second-strike capabilities to ensure first-strike capabilities were never used.

None of these calculations applied to China, which focused its military development on defending China proper from invasion. Further, all technical aid and support for China was withdrawn by the Soviet Union in the Sino-Soviet Split of 1959.

In short, China was not part of the strategic situation which most strongly motivated developing nuclear weapons, and both existing nuclear powers were motivated not to provide nuclear capability to it. China's first nuclear detonation came in 1964, and its first hydrogen bomb detonation in 1967, roughly 20 years behind the US.

Comment by ryan_b on Megaproject management · 2019-01-12T14:45:22.713Z · score: 4 (2 votes) · LW · GW

My intuition is that there wouldn't be much of a replacement effect, unless you count different groups becoming more likely to attempt megaprojects (because megaprojects are more successful) as a replacement effect.

I expect this for a few reasons. First, megaprojects are usually organized according to a specific need, and I would be surprised if a given stakeholder (like a city or a corporation) had a meaningful backlog. Second, the current amount of spending is an accident; I think this is a different case from one where they spent much less than they originally planned. Lastly, most of this is debt spending, and I feel like organizations don't go looking for ways to absorb all of their available credit.

It does occur to me that the debt point probably weighs against the EA value: the savings are effectively amortized over the length of the financing, and since the same amount won't necessarily be spent elsewhere, it isn't a direct benefit to anyone.

Comment by ryan_b on Open Thread January 2019 · 2019-01-11T19:47:46.639Z · score: 4 (2 votes) · LW · GW

Did anyone follow the development of the stock exchange IEX? I see they landed their first publicly traded company in October. I also see that Wall Street is building its own stock exchange, explicitly rejecting the IEX complaints.

I thought this was a very interesting story when it broke, because it is about how to do things better than the stock market does. If anyone with more financial background could comment, I'd be very interested to hear it.

For those completely unfamiliar, this exchange was built specifically to mitigate High Frequency Trading. There are some more details of the kinds of things that were happening in this article about the year after Flash Boys was published. Naturally, the people who employ high-frequency algorithms insist it is all hokum.

Comment by ryan_b on Book Recommendations: An Everyone Culture and Moral Mazes · 2019-01-11T19:15:04.555Z · score: 4 (2 votes) · LW · GW

This isn't a direct response to your question, I just had a thought about the "nothing inside the range of what we think of as a normal workplace" line.

There might be plenty of middle ground available, but I would expect virtually all of those solutions to consistently fail. I expect this because people are mostly going to continue doing what they were doing, with as few adjustments as possible. So people will usually do the same thing and just call it the new thing; if it is an additional thing, they will do the absolute minimum or ignore it completely; and if they do have to put real effort into the new thing, they will take that effort out of something else.

The appeal of radical solutions is that they make it very clear that both the process and the incentives have changed at the same time, so doing it the old way is impossible.

Megaproject management

2019-01-11T17:08:37.308Z · score: 45 (13 votes)
Comment by ryan_b on Megaproject management · 2019-01-11T16:59:52.794Z · score: 3 (2 votes) · LW · GW

I wonder about the suitability of this field as a target for EA careers. An unacceptably high percentage of that ~8% of GDP is wasted, and the picture gets worse when we entertain opportunity costs. Insofar as economic growth in general is good for alleviating suffering, the ability to prevent hundreds of millions of dollars in waste per project seems like a good deal.

The same mechanism occurs in developing countries, which are the traditional place to look for high impact interventions. It seems to me that in places without a lot of other infrastructure built already, and not a lot of capital to invest, the utilization and opportunity cost factors are bigger than they would be otherwise.

The newness of the field strongly suggests it is neglected, although I don't have any sense of how people are chosen to manage projects of this size, so even if the expertise is neglected, it still might be very difficult to apply because of network effects or the like.
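To make the scale concrete, here is a rough back-of-the-envelope sketch. Only the "~8% of GDP" share comes from the comment above; the world GDP figure, the waste rate, and the average project size are placeholder assumptions for illustration.

```python
# Back-of-the-envelope sketch of the potential value of better megaproject
# management. Only the "~8% of GDP" share comes from the comment above;
# every other number is an assumed placeholder.

world_gdp = 85e12            # assumption: roughly $85 trillion world GDP
megaproject_share = 0.08     # ~8% of GDP spent on megaprojects (from the comment)
assumed_waste_rate = 0.10    # assumption: 10% of that spending is avoidable waste
avg_project_size = 5e9       # assumption: $5 billion average megaproject

annual_spend = world_gdp * megaproject_share                 # about $6.8 trillion/year
annual_avoidable_waste = annual_spend * assumed_waste_rate   # about $680 billion/year
per_project_saving = avg_project_size * assumed_waste_rate   # about $500 million/project

print(f"Annual megaproject spend ~ ${annual_spend / 1e12:.1f}T")
print(f"Avoidable waste ~ ${annual_avoidable_waste / 1e9:.0f}B/yr, "
      f"or ~ ${per_project_saving / 1e6:.0f}M per project")
```

Even with deliberately modest assumed waste rates, the per-project figure lands in the "hundreds of millions of dollars" range mentioned above.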

Comment by ryan_b on [deleted post] 2019-01-08T16:37:25.886Z

A few further thoughts/questions:

  • It seems like temporarily disrupting the equilibrium is probably just equivalent to reducing the confidence of actors' beliefs about other actors' incentives. If the hard incentives are permanently changed, that seems like a permanent shift in the equilibrium.
  • It would be really hard to correctly identify an exact new equilibrium. It feels like we would realistically need to identify the properties we want the new equilibrium to have, identify where those properties are in the space of possible equilibria, and then try to shift the current equilibrium in that direction.
  • How continuously can equilibria move? I have a vague intuition it is more continuous the more types of actors are participating in it, but is incremental progress actually possible or should we expect that if we don't move to the target it will 'snap back' to the same place it was before? Is there some kind of equilibria density in economics?
  • How well would we have to understand a prospective equilibrium to tell whether a business idea might be successful?
  • Following on that, it seems like the faster businesses appear to capitalize on a new equilibrium the more fixed it should be. This suggests to me that maybe a battery of start-ups would be a good approach even without the profit motive. I keep wanting to say mission hedging and activist VC firm.
Comment by ryan_b on On Abstract Systems · 2019-01-07T20:01:39.858Z · score: 10 (4 votes) · LW · GW
But only when communicated in the right way.

This has been motivating my thinking about these problems a lot lately. I am beginning to see the need for communication in the right way as a weakness of a given system, because it makes the system very easy to misapply.

Just like executability is one of the criteria for goodness in a plan, I feel like gathering the context to apply the system properly should be an explicit part of the system.

Comment by ryan_b on Two More Decision Theory Problems for Humans · 2019-01-06T02:37:55.098Z · score: 2 (1 votes) · LW · GW

That’s the one! Greatly appreciated.

Comment by ryan_b on Towards no-math, graphical instructions for prediction markets · 2019-01-04T22:25:25.865Z · score: 4 (3 votes) · LW · GW

That is brutally bad. What was the training information like? Was it even possible for a naive user to become less naive without losing their ass a bunch of times?

Comment by ryan_b on Towards no-math, graphical instructions for prediction markets · 2019-01-04T18:34:08.438Z · score: 3 (2 votes) · LW · GW

I'd be interested in hearing more about this; I'm unfamiliar with the SciCast interface.

I notice you put "user friendly" in quotes - do you think it was not in fact user friendly, or was it misleading in an important dimension?

For comparison, I don't think the Metaculus interface is simple enough to do the job. I agree it would be a fraught process, but my starting assumption is that developing an intuitive and correct UI is always a fraught process.

Comment by ryan_b on Two More Decision Theory Problems for Humans · 2019-01-04T16:49:16.779Z · score: 4 (2 votes) · LW · GW

I have lost the link, but I read a post from someone in the community about how grieving takes place over time because you have to grieve separately for each place or scenario that is important to your memory of the person.

This seems like the same mechanism would be required, just for reasoning.

Towards no-math, graphical instructions for prediction markets

2019-01-04T16:39:58.479Z · score: 29 (12 votes)
Comment by ryan_b on Strategy is the Deconfusion of Action · 2019-01-04T16:30:19.258Z · score: 5 (3 votes) · LW · GW

There is an idea I have used by implication in the OP, but might benefit from being identified specifically. This idea is that the level of abstraction where a concept is applied matters.

To illustrate what I mean, consider the end of the confused statements quote from the MIRI post:

Today, these conversations are different. In between, folks worked to make themselves and others less fundamentally confused about these topics—so that today, a 14-year-old who wants to skip to the end of all that incoherence can just pick up a copy of Nick Bostrom’s Superintelligence.

I think it would be reasonable for someone reading my post to look at that section of the MIRI post and then ask: so what is the Superintelligence of strategy? My answer is that there isn't one yet; this is what Sun Tzu and Clausewitz tried and failed to accomplish. I don't believe we have a good enough understanding of the component disciplines of strategy to write one, either (consider our mastery of computer science and information theory relative to our mastery of political science, economics and psychology). We are too confused.

I think the key insight of Meiser's approach is that he applies scientific reasoning as a generative rule for a strategy instance, rather than trying to describe a science of strategy in general and leaving the instance as an exercise for the reader. In other words, he took the scientific perspective and aimed it one layer of abstraction down. This allows us to account for confusion.

The level of abstraction is a big reason deconfusion is so awesome: it works no matter where you aim it, even aiming-at-aiming.

Comment by ryan_b on Strategy is the Deconfusion of Action · 2019-01-04T15:05:08.453Z · score: 3 (2 votes) · LW · GW

Done!

Comment by ryan_b on What's the best way for me to improve my English pronounciation? · 2019-01-03T17:45:59.043Z · score: 5 (3 votes) · LW · GW

I suggest a fast and cheap procedure:

1. Identify a person who you want to sound more like (an English-speaking actor, for example).

2. Listen to them say a phrase.

3. Record yourself while you mimic the way they say the phrase.

4. Check the recording and then return to 3 until you are satisfied.

5. Return to 2.

You don't even really need a recording; you should be able to tell just by listening to yourself speak. A recording will help you understand how other people hear you speak though, and is more precise. I expect you would see significant progress in only a few hours.

Comment by ryan_b on How did academia ensure papers were correct in the early 20th Century? · 2019-01-03T16:40:23.607Z · score: 10 (2 votes) · LW · GW

My expectation is that the fourth alternative, or some variation thereof, is the dominant answer. This is less a reflection of the quality of the papers, and more a reflection of the limited bandwidth of scientists for reading them.

This problem has been discussed in the modern context because of the explosion in the number of publications and the administrative responsibilities of scientists (for example, teaching and grant writing). But it has also been noticed that reading papers deeply is both time-consuming and cognitively intensive, and taking the trouble to write up a correction still more so. I argue there is still a fundamental bandwidth limit, and early 20th-century scientists still had to abide by it.

Following on the argument that reading papers deeply enough to correct errors and publish those corrections is difficult, I posit that the 'publish or perish' mechanism is responsible for corrections being published at all. I expect that even though there are errors, if the objective is to produce the best original work possible, it is more efficient to correct them privately and then go on using the corrected version yourself; it could even be argued that leaving the errors publicly uncorrected is advantageous for being first. I also expect that if the objective shifts to total number of publications, it becomes more efficient to publish corrections, because writing up a correction is less difficult than producing original work.

If my expectation is correct, then we should see very few corrections published leading up to World War II, and then an increasing number afterward as the professionalization of science progresses.

One good source for this kind of question would be histories of science and/or math. They do a pretty good job of disentangling what scientists thought and when, because they do the difficult work of going through notes, correspondence, and the published work. The downside is that it will usually be from the perspective of the subject (e.g. thermodynamics) instead of focusing on academia per se.

Comment by ryan_b on How do we identify bottlenecks to scientific and technological progress? · 2019-01-02T21:56:00.140Z · score: 9 (5 votes) · LW · GW

Copied to full answer!

I agree regarding neuroscience. I went to a presentation (by whom, I have suddenly forgotten, and I seem to have lost my notes) describing an advanced type of fMRI that allowed much more detailed inspection than previously possible, and the big discovery mostly consisted of "optimize the C++" and "rearrange the UI with practitioners in mind." I found it tremendously impressive - they were using it to help map epilepsy seizures in much more detail.

I am strongly tempted to say that 2 should be considered the highest priority in any kind of advanced engineering project, and I am further tempted to say it would sometimes be worth considering even before having project goals. There has been some new work in systems engineering recently that emphasizes the meta level and focuses on architecture-space before even setting the design constraints; I wonder if the same trick could be pulled with capabilities. Sort of systematizing the constraints at the same time as the design.

Comment by ryan_b on How do we identify bottlenecks to scientific and technological progress? · 2019-01-02T21:28:55.943Z · score: 7 (3 votes) · LW · GW

Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.

There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:

1. New intersections between two or more fields.

2. Everything at the systems level of analysis.

I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.

It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common even in well-understood areas for a systems level analysis to identify a critical gap. This doesn't seem any different to the usual process of problem solving, it's just each new iteration gets added to the bottleneck list.

Strategy is the Deconfusion of Action

2019-01-02T20:56:28.124Z · score: 73 (23 votes)
Comment by ryan_b on Strategy is the Deconfusion of Action · 2019-01-02T20:53:12.598Z · score: 8 (5 votes) · LW · GW

The military example of being confused about what to be confused about makes me think there is a confusion equivalent of the knowns. From the famous quip:

  • Known knowns
  • Known unknowns
  • Unknown knowns
  • Unknown unknowns

It feels like the transition from unknown to known probably looks like this:

  • Confused confusions (an unknown)
  • Confused deconfusions
  • Deconfused confusions (these seem like what we have been calling 'disentangled' in Agent Foundations)
  • Deconfused deconfusions (a known)

Comment by ryan_b on Learning-Intentions vs Doing-Intentions · 2019-01-02T16:02:35.236Z · score: 3 (2 votes) · LW · GW

This seems a lot like 'shut up and multiply' at the meta-level.

Also borrowing from start-up culture, there is a closely related concept to what you describe called de-risking. Importantly that is in terms of financial risk rather than utility risk, but if we are talking about hardware the two should track pretty closely; I would be very surprised if you found a reliable and scalable bridge design which somehow did not improve the returns on investment. The biggest difference I see between them is that utility-risk space is not under the same time pressures as finance-risk space.

Comment by ryan_b on How do we identify bottlenecks to scientific and technological progress? · 2019-01-02T15:37:11.014Z · score: 5 (3 votes) · LW · GW

Since these are all large subjects containing multiple domains of expertise, I am inclined to adopt the following rule: anything someone nominates as a bottleneck should be treated as a bottleneck until we have a convincing explanation for why it is not. I expect that once we have a good enough understanding of the relevant fields, convincing explanations should be able to resolve whole groups of prospective bottlenecks.

There are also places where I would expect bottlenecks to appear even if they have not been pointed out yet. These two leap to mind:

1. New intersections between two or more fields.

2. Everything at the systems level of analysis.

I feel like fast progress can be made on both types. While it is common for different fields to have different preferred approaches to a problem, it feels much rarer for there not to be any compatible approaches to a problem in both fields. The challenge would lie in identifying what those approaches are, which mostly just requires a sufficiently broad survey of each field. The systems level of analysis is always a bottleneck in engineering problems; the important thing is to avoid the scenario where it has been neglected.

It feels easy to imagine a scenario where the compatible approach from one of the fields is under-developed, so we would have to go back and develop the missing tools before we can really integrate the fields. It is also common even in well-understood areas for a systems level analysis to identify a critical gap. This doesn't seem any different to the usual process of problem solving, it's just each new iteration gets added to the bottleneck list.

Comment by ryan_b on Systems Engineering and the META Program · 2018-12-28T22:04:30.736Z · score: 3 (2 votes) · LW · GW

My default assumption is that the metrics themselves are useless for AI purposes, but I think the intuitions behind their development might be fruitful.

I also observe that the software component of this process is stuff like complicated avionics software, used to being tested under adversarial conditions. It seems likely to me that if a dangerous AI were to be built using modern techniques like machine learning, it would probably be assembled in a process broadly similar to this.

Systems Engineering and the META Program

2018-12-20T20:19:25.819Z · score: 31 (11 votes)
Comment by ryan_b on In Defense of Finance · 2018-12-18T20:59:50.507Z · score: 5 (3 votes) · LW · GW

Normally it is uselessly broad, but I wonder if the problem is information. Standard economic models habitually assume perfect, or at least similar, levels of information to simplify things. Some examples of how differences might appear:

My expectation is that as you go down the size scale, less specialized attention is available for managing debts: investment banks spend virtually all of their time and effort on problems of that kind; mid-size corporations that provide a consumer product or service dedicate a team to the problem; small businesses might keep one or two experts on tap; the middle class and working poor alike mostly either forgo expertise or temporarily engage it for the duration of an important transaction (like buying a house).

Is there any kind of reasoning process like "this institution has many other creditors so it is unlikely that I will get nothing in case of default?"

Comment by ryan_b on Argue Politics* With Your Best Friends · 2018-12-18T16:45:55.033Z · score: 2 (1 votes) · LW · GW

One of the things that consistently sticks out to me about human connections is how dependent they are on shared experience. What is more, it seems to me that negative experiences build bonds much more effectively than positive ones do. Consider the difference between these two scenarios:

1. You and a friend both like a band, and plan to go see them in concert. The day of comes, you go to the concert, have a good time, and come home.

2. You and a friend both like a band, and plan to go see them in concert. The day of comes, but on the way to the concert the car blows a tire. You spend a few hours on the side of the road, alternating between trying to get the tire changed and waiting for a tow truck.

If we assume that both of you handle the situation normally, then I expect that most people would feel closer to their friend after scenario 2. Without shared negative experiences, there are whole dimensions of a person you will never see. Further, seeing them acquit themselves well when the chips are down is how you know whether you can rely on them.

It seems to me that arguing with your friends does this in a controlled way for beliefs. When we establish disagreement, we have a situation where people normally either stop listening or show hostility. If they listen anyway, and don't show hostility even when they reasonably might, that's a good signal of the quality of the friend.

Comment by ryan_b on In Defense of Finance · 2018-12-18T15:58:18.526Z · score: 3 (2 votes) · LW · GW

Well, that joke sure didn't land! Downvotes accepted - I just had to get that risk-as-a-service gag off my chest. And the pun-name was extremely terrible, I grant.

Comment by ryan_b on In Defense of Finance · 2018-12-17T21:04:32.475Z · score: 3 (10 votes) · LW · GW

Finance, the discipline of distributing risk.

With my improved insight into finance, check out my new fintech company! We do Risk-as-a-Service (RaaS) on a blockchain platform which we outsource! It is called RaaS-ma-TaaS.

Comment by ryan_b on Is cognitive load a factor in community decline? · 2018-12-12T22:54:31.428Z · score: 6 (3 votes) · LW · GW

I agree.

The significant thing is that it was not symmetrical, i.e. they didn't replace one machine with two machines that each took half as much attention. Working 8 looms took 80% of the time, so it looks like the 1900 machines each only took 10%, compared to the earlier single machine which took 30%. This suggests to me that each new machine took ~0.33 of the attention the earlier ones did. So the improved machines led to workers 'only' working about three times as hard overall.
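A quick sanity check of that arithmetic, using the figures quoted from Bessen elsewhere in this thread (70-75% of time spent watching a single loom early on, 80% of time actively working across 8 looms by 1900):

```python
# Sanity check of the loom arithmetic: the early weaver watched one loom
# 70-75% of the time (so ~25-30% active work); by 1900, active work was 80%
# of the time, spread across 8 looms.

early_active_share = 1 - 0.70     # ~30% of the day actively working one loom
late_active_share = 0.80          # 80% of the day actively working in 1900
looms_1900 = 8

active_per_loom_1900 = late_active_share / looms_1900         # 0.10 per loom
attention_ratio = active_per_loom_1900 / early_active_share   # ~0.33 of the old attention
overall_workload = late_active_share / early_active_share     # ~2.7x, "about three times"

print(active_per_loom_1900, round(attention_ratio, 2), round(overall_workload, 2))
```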

Comment by ryan_b on Measly Meditation Measurements · 2018-12-11T16:54:40.564Z · score: 6 (3 votes) · LW · GW

I agree, but I suspect the causal relationship lines up the other way - we are very good at behaving a particular way in response to particular situations, regardless of our subjective experiences.

Comment by ryan_b on Is Science Slowing Down? · 2018-12-10T22:10:16.872Z · score: 4 (3 votes) · LW · GW

I can't speak for Raemon, but I point out that how low the fruit hangs is not a variable upon which we can act. We can act on the coordination question, regardless of anything else.

Comment by ryan_b on Measly Meditation Measurements · 2018-12-10T21:53:54.796Z · score: 7 (4 votes) · LW · GW

Strong upvote for reporting on measured self-experimentation.

It doesn't speak directly to your results, but: I was reading a comment elsewhere about the phenomenon of people having a meditative experience (like kensho), feeling very different subjectively, and then finding that when they describe it to their friends/families/colleagues, those people don't notice anything different.

I noticed that I would be shocked if a few months of doing something for an hour a day were to outweigh one or more decades of socialization, under the same stimuli as usual, enough that it would be casually obvious.

As a result, my estimation of how much meditation would be required to even make a good test got pushed much higher. Alternatively, and in my estimation more likely, casual observation is a very wrong thing to be looking at for evidence of the effectiveness of meditation.

Comment by ryan_b on Is cognitive load a factor in community decline? · 2018-12-10T20:51:47.945Z · score: 7 (4 votes) · LW · GW

The way they addressed this question was by comparing how much time was spent monitoring the looms versus actively performing tasks. The quote in the article is as follows:

Bessen shows that in the early 19th century, a New England weaver operating a single power loom spent 70-75% of the time watching the loom. By 1900, monitoring without active intervention was reduced to ~20% of the weaver’s time, and actively performing tasks took up 80% of the time. This is because the weaver in 1900 was made to operate 8 power looms.

I don't have access to the Bessen paper currently, though I'll probably go ahead and read it anyway.

Comment by ryan_b on Who's welcome to our LessWrong meetups? · 2018-12-10T20:02:27.295Z · score: 14 (8 votes) · LW · GW

Written with a view to meeting the following criteria:

  • Short, so as to be digestible at a glance
  • Get people who will increase the quality of the meetup
  • Broad enough to capture new and interesting people

"You: enjoy careful thinking and value good communication."

Comment by ryan_b on Prediction Markets Are About Being Right · 2018-12-10T19:48:27.785Z · score: 3 (2 votes) · LW · GW
They enable this sole reliance on truth, without imposing virtual taxes via long lock-up periods.

I am not sure why exactly, but this sentence prompted me to imagine prediction markets differently along a particular dimension. Mostly I imagined a prediction market would wind up organizing its expertise in a way that mirrors the stock market; there are experts in particular types of commodities, in particular industries, and in particular types of transaction, etc.

The "long lock-up period" got me wondering about how to predict longer term outcomes, and the obvious answer was to break up the outcome you are really concerned with into sub-outcomes, enabling faster payouts and communicating information in a more fine-grained way in the bargain. This suggests to me that long-term and high importance outcomes will each have a family of sub-outcomes and therefore each develop into their own areas of expertise.

This looks like it doesn't have an equivalent in current markets, which strikes me as interesting and possibly important.
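As a toy illustration of that sub-outcome idea, here is a sketch in Python. The sub-questions, the prices, and the assumption that they form a simple conditional chain are all hypothetical; the point is just that sub-markets can settle early while still implying a price for the long-term outcome.

```python
# Hypothetical sketch: decompose a long-horizon question into a chain of
# shorter sub-outcomes so traders are paid out at each stage. All of the
# sub-questions and prices below are made up for illustration.

sub_outcomes = [
    ("Design approved by 2020", 0.80),                       # ~ P(A)
    ("Construction starts by 2022, given approval", 0.60),   # ~ P(B | A)
    ("Project opens by 2026, given construction", 0.50),     # ~ P(C | B)
]

# If the sub-outcomes really do form a conditional chain, the implied
# probability of the final outcome is just the product of the prices.
implied_long_term = 1.0
for description, price in sub_outcomes:
    implied_long_term *= price

print(f"Implied probability of the long-term outcome: {implied_long_term:.2f}")  # 0.24
```

Each sub-market develops its own expertise and pays out years before the final question resolves, which is the "family of sub-outcomes" structure described above.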

Is cognitive load a factor in community decline?

2018-12-07T15:45:20.605Z · score: 20 (7 votes)
Comment by ryan_b on Is cognitive load a factor in community decline? · 2018-12-07T15:35:07.533Z · score: 12 (7 votes) · LW · GW

Some things that this idea doesn't explain:

1. If a shortage of cognitive resources is the problem, why do unemployed people do even less of the things Putnam measures than employed people do?

a. Does the fact of unemployment deplete cognitive resources to a similar or greater degree, perhaps because of loss of status?

b. Am I perhaps misled by the unemployed-people metric, and not comparing like with like? For example, if someone is not working because of disability, I expect that same disability to interfere with volunteer work.

2. No accounting for change in the environment. The omnipresence of advertising could have an effect; the extreme ease of communication could have an effect; both work and community are situated in the physical environment so if just being there demands more resources, that could partially explain it.

a. Although if the mechanism is real, we should expect both work and community to be adversely affected. I note productivity growth has been slowing down, but I have the impression that can be satisfactorily explained by productivity gains from computers achieving saturation and nothing else driving growth. I don't know of any case where we see previously stable productivity actually declining, which the idea predicts.

Comment by ryan_b on On Rationalist Solstice and Epistemic Caution · 2018-12-06T18:05:47.777Z · score: 9 (3 votes) · LW · GW

More precise, maybe. I don't think it is a better term.

Comment by ryan_b on Playing Politics · 2018-12-05T17:01:42.447Z · score: 3 (2 votes) · LW · GW
doesn't seem like an action that anybody who actually wants to meet should find offensive

You might be right, but whenever I have a thought like this it turns out badly for me.

Comment by ryan_b on Playing Politics · 2018-12-05T16:57:17.696Z · score: 8 (5 votes) · LW · GW

This is an excellent description of the phenomenon. I have found that a lot of these sorts of problems dissolve if I view my contribution to the group as reducing the information load.

I am tempted to declare that to be the whole of leadership.

Comment by ryan_b on Genetically Modified Humans Born (Allegedly) · 2018-12-05T16:13:46.230Z · score: 5 (3 votes) · LW · GW

Nothing you are saying comes as a surprise, but my confidence in the process remains reduced. The problem here is that once there is a procedure for establishing whether a patient is informed, all of the weight rests on the procedure and virtually none on the practitioner. This is the same for all fields of expertise.

I have read many of these kinds of forms as a patient. What we want them to be for is informing the patient; what they are actually for is defending against the accusation that the patient was not informed.

The audience was very much preoccupied with how the procedure for informing the patient was conducted, and seemed to consider this the biggest red flag. I find the fact that the lead scientist kept referring to a form to be the biggest red flag, because it suggests he didn't engage the ethical issues directly.

Suppose for a moment that he did a much better job informing the patients - proper training, third party verified composition, etc. I don't think this would have any implications at all for how He Jiankui engaged with the question of whether it was right to do this, but I do expect the audience to have been largely mollified. I see this as a problem.

Comment by ryan_b on Genetically Modified Humans Born (Allegedly) · 2018-11-30T17:14:21.172Z · score: 5 (3 votes) · LW · GW

I did not know what to expect, but I am not surprised.

I am now interested in how this plays out from an alignment perspective. It seems to me that the ethics of genetic editing have been taken pretty seriously by practitioners, and I'm tempted to make an analogy between the ethics here and safety in AI.

I really hope those kids are okay.

Comment by ryan_b on Genetically Modified Humans Born (Allegedly) · 2018-11-28T17:47:10.049Z · score: 4 (3 votes) · LW · GW

That's an interesting transcript. It managed to decrease my confidence in the ethical frameworks we have set up around medicine; most of the questions were about forms, training for forms, the number of institutions to which the forms were submitted, etc. Only a few of those questions went right to the heart of the ethical problems. Those questions were:

  • How do you see your obligation to these children?
  • Are you sure the parents understood what they were doing?
  • Would you do this to your own child?

There doesn't seem to have been any articulation of the risks during the session, or of plans for dealing with them.

Genetically Modified Humans Born (Allegedly)

2018-11-28T16:14:05.477Z · score: 30 (9 votes)
Comment by ryan_b on How democracy ends: a review and reevaluation · 2018-11-28T15:11:17.550Z · score: 2 (1 votes) · LW · GW

I have heard the same claim, but I don't find it credible. Even if it were, in order to make a credible attempt the Marine Corps would need the cooperation of the Navy, who don't have the same level of admiration.

Comment by ryan_b on Is Science Slowing Down? · 2018-11-27T20:13:54.008Z · score: 6 (4 votes) · LW · GW

Assuming the trendline cannot continue seems like the Gambler's Fallacy. Saying we can resume the efficiency of the 1930's research establishment seems like a kind of institution-level Fundamental Attribution Error.

I find the low-hanging-fruit explanation the most intuitive because I assume everything has a fundamental limit and gets harder as we approach that limit as a matter of natural law.

I'm tempted to go one step further and try to look at the value added by each additional discovery; I suspect economic intuitions would be helpful both in comparing like with like and with considering causal factors. I have a nagging suspicion that 'benefit per discovery' is largely the same concept as 'discoveries per researcher', but I am not able to articulate why.

Comment by ryan_b on How democracy ends: a review and reevaluation · 2018-11-27T16:49:28.242Z · score: 7 (4 votes) · LW · GW

The United States military is extremely unlikely to launch a coup. In the event any element of it tries, other elements can be relied on to fight them. There are a couple of reasons for this:

1) Our oaths are to the Constitution, which is to say we are formally loyal to the system, not to an office or its occupant. Nominally the Marine Corps has more specific loyalty to the office of the President, but even then sitting Presidents clearly trump aspiring ones.

2) Enlisted hold no special affection for senior military leadership. Partially this is because the organizations are huge and bureaucratic so there is no real contact, and partially this is because they aren't particularly competent. We're in a low ebb of military success, so even the famous recent generals you have heard of are famous because they failed-to-fail rather than because they did outstanding work. There are no generals popular enough to move a lot of soldiers to break the law or betray their oaths.

3) At least among the Army infantry, we talked about this kind of thing pretty frequently. I expect that if the military is to have a bad effect during a coup, it is much more likely because of excessive enthusiasm in putting one down.

Comment by ryan_b on Summary: Surreal Decisions · 2018-11-27T16:14:45.175Z · score: 9 (5 votes) · LW · GW

I am extremely pleased to see surreal numbers put to more practical use (for liberal interpretations of 'practical'). It's of no particular relevance to the paper, but when I read for the first time that every number has a game, but not all games have numbers, and thus game-space is larger than number-space, my head exploded.
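For anyone who wants the concrete witness behind that fact, the usual example from Conway's construction (not from the paper itself) is the game star:

```latex
% A surreal number \{X_L \mid X_R\} requires that no left option be \ge any
% right option. The game * = \{0 \mid 0\} has 0 \ge 0, so it is a game but
% not a number: numbers embed into games, but not conversely.
\[
  * \;=\; \{\, 0 \mid 0 \,\}, \qquad 0 \ge 0
  \;\Longrightarrow\; * \ \text{is a game but not a number.}
\]
```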

Real-time hiring with prediction markets

2018-11-09T22:10:18.576Z · score: 19 (5 votes)

Update the best textbooks on every subject list

2018-11-08T20:54:35.300Z · score: 78 (28 votes)

An Undergraduate Reading Of: Semantic information, autonomous agency and non-equilibrium statistical physics

2018-10-30T18:36:14.159Z · score: 30 (6 votes)

Why don’t we treat geniuses like professional athletes?

2018-10-11T15:37:33.688Z · score: 20 (16 votes)

Thinkerly: Grammarly for writing good thoughts

2018-10-11T14:57:04.571Z · score: 6 (6 votes)

Simple Metaphor About Compressed Sensing

2018-07-17T15:47:17.909Z · score: 8 (7 votes)

Book Review: Why Honor Matters

2018-06-25T20:53:48.671Z · score: 31 (13 votes)

Does anyone use advanced media projects?

2018-06-20T23:33:45.405Z · score: 45 (14 votes)

An Undergraduate Reading Of: Macroscopic Prediction by E.T. Jaynes

2018-04-19T17:30:39.893Z · score: 37 (8 votes)

Death in Groups II

2018-04-13T18:12:30.427Z · score: 32 (7 votes)

Death in Groups

2018-04-05T00:45:24.990Z · score: 47 (18 votes)

Ancient Social Patterns: Comitatus

2018-03-05T18:28:35.765Z · score: 20 (7 votes)

Book Review - Probability and Finance: It's Only a Game!

2018-01-23T18:52:23.602Z · score: 18 (9 votes)

Conversational Presentation of Why Automation is Different This Time

2018-01-17T22:11:32.083Z · score: 70 (29 votes)

Arbitrary Math Questions

2017-11-21T01:18:47.430Z · score: 8 (4 votes)

Set, Game, Match

2017-11-09T23:06:53.672Z · score: 5 (2 votes)

Reading Papers in Undergrad

2017-11-09T19:24:13.044Z · score: 42 (14 votes)