What is the pipeline?

post by whpearson · 2017-10-05T19:25:04.736Z · LW · GW · 8 comments

Let's say we solve the attention problem and all important posts get sufficient attention and feedback. We have a happy community of people well versed in important things. What is the next step? How does a small group of people who know important things move the world in a better direction?

Options:

  1. A small sub-group of people splits off and does something. I believe this has happened in the past, and it seems to have involved people who were co-located. Should we be doing more to build communities where we are?

  2. We try to explain the important thing to effective altruists, so that we/they can execute on it in that context. We would have to make sure it fits into the community norms of EAs.

  3. Come together as a community to work on a software project or book to get the idea out into the world.

Whichever option(s) we pick, we should be practicing them constantly, so that when it matters we can execute well. We are only good at things we do.

This would be a community post if I could figure out how to post one.

8 comments

Comments sorted by top scores.

comment by Raemon · 2017-10-05T23:31:43.360Z · LW(p) · GW(p)

> This would be a community post if I could figure out how to post one.

FYI, for the immediate future community posts should just be personal-blog posts (which does unfortunately mean less attention, although hopefully this post has already gotten some seed comments that might keep the engine of commentary going).

Replies from: Chris_Leong
comment by Chris_Leong · 2017-10-06T03:23:36.130Z · LW(p) · GW(p)

Why not put community posts in meta, then? That would let the people who want to see them see them, and those who don't, skip them.

comment by Chris_Leong · 2017-10-05T22:48:26.752Z · LW(p) · GW(p)

Regarding 2, there are large numbers of people in EA who are also involved in LW, and many more who have had at least some second-hand exposure. So I think that your concern is overblown.

Replies from: whpearson
comment by whpearson · 2017-10-06T07:15:40.297Z · LW(p) · GW(p)

I know there is a link. I think I was unsure of the solidity of that link; I shall update that section.

Do you think that everything we might want to do fits into the (current) EA methodology? I'm thinking of cases like needing to do some experimental work to answer a tricky question we think is important (but can't make a good ITN argument for).

For example, we may want to do more research on what a good life for a human looks like. The answer will have an impact on many different potential interventions, but is not really an intervention by itself.

comment by tristanm · 2017-10-05T23:24:30.172Z · LW(p) · GW(p)

Let’s suppose we solve the problem of building a truth-seeking community that knows and discovers lots of important things, especially the answers to deep philosophical questions. And more importantly, let's say the incentives of this group were correctly aligned with human values. It would be nice to have a permanent group of people that act as sort of a cognitive engine, dedicated to making sure that all of our efforts stayed on the right track and couldn’t be influenced by outside societal forces, public opinion, political pressure, etc. Like some sort of philosophical query machine that the people who are actually in power, or have influence and a public persona, would have to actually follow directives from – or at least, would face heavy costs if they began to do things against the wishes of this group.

This is sort of like the First versus Second Foundation. The First had all the manpower, finances, technology, and military strength, but the Second made sure everything happened according to the plan. And the First was destined to become corrupt and malign anyway, as this would happen with any large and unwieldy organization that gains too much power.

The problem of course is that the Second Foundation used manipulation, mind control, and outright treachery to influence events. So how would we structure incentives so that our larger and more influential organizations actually have to follow certain directives, especially ones that could possibly change rapidly over time?

Politically this can sometimes be accomplished through democracy, or the threat of revolt, but this never gets us very close to an ideal system. Economically, this can sometimes be accomplished by consumer choice, but when an organization forms a legally-sanctioned monopoly or sometimes becomes too far separated from the consumer, then there is no way to keep the organization aligned (see Equifax).

This is even a problem with Effective Altruist organizations, because even though philanthropists still have options, the main thing most philanthropic organizations seek is donations from people with very high net worth, so they will mainly be influenced by the wants of those individuals, to the extent that public opinion does not matter.

And to the extent that public opinion does matter, these organizations will have to ensure that they never propose any actions too far outside of the window of social acceptability, and when they do choose to take small steps outside of this window, they may have to partially conceal or limit the transparency of these actions.

And all this has tangible effects on which projects actually get completed, which things get funded, and so on. We absolutely do need lots of resources to accomplish good things in the world, and the people with the most control over those resources also tend to be the most visible and already tied to lots of different incentive structures that we have almost no ability to override.

I know that LW has managed to seed some people into these organizations so that they are at least exposed to these ideas and so on, and I know that this has had some pretty positive effects, but I am somewhat skeptical that this will be enough as EA orgs grow and become more mainstream than they are now. Every large organization moves towards greater bureaucracy and greater inertia as it grows, and if it becomes misaligned it is very difficult for it to change course. Correctly seeding them seems to be the best strategy, but beyond that it is an unsolved problem.

Replies from: whpearson, Chris_Leong
comment by whpearson · 2017-10-06T07:50:29.848Z · LW(p) · GW(p)

> Like some sort of philosophical query machine that the people who are actually in power, or have influence and a public persona, would have to actually follow directives from – or at least, would face heavy costs if they began to do things against the wishes of this group.

Anything that is reliably influential seems like it would be attacked by individuals seeking influence. Maybe it needs to be surprisingly influential, like the surprising influence of the concents in Anathem (for those who haven't read it, there is a group of monks who are regularly shut off from the outside world and have little influence, but occasionally emerge into the real world and are super effective at getting stuff done).

> I know that LW has managed to seed some people into these organizations so that they are at least exposed to these ideas and so on, and I know that this has had some pretty positive effects, but I am somewhat skeptical that this will be enough as EA orgs grow and become more mainstream than they are now. Every large organization moves towards greater bureaucracy and greater inertia as it grows, and if it becomes misaligned it is very difficult for it to change course. Correctly seeding them seems to be the best strategy, but beyond that it is an unsolved problem.

I think EA might be able to avoid stagnation if there is a healthy crop of new organisations springing up and the field is not just dominated by behemoths. So perhaps expect organisations to be single-shot things: create lots of them, and then rely on the community to differentially fund the organisations as we decide what is needed.

comment by Chris_Leong · 2017-10-06T03:32:19.578Z · LW(p) · GW(p)

Interesting comment, but the way you've written it makes it sound like there is some kind of conspiracy, which does not exist and which would fail anyway if it were attempted.

Replies from: tristanm
comment by tristanm · 2017-10-06T16:52:18.329Z · LW(p) · GW(p)

To be clear, I do not believe that trying to create such a conspiracy is feasible, and I wanted to emphasize that even if it were possible, you'd still need to have a bunch of other problems already solved (like making an ideal truth-seeking community). Sometimes it seems that rationalists want to have an organization that accomplishes the maximum utilitarian good, and hypothetically, this implies that some kind of conspiracy - if you wish to call it that - would need to exist. For a massively influential and secretive conspiracy, I might assign a less than 1% chance of one already existing (in which case it would be too powerful to overcome) and a greater than 99% chance of none existing (in which case it's probably impossible to succeed in creating one).

That said, to solve even just the highest-priority issues of interest to EAs, which probably won't require a massively influential and secretive conspiracy, I think you'd still need to solve the problem of aligning large organizations with these objectives, especially for things like AI, where development and deployment will mainly be carried out by the most enormous and wealthy firms. These are the kinds of organizations that can't be seeded with good intentions from the start. But it seems like you'd still want to have some influence over them in some way.