Subduing Moloch
post by Teja Prabhu (0xpr) · 2018-02-14T23:34:20.601Z · LW · GW · 15 comments
If rationality is systematized winning, then the diaspora of rationalists should be the most powerful group in the world. One way to increase individual and group power is to increase cooperation among rational agents.
As a starting point, the obvious way to increase cooperation is to increase the frequency of high-bandwidth communication among the agents. Imagine an app that each week randomly pairs you with another rationalist. You would converse with that person for an hour every day (for example, you could spend 30 minutes each discussing your respective days and how to optimize them). If you repeat the process for a year, you would get to know 52 people; in 10 years, you would know 520. If you connect particularly well with someone, you can continue talking with them elsewhere after the week is over.
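For concreteness, here is a minimal sketch of the weekly matching step such an app would need. Everything in it is hypothetical: members are represented as plain strings, and the function name is made up for illustration.

```typescript
// Hypothetical weekly matching step: shuffle the member list and pair
// adjacent entries, so everyone gets a fresh random partner each week.
function weeklyPairs(members: string[]): string[][] {
  const shuffled = [...members];
  // Fisher-Yates shuffle for an unbiased random ordering.
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const pairs: string[][] = [];
  for (let i = 0; i + 1 < shuffled.length; i += 2) {
    pairs.push([shuffled[i], shuffled[i + 1]]);
  }
  // With an odd member count, fold the leftover person into the last pair.
  if (shuffled.length % 2 === 1 && pairs.length > 0) {
    pairs[pairs.length - 1].push(shuffled[shuffled.length - 1]);
  }
  return pairs;
}
```

Re-shuffling each week means repeat pairings become rare once the group is reasonably large.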
The purpose of this is to develop a real community of people with the shared mission of optimizing the world. This is admittedly a pretty weird idea, but once there is a sufficiently high density of edges in the social graph, the group as a whole can act as a "super-agent" when the need arises.
While the benefits of forming such a community will initially be restricted to mundane things like job references and entertainment, the community could eventually be used to resolve multi-player prisoner's dilemmas of the kind described by SlateStarCodex:
Bostrom makes an offhanded reference of the possibility of a dictatorless dystopia, one that every single citizen including the leadership hates but which nevertheless endures unconquered. It’s easy enough to imagine such a state. Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.
So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures. From a god’s-eye-view, we can optimize the system to “everyone agrees to stop doing this at once”, but no one within the system is able to effect the transition without great risk to themselves.
An example might be everyone deciding that the current education system is completely broken (student debt in the US is over 1 trillion dollars), and having an organization, let's call it the Bayesian Church, fix it by supplying the coordination mechanism needed to change the system.
Much as the Catholic Church was all-powerful in the 16th century, the Bayesian Church could end up wielding enormous power in the far future. The thought experiment above, in which rationalists regularly talk with each other, could be the first step towards igniting the coordination mechanism that will subdue Moloch.
I feel rather excited about this idea; please let me know what you think in the comments.
15 comments
Comments sorted by top scores.
comment by Hazard · 2018-02-15T04:50:04.567Z · LW(p) · GW(p)
Your general attitude seems to be taking the problem of coordination too lightly. Eliezer's recent book has a lot of good thinking on what exactly makes bad equilibria so hard to escape from. Though I'd never discourage you from trying to solve a hard problem, it seems like you're saying "We can fix coordination problems by just coordinating!"
I actually do like the call-a-week idea. I foresee a lot of problems with "call a random rationalist each week for an hour", but they seem far more solvable than "fix coordination in general".
comment by PeterBorah · 2018-02-15T11:03:10.386Z · LW(p) · GW(p)
Upvoted for an interesting idea that feels promising. I'd be down to try this experiment, though an hour a day feels like a large time commitment (and regular commitments like that are harder for me to maintain, since my schedule varies wildly).
Proposed alternative:
Once per week you receive an email with a link to a scheduling tool (something like Doodle), where you input your availability for that week. You're matched with a random partner who has overlapping availability, and you both get an email with the date/time, and a link to a video conference room (perhaps using appear.in) where the call will happen.
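A sketch of what the matching step could look like, assuming availability is collected as discrete slot identifiers. The `Participant` shape, the greedy strategy, and all names here are illustrative assumptions, not a spec for any existing tool:

```typescript
interface Participant {
  id: string;
  slots: Set<string>; // e.g. "2018-02-19T18:00Z", as collected by the scheduling tool
}

// Greedily pair participants who share at least one open slot, returning
// [personA, personB, agreedSlot] triples. Anyone left unmatched simply
// rolls over to the next week.
function matchByAvailability(people: Participant[]): [string, string, string][] {
  const matches: [string, string, string][] = [];
  const queue = [...people];
  while (queue.length > 1) {
    const a = queue.shift()!;
    const i = queue.findIndex(b => [...a.slots].some(s => b.slots.has(s)));
    if (i === -1) continue; // no overlap for `a` this week
    const b = queue.splice(i, 1)[0];
    const slot = [...a.slots].find(s => b.slots.has(s))!;
    matches.push([a.id, b.id, slot]);
  }
  return matches;
}
```

Greedy matching won't always pair the maximum number of people (a fancier version could use maximum matching on the compatibility graph), but it keeps the sketch short.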
I'm uncertain how long the call should be. An hour is not super long if it's the only call you'll have with that person, but people might balk at a longer call. It could be configurable, but also choices are bad.
It would likely be good to have a fairly concrete set of suggestions for what to do on the call. Maybe something like a few minutes of introductions, followed by something like pair debugging? Or if the goal is more about networking, maybe prompts like "share what you've been thinking about lately" or "what are your most important goals?" would be good.
I could see value in staying in touch with your partner over email or something for a period of time after the call, but I'm not sure exactly what that should look like, and simplicity is good.
Replies from: 0xpr, StefanDeYoung
↑ comment by Teja Prabhu (0xpr) · 2018-02-15T17:23:41.381Z · LW(p) · GW(p)
Thanks for your input!
You are correct ― scheduling is a problem. Perhaps we can get around that by building something like Omegle, but with only rationalists in it. It shouldn't be too hard to hack together something with WebRTC to create some sort of chat room where you are automatically matched with strangers and can video chat with them.
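For what it's worth, the client half really is small. Here is a minimal sketch of the offer side in browser TypeScript; the signaling server (the part that would do the random matching, and relay the answer and the peer's ICE candidates back) is left abstract, and the `remote` element ID and the Google STUN server are illustrative assumptions:

```typescript
// Minimal sketch of the offer side of a WebRTC call. A real version
// also needs to handle the answer and remote ICE candidates coming
// back over the same signaling channel (omitted here).
async function startChat(signaling: WebSocket): Promise<void> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }], // illustrative STUN server
  });

  // Capture the local camera and microphone and send the tracks to the peer.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Render the stranger's stream when it arrives (assumes a <video id="remote"> element).
  pc.ontrack = event => {
    (document.getElementById("remote") as HTMLVideoElement).srcObject = event.streams[0];
  };

  // Relay our ICE candidates to the peer via the signaling server.
  pc.onicecandidate = event => {
    if (event.candidate) signaling.send(JSON.stringify({ candidate: event.candidate }));
  };

  // Create an offer and hand it to the server, which forwards it to a random stranger.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ offer }));
}
```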
Replies from: ChristianKl, StefanDeYoung
↑ comment by ChristianKl · 2018-02-16T18:41:40.359Z · LW(p) · GW(p)
We do have a Discord channel. Given that Discord has video chat, it would be a straightforward place to ask other rationalists for video chats.
There's also the LessWrong Study Hall, which already provides video chat between rationalists:
https://wiki.lesswrong.com/wiki/Study_Hall
https://complice.co/room/lesswrong/interstitial
↑ comment by StefanDeYoung · 2018-02-19T15:30:49.503Z · LW(p) · GW(p)
In her recent post about working remotely, Julia Evans mentions donut.ai as a Slack plugin that randomly pairs members of a Slack channel for discussions.
Replies from: 0xpr
↑ comment by Teja Prabhu (0xpr) · 2018-02-19T18:55:43.165Z · LW(p) · GW(p)
LessWrong also has an existing Slack channel; I don't know if it is active. I sent a private message to Elo on the old LessWrong to get an invite. It was created in 2015, when the only way to join was an email invite, but now it is possible to get an invite link.
If I get an invite, I'll try to convince Elo to install the donut.ai plugin and ask him to give out an invite link. I was about to create a new Slack channel, but I remembered this relevant xkcd.
↑ comment by StefanDeYoung · 2018-02-15T16:10:58.383Z · LW(p) · GW(p)
I agree that an hour a day is a large time commitment; I couldn't agree to spend an hour of my time on this project. I would prefer a smaller time increment by default. For example, calls could be multiples of 15 minutes, with participants able to schedule themselves for multiple increments if desired. I'm sensitive to your point that choices are bad, but people's schedules will vary so widely that being able to choose whether you want to talk for 1, 2, 3, or 4 intervals during any given week would allow this to reach a much wider group.
To your point that we should have a concrete set of suggestions for what to do on the call, agendas are essential.
comment by ChristianKl · 2018-02-16T13:09:22.621Z · LW(p) · GW(p)
Spending 7 hours per week is a significant time investment, especially for anyone who's actually working on important things.
Coordinating two busy people to have a one-hour call every day of the week is a very hard task.
comment by moridinamael · 2018-02-15T14:00:03.058Z · LW(p) · GW(p)
#1, what Hazard said. #2, this proposal reminds me of the ITER project and the Human Brain Project: enormously expensive undertakings which are widely believed to be almost entirely pointless, and which were/are organized, campaigned for, and run by extremely smart people. The point is that well-meaning, really smart people can still make grievous coordination errors.
The idea of a powerful cabal of rationalists only works if all the members genuinely qualify as big-R Rationalists: people who are basically already superhuman at the types of skills that would make communication and coordination possible. I'm genuinely not sure that any people of that type currently exist. I think the necessary and most difficult part of your approach would be actually finding and vetting such people.
Replies from: StefanDeYoung, ChristianKl, vanessa-kosoy
↑ comment by StefanDeYoung · 2018-02-15T15:48:59.274Z · LW(p) · GW(p)
I disagree that participants would already have to be superhuman, or even particularly strong rationalists. We can all get stronger together through mutual support even though none of us may already be "big-R Rationalists."
In his post about explore/exploit tradeoffs, Sebastian Marshall recalls how Carlos Micelli scheduled a Skype call every day to improve his network and his English. I haven't looked into how many of the people Micelli called were C-suite executives, research chairs, or other similar high-status individuals. My guess is that he could have had good results speaking with interesting and smart people on any topic.
For myself, I remember a meetup that I attended in November last year. I was feeling drained by a day job that is not necessarily aligned with my purpose. The event itself was a meeting to brainstorm changes to the education system in Canada, which is also not necessarily aligned with my purpose. However, the charge and energy I got simply from speaking to smart people about interesting things was, and I want to stress this, amazing. For weeks afterwards, the feeling that I got from attending that meeting was all that I wanted to talk about.
If I could get that feeling everyday...
↑ comment by ChristianKl · 2018-02-16T18:39:49.040Z · LW(p) · GW(p)
Both projects successfully got politicians to spend lots of money to employ people in a given field, but the money would likely have been better spent if it had been distributed differently.
The Space Launch System that NASA currently funds is another example of a large-scale, government-funded project that is highly suboptimal. In contrast, Elon Musk managed to create a far more effective organisation with SpaceX.
↑ comment by Vanessa Kosoy (vanessa-kosoy) · 2018-02-15T15:15:24.524Z · LW(p) · GW(p)
This is a tangent, but why is ITER almost entirely pointless? And why do you think this is "widely believed"?
Replies from: moridinamael
↑ comment by moridinamael · 2018-02-15T15:32:18.127Z · LW(p) · GW(p)
To answer your question: my understanding, gleaned from reading a handful of articles on the topic, is that many experts in fusion energy think all money put into the tokamak architecture is wasted, because that architecture has been surpassed by better designs; that very little transferable knowledge can actually be learned via ITER, since it doesn't answer fundamental research questions and the engineering questions it does answer weren't worth ten billion dollars; and that, practically speaking, there's no clear path from ITER to building a commercially viable fusion reactor. In other words, it does not actually serve a function as an engineering prototype.
"Widely believed" is a weasely enough term that I think the existence of multiple articles to this effect written by experts qualifies the stance as being "widely believed".
To take a meta stance on the question, the fact that experts in the field of fusion energy can have this disagreement and yet the project went forward proves my original thesis that really smart people can disagree about the fundamental viability of a project, yet that project can still move forward and suck up billions of dollars due to one faction being more politically/bureaucratically successful for whatever reason. This is a coordination failure.
comment by TAG · 2018-02-15T16:14:03.983Z · LW(p) · GW(p)
| If rationality is systematized winning, then the diaspora of rationalists should be the most powerful group in the world.
So are they? Can they even steer a software project to a successful conclusion?
| As a starting point, the obvious way to increase cooperation is to increase the frequency of high-bandwidth communication among the agents.
Co-operation requires trust and common aims as well.
| The purpose of this is to develop a real community of people with the shared mission of optimizing the world.
Optimise what about the world? There is no such thing as optimisation in general. We have politics because people disagree about what to optimise.
Things like "solve everything and make everything wonderful" are too vague to be achievable. See Arbital, again.
| An example might be everyone deciding that the current education system is completely broken (student debt in the US is over 1 trillion dollars), and having an organization, let's call it the Bayesian Church, fix it by supplying the coordination mechanism needed to change the system.
How? By political persuasion? By routing around it?
Replies from: ChristianKl
↑ comment by ChristianKl · 2018-02-16T18:25:45.983Z · LW(p) · GW(p)
You don't need to engage in political persuasion to found a school that works according to your desired criteria.
I consider the system I laid out in Prediction-based Medicine a way to replace a good chunk of traditional credentialing in the medical system with the ability to actually measure the impact of practitioners' actions. Prediction-based Medicine would need a startup focused on making it happen, not large-scale political action.