
Comments sorted by top scores.

comment by npostavs · 2024-01-28T03:41:19.927Z · LW(p) · GW(p)

A big part of understanding the culture of futility is understanding how traumatic it is when the bad guys win. When SBF, the Luke Skywalker of crypto, and CZ, the Darth Vader of crypto, go head to head and CZ emerges victorious. Then CZ says "Ha! serves you right for being an idiotic do-gooder" and everyone cheers.

Didn't we actually learn that they were both bad guys? I find this example confusing.

Replies from: MakoYass
comment by mako yass (MakoYass) · 2024-01-29T07:10:28.917Z · LW(p) · GW(p)

If we're about to get a trevorpost about how SBF was actually good and we only think otherwise due to narrative manipulation and toxic ingroup signalling dynamics, I'm here for it.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-01-30T12:14:07.367Z · LW(p) · GW(p)

Upvoted (I did not downvote). For some reason my most appreciated posts were the ones where I just transcribed a bunch of scribbled notes and ordered them at the last minute. I'm not sure why, but it's the kind of thing that happens with a genetically diverse intelligent species, so I just roll with it.

I defer to the EA people on SBF stuff (this was just a suboptimal illustrative example). I don't defer to EA adjacent people about modern persuasion technology [LW · GW], because they're clueless idiot disaster monkeys who unconditionally surrender to whatever infamously manipulative hypercomputer they see their friends spending 5 hours a day looking at. Which is the kind of thing that happens with a primate species that barely evolved enough general intelligence to build civilization and then stopped, so I just roll with it.

comment by Zack_M_Davis · 2024-01-28T18:10:05.511Z · LW(p) · GW(p)

Do you think you could explain your thesis in a way that would make sense to someone who had never heard of "the EA, rationalist, and AI safety communities"? ("Moloch"? "Dath ilan"? Am I supposed to know who these people are?) You allude to "knowledge of decision theory or economics", but it's not clear what the specific claim or proposal is here.

comment by Chris_Leong · 2024-01-28T00:00:36.552Z · LW(p) · GW(p)
  1. What do you see as the low-hanging co-ordination fruit?
  2. Raising the counter-culture movement as an example seems strange. I didn't really see them as focused on co-ordination.
Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-01-28T02:35:58.234Z · LW(p) · GW(p)
  1. This is no less of a division of labor than alignment research [LW · GW]. I like CFAR's work, AI/bot-augmented prediction markets, Twitter's community notes, and having a million people read HPMOR in spite of tearing down lots of civilizational Schelling fences. I do not stack with the people doing more crypto-focused stuff. I think that tuning cognitive strategies [LW · GW] and Raemon's experiments [LW · GW] have the lowest-hanging fruit.
  2. They were focused on making the world a better place, and a substantial subset (e.g. the hippies) were quite serious about it, but they were just lashing out with their eyes closed, lacking a drive to solve coordination problems or form accurate world models (the European Enlightenment had the drive, but not the will to drive out nihilism). This was 60 years ago, and popular revolutions weren't yet well-established as senseless lunacy like they are today; they didn't know that technical solutions were the way to go.
Replies from: Roko
comment by Roko · 2024-02-07T16:47:34.684Z · LW(p) · GW(p)

I do not stack with the people doing more crypto-focused stuff

why not?

comment by Viliam · 2024-01-28T14:33:32.546Z · LW(p) · GW(p)

Seems to me that many people do not want to coordinate on things -- this may be a cultural thing, with everyone exposed to memes like "today they want you to coordinate on singing a song together, but tomorrow they will try to make you join a mass suicide... better resist the coordination while you still can".

But even without this baggage... the problem is, why should people coordinate on the thing you want, rather than e.g. on the very opposite of it? Coordination itself is just a tool, not a goal. If people start coordinating better on e.g. violently spreading their religion, you probably won't be happy. So maybe this pushback against coordination is actually a public good -- most people are stupid, they would probably coordinate on stupid things, the fewer of those the better.

There are still some ways to coordinate people, for example you can pay them, but those are more difficult.

EDIT:

Uhm, this was too extreme. I actually believe that coordinating on small things is good (such as neighbors deciding to build a playground together for their kids) and is even kind of necessary for a healthy democracy. It is just the mass movements, especially mass movements of idiots coordinated online, that I am afraid of.

comment by mesaoptimizer · 2024-01-30T09:49:31.471Z · LW(p) · GW(p)

Miscellaneous thoughts:

  1. The way you use the word Moloch makes me feel like it is an attempt to invoke a vague miasma of dread. If your intention was to coherently point at a cluster of concepts or behaviors, I'd recommend you use less flavorful terms, such as "inadequate stable equilibria", "zero-sum behavior that spreads like cancer", "parasitism and predation". Of course, these three terms are also vague and I would recommend using examples to communicate exactly what you are pointing at, but they are still less vague than Moloch. At a higher level I recommend looking at some of Duncan Sabien's posts for how to communicate abstract concepts from a sociological perspective.
  2. I've been investigating "Tuning Your Cognitive Strategies" off and on since November 2023, and I agree that it is interesting enough to be worth greater investment in research effort (including my own), but I believe there are other skills of rationality that may be significantly more useful for people trying to save the world. Kaj Sotala's Multiagent sequence [? · GW] is, in my opinion, the rationality research direction with the highest potential impact in enabling people in our community to do the things they want to do.
  3. The "Why our kind cannot cooperate" sequence, as far as I remember, is focused on what seem to be irrationality-based failures of cooperation in our community: stuff like mistaking contrarianism for being smart and high status, et cetera. I disagree with your attempt to use it as grounds for claiming that the "bad guys" are predisposed to "victory".

If I were focused on furthering co-ordination, I'd take a step back, actually try to further co-ordination, and see what issues I face. I'd try to build a small research team focused on a research project, see what irrational behavior and incentives I notice, and try to figure out systemic fixes. I'd try to create simple game-theoretic models of interactions between people working towards making something happen and see what issues arise.

I think CFAR was recently funding projects focused on furthering group rationality. You should contact CFAR and talk to some of the people thinking about this.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-01-30T11:56:49.500Z · LW(p) · GW(p)

Strong upvoted. I read this lightly as I am currently pulling an all-nighter, and will read it more deeply and give a proper response in 24-36 hours.

comment by cousin_it · 2024-01-29T11:51:06.877Z · LW(p) · GW(p)

Some of the first people to try to get together and have a really big movement to enlighten and reform the world was the Counter Culture movement starting in the 60′s

The first? Like, in the history of the world?

comment by Adele Lopez (adele-lopez-1) · 2024-01-28T02:17:39.234Z · LW(p) · GW(p)

Potential piece of a coordination takeoff:

An easy to use app which allows people to negotiate contracts in a transparently fair way, by using an LDT solution to the Ultimatum Game (probably the proposed solution in that link is good-enough, despite being unlikely to be fully-optimal).
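For concreteness, the probabilistic-acceptance rule commonly discussed as the LDT-flavored answer to the Ultimatum Game can be sketched in a few lines (a toy sketch with assumed numbers: a pie of 10 and a 50/50 split taken as fair; the function names are hypothetical, not the proposed app's API):

```python
import random

# Toy sketch of a probabilistic-acceptance rule for the Ultimatum Game:
# if the proposer offers less than the fair share, accept with just enough
# probability that lowballing gains the proposer nothing in expectation.

def accept_probability(offer: float, pie: float = 10.0, fair: float = 5.0) -> float:
    """Probability with which the responder accepts `offer`."""
    if offer >= fair:
        return 1.0
    proposer_keeps = pie - offer
    # Pick p so that p * proposer_keeps equals what a fair offer would yield.
    return (pie - fair) / proposer_keeps

def respond(offer: float) -> bool:
    """Randomized accept/reject decision."""
    return random.random() < accept_probability(offer)
```

Under this rule the proposer's expected take is flat at the fair amount for any lowball offer, so there is nothing to gain by offering less than fair; that incentive-flattening property is what would make such a default "transparently fair."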

Part of the problem here is not just the implementation, but making it credible to people who don't/can't understand the math. I tried to solve a similar problem with my website bayescalc.io, where a large part of the goal was not just to make use of Bayes' theorem accessible, but to make it credible by visually showing what it's doing, as much as possible, in an easy-to-understand way (not sure how well I succeeded, unfortunately).
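The underlying computation such a calculator exposes is just Bayes' theorem; a minimal sketch (the illustrative numbers and function name here are mine, not anything from bayescalc.io):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) from a prior and two likelihoods, via Bayes' theorem."""
    p_evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_evidence

# A 1% base rate with a 90%-sensitive test and a 9% false-positive rate:
posterior = bayes_update(0.01, 0.90, 0.09)  # roughly 0.09, not 0.90
```

Showing each intermediate quantity (the prior mass, the two likelihood terms, the normalization) rather than just the final number is the part that builds credibility for people who won't check the algebra.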

Another important factor is ease of use and frictionless design. I believe Manifold Markets has succeeded because this turns out to be more important than even having proper financial incentives.

Replies from: TrevorWiesinger
comment by trevor (TrevorWiesinger) · 2024-01-28T02:52:43.414Z · LW(p) · GW(p)

An easy to use app which allows people to negotiate contracts in a transparently fair way, by using an LDT solution to the Ultimatum Game (probably the proposed solution in that link is good-enough, despite being unlikely to be fully-optimal)

Writing up the contracts (especially around all the caveats that they might not have noticed) seems like it would be harder than just reading contracts (I'm an exception, I write faster than I read). Have you thought of integrating GPT/Claude as assistants? I don't know about current tech, but like many other technologies, that integration will scale well in the contingency scenario where publicly available LLMs keep advancing.

Part of the problem here is not just the implementation, but making it credible to people who don't/can't understand the math. I tried to solve a similar problem with my website bayescalc.io, where a large part of the goal was not just to make use of Bayes' theorem accessible, but to make it credible by visually showing what it's doing, as much as possible, in an easy-to-understand way (not sure how well I succeeded, unfortunately).

I think this can be done with a website, but not the current one. Have you tried reading Yudkowsky's projectlawful [LW · GW]? The main character's math lessons gave me the impression of something that actually succeeds at demonstrating, to business school types (maybe not politicians), why math and Bayesianism are things that work for them.

Another important factor is ease of use and frictionless design. I believe Manifold Markets has succeeded because this turns out to be more important than even having proper financial incentives.

This is a really interesting point: it's not just about making each button intuitive, it's about making the whole enchilada intuitive for a wide variety of neurotypes. Now that I think about it, Manifold really was a feat of engineering here, although I don't know how well it would work for people who, unlike me, don't know what getting ahead of markets is like. But generally, it's just a lot of optimization power, and it's probably far more time-effective to reach out to them and ask how they did it (e.g. what books they read) than to try to find ease-of-use resources (e.g. books) with a Google search.

Replies from: adele-lopez-1
comment by Adele Lopez (adele-lopez-1) · 2024-01-28T22:32:50.059Z · LW(p) · GW(p)

Writing up the contracts (especially around all the caveats that they might not have noticed) seems like it would be harder than just reading contracts (I'm an exception, I write faster than I read). Have you thought of integrating GPT/Claude as assistants? I don't know about current tech, but like many other technologies, that integration will scale well in the contingency scenario where publicly available LLMs keep advancing.

I'd consider the success of Manifold Markets over Metaculus to be mild evidence against this.

And to be clear, I do not currently intend to build the idea I'm suggesting here myself (could potentially be persuaded, but I'd be much happier to see someone else with better design and marketing skills make it).

I think this can be done with a website, but not the current one. Have you tried reading Yudkowsky's projectlawful? The main character's math lessons gave me the impression of something that actually succeeds at demonstrating, to business school types (maybe not politicians), why math and Bayesianism are things that work for them.

Heh, that scene was the direct inspiration for my website. I'm curious what specific things you think can be done better.

comment by Perhaps · 2024-01-27T23:00:28.966Z · LW(p) · GW(p)

I feel like it's not very clear here what type of coordination is needed.

How strong does coordination need to become before we can start reaching take off levels? And how material does that coordination need to be?

Strong coordination, as I'm defining it here, is about how powerfully the coordination constrains certain actions.

Material coordination, as I'm defining it here, is about the level on which the coordination "software" is running. Is it running on your self (i.e. it's some kind of information that's been coded into the algorithm that runs on your brain, examples being the trained beliefs in nihilism you refer to, or decision theories)? Is it running on your brain (i.e. Neuralink, some kind of BCI)? Is it running on your body, or on your official/digital identity? Is it running on a decentralized crypto protocol, or as contracts witnessed by a governing body?

The difficult part of coordination is taking action; deciding what to do is mostly solved through prediction markets, research, and good voting theory.

comment by Sinclair Chen (sinclair-chen) · 2024-01-28T18:05:14.959Z · LW(p) · GW(p)

Decision theory didn't take off because it's "law thinking", but better decision-making in practice needs "rule thinking". And the early mathematical formalisms actually weren't very complete or meaningful?

There were and are market-economics-knowing people who tried very hard to get the world to a better place. They're called developmental economists. Turns out that stuff is actually pretty hard, but people are making progress.

comment by Shankar Sivarajan (shankar-sivarajan) · 2024-01-28T02:44:37.259Z · LW(p) · GW(p)

People strongly prefer the good guys in charge,

Most people in fact just want their bad guys in charge instead, so they can do unto others.

comment by Seth Herd · 2024-01-31T20:04:30.757Z · LW(p) · GW(p)

Your central point, that relatively little work has gone into academic study of coordination, seems really important.

I hope that reading Dath Ilan isn't necessary, because that's a hell of an entry cost. Shouldn't there be an easier way to describe the possibilities and payoffs of better coordination? Surely there's some existing work out there.

As I see it, the arc of history bends toward better coordination. But it does so sporadically and slowly on average.

I'd have little fear for the future if it wasn't for AGI x-risk. That's a hard coordination problem, and the main one I worry about.

comment by Mitchell_Porter · 2024-01-28T12:19:48.279Z · LW(p) · GW(p)

I have not properly read that "Moloch" essay, but I think I get the message. The world ruled by Moloch is one in which negative-sum games prevail, causing essential human values to be neglected or sacrificed. Nonetheless, one does not get to rule without at least espousing the values of one's civilization or one's generation. The public abandonment of human values therefore has to be justified in terms of necessary evils - most commonly, because there are amoral enemies, within and without. 

The other form of abandonment of value that corrupts the world mostly boils down to the Machiavellian pursuit of self-interest: the self-interest of an individual, a clique, a class. To explain this, you don't even need to suppose that society is trapped in a malign negative-sum equilibrium. You just need to remember that the pursuit of self-interest is actually a natural thing, because subjective goods are experienced by individuals. Humans do also have a natural attraction to certain intersubjective goods, but "omnisubjective" goods like universal love, or perpetual peace among all nations, are radical utopian ideas that aren't even conceivable without prior cultural groundwork. But that groundwork has already existed for thousands of years:

It's important to remember that the culture we grew up in is deeply nihilistic at its core...

The pursuit of a better world is as old as history. Think of the "Axial Age" in which several world religions - which include universal moralities - came into being. Every civilization has a notion of good. Every modern political philosophy involves some kind of ideal. Every significant movement and institution had people in it thinking of how to do good or minimize harm. Even cynical egoistical cliques that wield power, must generally claim to be doing so, for the sake of something greater than themselves. 

I'm pretty sure that the entire 20th century came and went with nearly none of them spending an hour a week thinking about solving the coordination problems facing the human race, so that the world could be better for them and their children.

You appear to be talking about game theorists and economists, saying they were captured by military and financial elites respectively, and led to use their knowledge solely in the interest of those elites? This seems to me profoundly wrong. After World War 2, the whole world was seeking peace, justice, freedom, prosperity. The economists and game theorists, of the West at least, were proposing pathways to those outcomes, within the framework of western ideology, and in the context of decolonization and the cold war. The main rival to the West was Communism, which of course had its own concept of how to make a better world; and then you had all the nonaligned postcolonial nationalisms, for whom having the sovereign freedom to decide their own destinies was something new, that they pursued in a spirit of pragmatic solidarity. 

What I'm objecting to is the idea that ideals have counted for nothing in the governance of the world, except to camouflage the self-interest of ruling cliques. Metaphorically, I don't believe that the world is ruled by a single evil god, Moloch. While there is no shortage of cold or depraved individuals in the circles of power, the fact is that power usually requires a social base of some kind, and sometimes it is achieved by standing for what that base thinks is right. Also, one can lose power by being too evil... Moloch has to share power with other "gods", some of them actually mean well, and their relative share of power waxes and wanes. 

I think a far more profound critique of "Moloch theory" could be written, emphasizing its incompleteness and lopsidedness when it's treated as a theory of everything. 

As for new powers of coordination, I would just say that completely shutting Moloch out of the boardroom and the war room is not a panacea. It is possible to coordinate on a mistaken goal. And hypercoordination itself could even become Moloch 2.0.

comment by lemonhope (lcmgcd) · 2024-04-27T04:49:29.151Z · LW(p) · GW(p)

This is inspiring

comment by Roko · 2024-02-07T18:02:24.494Z · LW(p) · GW(p)

I think the most likely source for a coordination singularity is crypto, not prediction markets.

Prediction markets will not get you out of bad Nash equilibria.

comment by Olli Järviniemi (jarviniemi) · 2024-01-29T06:46:22.256Z · LW(p) · GW(p)

Interesting perspective!

I would be interested in hearing answers to "what can we do about this?". Sinclair has a couple of concrete ideas - surely there are more. 

Let me also suggest that improving coordination benefits from coordination. Perhaps there is little a single person can do, but is there something a group of half a dozen people could do? Or two dozens? "Create a great prediction market platform" falls into this category, what else?

comment by Sinclair Chen (sinclair-chen) · 2024-01-28T18:22:35.886Z · LW(p) · GW(p)

Concrete steps towards removing language barriers:
- promote the idea that letting languages die is good, actually
- improve translation speed, offline-capability, and UI
- create great products that take advantage of auto-translating non-english internets, social media, or traditional media
- accelerate capabilities of LLMs

Concrete steps towards free banking:
- Fintech startup that issues VISA cards backed by your liquid investment portfolio, that autosells to pay for things
- Write code for crypto projects

More pie in the sky:
- Design new social media that is fun and meaningful rather than divisive or draining
- Create the one true religion
- Stop tipping

comment by Bendini (bendini) · 2024-01-27T22:06:26.820Z · LW(p) · GW(p)

Plausible theory.

In the scenario where a breakthrough leads to a coordination takeoff, what implications do you think that would have for alignment/AI safety research?