On plans for a functional society
post by kave, Vaniver · 2023-12-12T00:07:46.629Z · LW · GW · 8 comments
Comments sorted by top scores.
comment by stavros · 2023-12-20T09:58:39.250Z · LW(p) · GW(p)
The polycrisis has been my primary source of novelty/intellectual stimulation for a good long while now. Excited to see people explicitly talking about it here.
With regard to the central proposition:
I think if there were A Plan to make the world visibly less broken, made out of many components which are themselves made out of components that people could join and take responsibility for, this would increase the amount of world-fixing work being done and would meaningfully decrease the brokenness of the world. Further, I think there's a lot of Common Cause of Many Causes stuff going on here, where people active in this project are likely to passively or actively support other parts of this project / there could be an active consulting / experience transfer / etc. scene built around it.
I think this is largely sensible and true, but consider top-down implementation of such to be a pipe dream.
Instead there is a kind of grassroots version where you do some combination of:
1.) Clearly state the problems that need to be worked on, and provide reasonable guidance as to where and how they might be worked on
2.) Notice what work is already being done on the problems, and who is doing it (avoid reinventing the wheel/not invented here syndrome; EA is especially guilty of this)
3.) Actively develop useful connections between the people and projects identified in 2.)
4.) Measure engagement (resource flows) and progress
And from that process I expect something like a plan to emerge - it won't be the best possible plan, but it will be far from the worst plan, more adequate than not, and importantly it will survive contact with reality because reality was a key driver in the development of the plan.
The platform for generating the plan would need to be more-open-than-not, and should be fairly bleeding edge - incorporating prediction markets, consensus seeking (Polis), eigenkarma, etc.
It should be a design goal that high-value contributions get noticed, no matter the source. An example of this actually happening: Taiwan was able to respond rapidly to Covid because a moderator noticed and did due diligence on a Covid-related post in the g0v forums, and a process was in place for escalating that information to government.
It should also be subject to a serious amount of adversarial testing - such a platform, if successful, will influence $ flows, and thus will be a target for capture/gaming etc etc.
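To make "eigenkarma" concrete: the idea is to score contributors by the principal eigenvector of the endorsement graph, PageRank-style, so an endorsement from a high-karma contributor counts for more than a drive-by upvote. A minimal sketch (the graph, damping value, and function name are made up for illustration, not any particular platform's implementation):

```python
import numpy as np

# Hypothetical endorsement matrix: E[i, j] > 0 means user i endorses user j.
E = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def eigenkarma(E, damping=0.85, iters=100):
    """Score users by the principal eigenvector of the endorsement graph:
    karma flows along endorsement edges, so endorsements from high-karma
    users are worth more. Damping keeps the iteration well-behaved."""
    n = E.shape[0]
    row_sums = E.sum(axis=1, keepdims=True)
    # Row-normalize; a user with no endorsements spreads karma uniformly.
    P = np.where(row_sums > 0, E / np.where(row_sums > 0, row_sums, 1), 1.0 / n)
    karma = np.full(n, 1.0 / n)
    for _ in range(iters):
        karma = (1 - damping) / n + damping * karma @ P
    return karma

scores = eigenkarma(E)  # scores sum to 1; user 2, the most endorsed, ranks highest
```

This is also exactly the kind of mechanism that needs the adversarial testing mentioned above: collusion rings of mutual endorsement are the obvious gaming strategy.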
As it stands, we're lacking all four. We're lacking a coherent map of the polycrisis[1], we're lacking useful and discoverable communication channels, and we're lacking meaningful third-party measurement.
As it stands, the barriers to entry for those wishing to engage in meaningful work in this space are absurd.
If you lack the credentials and/or the wealth to self-fund, you're effectively excluded. A problem created by an increasingly specialized world (and the worldview, cultural dynamics, and behaviours it engenders) has gatekeepers from that same world, enforcing the same bottlenecks and selective pressures on those who would try to solve it.
The neighbourhood is on fire, and the only people allowed to join the bucket chain are those most likely to be ignoring the fire - so very catch-22.
P.S.
I think there's a ton of funding available in this space, specifically I think speculating on the markets informed by the kind of worldview that allows one to perceive the polycrisis has significant alpha. I think we can make much better predictions about the next 5-10 years than the market, and I don't think most of the market is even trying to make good predictions on those timescales.
I'd be interested in talking/collaborating with anyone who either strongly agrees or disagrees with this logic.
[1] On this note: if anyone wants to do and/or fund a version of aisafety.world for the polycrisis, I'm interested in contributing.
↑ comment by Roman Leventov · 2023-12-25T13:08:30.670Z · LW(p) · GW(p)
we're lacking all 4. We're lacking a coherent map of the polycrisis (if anyone wants to do and/or fund a version of aisafety.world for the polycrisis, I'm interested in contributing)
Joshua Williams created an initial version of a metacrisis map, and a couple of days ago I suggested to him that he make the development of such a resource more open, e.g. by turning it into a GitHub repository.
I think there's a ton of funding available in this space, specifically I think speculating on the markets informed by the kind of worldview that allows one to perceive the polycrisis has significant alpha. I think we can make much better predictions about the next 5-10 years than the market, and I don't think most of the market is even trying to make good predictions on those timescales.
Do you mean that it's possible to earn by betting long against the current market sentiment? I think this is wrong for multiple reasons, but most importantly because the market specifically doesn't measure how well we are faring on a lot of components of the polycrisis -- e.g., the market would look great even if everyone were turned into addicted zombies. Secondly, people don't even try to make predictions in the stock market anymore -- it's turned into a completely irrational valve of liquidity, moved by Elon Musk's tweets, narratives, and memes more than by objective factors.
↑ comment by stavros · 2023-12-26T06:43:49.920Z · LW(p) · GW(p)
Joshua Williams created an initial version of a metacrisis map
It's a good presentation, but it isn't a map.
A literal map of the polycrisis[1] can show:
- The various key facets (pollution, climate, biorisk, energy, ecology, resource constraints, globalization, economy, demography etc etc)
- Relative degrees of fragility / timelines (e.g. climate change being one of the areas where we have the most slack)
- Many of the significant orgs/projects working on these facets, with special emphasis placed on those that are aware of the wider polycrisis
- Many of the significant communities
- Many of the significant funders
Do you mean that it's possible to earn by betting long against the current market sentiment?
In a nutshell [LW · GW]
[1] I mildly prefer "polycrisis" because it's less abstract: "metacrisis" points toward a systems dynamic for which we have no adequate levers, whereas "polycrisis" points toward the effects in the real world that we need to deal with.
I am assuming we live in a world that is going to be reshaped (or ended) by technology (probably AGI) within a few decades, and that if this fails to occur the inevitable result of the metacrisis is collapse.
I think the most impact I can have is to kick the can down the road far enough that the accelerationistas get their shot. I don't pretend this is the world I would choose to be living in, or the horse I'd want to be betting on. It is simply my current understanding of reality. Hence: polycrisis. Deal with the symptoms. Keep the patient alive.
↑ comment by Roman Leventov · 2023-12-25T12:52:48.086Z · LW(p) · GW(p)
1.) Clearly state the problems that need to be worked on, and provide reasonable guidance as to where and how they might be worked on
2.) Notice what work is already being done on the problems, and who is doing it (avoid reinventing the wheel/not invented here syndrome; EA is especially guilty of this)
3.) Actively develop useful connections between the people and projects identified in 2.)
4.) Measure engagement (resource flows) and progress
I posted some parts of my current visions of 1) and 2) here [LW(p) · GW(p)] and here [LW(p) · GW(p)]. I think these, along with the Gaia Network [LW · GW] design that we proposed recently (the Gaia Network is not "A Plan" in its entirety, but a significant portion of it), address @Vaniver [LW · GW]'s and @kave [LW · GW]'s points about realism and sociological/psychological viability.
The platform for generating the plan would need to be more-open-than-not, and should be fairly bleeding edge - incorporating prediction markets, consensus seeking (Polis), eigenkarma, etc.
I think it is a mistake to import "democracy" at the vision level. A vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Deutsch also wrote about this in "The Beginning of Infinity", in the chapter about democracy.
We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference [LW · GW]"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture [LW · GW].
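As a toy illustration of aggregating preferences rather than decisions (the numbers and function name below are mine, not anything from the linked posts): if each agent reports a noisy estimate of an outcome's value together with a precision (inverse variance), then under a simple Gaussian noise model the posterior mean is the precision-weighted average, so confident reports count for more without anyone voting on the plan itself.

```python
def aggregate(reports):
    """Combine (estimate, precision) pairs into a posterior estimate.
    With Gaussian noise, the posterior precision is the sum of the report
    precisions, and the posterior mean is the precision-weighted average."""
    total_precision = sum(p for _, p in reports)
    posterior_mean = sum(v * p for v, p in reports) / total_precision
    return posterior_mean, total_precision

# Three agents value the same outcome; the third is four times as confident.
mean, prec = aggregate([(0.2, 1.0), (0.5, 1.0), (0.9, 4.0)])
# mean ≈ 0.717, prec = 6.0: pulled toward the most confident report
```

The plan or design that best satisfies the aggregated preferences is then produced separately, by the "coherent creative entity", rather than by averaging candidate plans.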
we're lacking meaningful 3rd party measurement
If I understand correctly what you are gesturing at here, I think that some high-level agents in the Gaia Network [LW · GW] should become a trusted gauge for the "planetary health metrics" we care about.
↑ comment by stavros · 2023-12-26T07:02:41.288Z · LW(p) · GW(p)
I think it is a mistake to import "democracy" at the vision level. A vision is essentially a very high-level plan, a creative engineering task. These are not decided by averaging opinions. "If you want to kill any idea in the world, get a committee working on it." Deutsch also wrote about this in "The Beginning of Infinity", in the chapter about democracy.
We should aggregate desiderata and preferences (see "Preference Aggregation as Bayesian Inference [LW · GW]"), but not decisions (plans, engineering designs, visions). These should be created by a coherent creative entity. The same idea is evident in the design of Open Agency Architecture [LW · GW].
Democracy is a mistake, for all of the obvious reasons.
As is the belief amongst engineers that every problem is an engineering problem :P
We have a whole bunch of tools going mostly unused and unnoticed that could, plausibly, enable a great deal more trust and collaboration than is currently possible.
We have a whole bunch of people both thinking about and working on the polycrisis already.
My proposal is that we're far more likely to achieve our ultimate goal - a future we'd like to live in - if we simply do our best to empower, rather than direct, others.
I expect attempts to direct, no matter how brilliant the plan or the mind(s) behind it, are likely to fail. For all the obvious reasons.
(caveat: yes AGI changes this, but it changes everything. My whole point is that we need to keep the ship from sinking long enough for AGI to take the wheel)
comment by cousin_it · 2023-12-12T14:26:43.592Z · LW(p) · GW(p)
I now think that the ultimate "rising tide that lifts all boats" is availability of jobs. The labor market should be a seller's market. Everything else, including housing / education / healthcare, follows from that. (Sorry Georgists, it's not land but labor which is key.) But the elite is a net buyer of labor, so it prefers keeping labor cheap. When Marx called unemployed people a "reserve army of labor", whose plight scares everyone else into working for cheap, he was right. And from my own experience, having lived in a time and place where you could find a job in a day, I'm convinced that it's the right way for a society to be. It creates a general sense of well-being and rightness, in a way that welfare programs can't.
So the problem is twofold: 1) which policies would shift the labor market balance very strongly toward job seekers, 2) why the elite would implement such policies. If you have a democracy, you at least nominally have a solution to (2). But first you need to figure out (1).
comment by Max H (Maxc) · 2023-12-12T04:41:27.389Z · LW(p) · GW(p)
ok, so not attempting to be comprehensive:
- Energy abundance...
I came up with a similar kind of list here [LW · GW]!
I appreciate both perspectives here, but I lean more towards kave's view: I'm not sure how much overall success hinges on whether there's an explicit Plan or overarching superstructure to coordinate around.
I think it's plausible that if a few dedicated people / small groups manage to pull off some big enough wins in unrelated areas (e.g. geothermal permitting or prediction market adoption), those successes could snowball in lots of different directions pretty quickly, without much meta-level direction.
I have a sense that lots of people are not optimistic about the future or about their efforts improving the future, and so don't give it a serious try.
I share this sense, but the good news is the incentives are mostly aligned here, I think? Whatever chances you assign to the future having any value whatsoever, things are usually nicer for you personally (and everyone around you) if you put some effort into trying to do something along the way.
Like, you shouldn't work yourself ragged, but my guess is for most people, working on something meaningful (or at least difficult) is actually more fun and rewarding compared to the alternative of doing nothing or hedonism or whatever, even if you ultimately fail. (And on the off-chance you succeed, things can be a lot more fun.)
↑ comment by Vaniver · 2023-12-12T07:16:49.312Z · LW(p) · GW(p)
Like, you shouldn't work yourself ragged, but my guess is for most people, working on something meaningful (or at least difficult) is actually more fun and rewarding compared to the alternative of doing nothing or hedonism or whatever, even if you ultimately fail. (And on the off-chance you succeed, things can be a lot more fun.)
I think one of the potential cruxes here is how many of the necessary things are fun or difficult in the right way. Like, sure, it sounds neat to work at a geothermal startup and solve problems, and that could plausibly be better than playing video games. But, does lobbying for permitting reform sound fun to you?
The secret of video games is that all of the difficulty is, in some deep sense, optional, and so can be selected to be interesting. ("What is drama, but life with the dull bits cut out?") The thing that enlivens the dull bits of life is the bigger meaning, and it seems to me like the superstructure is what makes the bigger meaning more real and less hallucinatory.
those successes could snowball in lots of different directions pretty quickly, without much meta-level direction.
This seems possible to me, but I think most of the big successes that I've seen have looked more like there's some amount of meta-level direction. Like, I think Elon Musk's projects make more sense if your frame is "someone is deliberately trying to go to Mars and fill out the prerequisites for getting there". Lots of historical eras have people doing some sort of meta-level direction like this.
But also we might just remember the meta-level direction that was 'surfing the wave' instead of pushing the ocean, and many grand plans have failed.