We can survive

post by Oxidize · 2024-11-04T19:33:23.816Z · LW · GW · 7 comments

We subconsciously optimize for "business as usual."[1]

 

But in the back of our minds, we all know we're f*****.  
 

If AI doesn't kill us all, it will be biotech, nanotech, nuclear fallout, or a random science experiment that creates a substance or reaction not found in nature and happens to wipe us all out. In the minuscule chance technology doesn't accidentally wipe out all of humanity, there remain the problems of world governance, ignorance, and incentives at the top of society. In other words, we will wipe ourselves out intentionally, or because a bureaucratic fool holds the super-AI world-destruction button. From my understanding, we currently live in an age of international anarchy.[2] Inherent competition between states, and security incentives that drive ever-greater security resource allocation, create an unstoppable prisoner's dilemma in which Moloch's army marches us all into extinction.

 

Additionally, nations around the world are trending toward totalitarian governance, likely because the drive for power acquisition and control is inherent to human nature itself. As nations grow more authoritarian, innovation and individualism will decrease. In other words, even in the unlikely event that human society makes it past the Fermi paradox's technology filter, humanity will still implode as a consequence of bureaucratic patterns caused by the nature of operational complexity[3] and middle-management paradoxes.

 

So the Overton window[4] for the innovation necessary to bring humanity past this filter is quickly closing.

Relative to human extinction, I take an optimist's perspective.

 

I think all humans are stupid, and the butterfly effect[5] is one of many heavily underestimated facets of reality. 

 

As a society, we over-optimize for short-term results and routinely fall prey to simple psychological fallacies and macro-directional inaccuracies[6] that could easily be corrected with simple awareness of the relevant information.
 

I believe that humanity is operating at less than 1/10,000th of its realistic capacity.

 

A systematized[7] approach to decision making, learning, opportunity vehicles, intelligent collaboration, self-discipline, and actionable approaches to EA and achievement of one's personal goals would exponentially increase society's productive output.

 

I believe that intelligent people are the correct system for saving the world, provided they are given the intellectual tools necessary to rapidly improve and be efficient in their actions.

 

I am not native to this community; I found it a week ago, after years of isolation from people who think anything like I do. I'm making this post because I believe in the results of collaboration between intelligent people, and I believe building in public is a much more efficient vehicle for proper systems creation: it allows high-level[8] systems to be altered in real time based on early adopters'[9] experience[10], which can be somewhat generalized to the broader market[11] by optimizing for estimated discrepancies in goodwill[12] & expectations[13].

 

I have a marketing background, and from a marketing perspective it is very possible to move the layman from his current general nonchalance and ignorance to a simple but accurate understanding of the problems currently facing society. With my current understanding of human psychology and sociology, it is even possible to make laymen care as much as, if not more than, we do over a 40-year time horizon. As unknown unknowns[14] reveal themselves, I estimate that this conservative time horizon could shrink to something more relevant to humanity's current needs.

 

Words are cheap, and it is easy to talk in summaries, theories, and ideals.


I am presenting a high-level theory on the functional realities[15] associated with the problems we're facing: what will solve them, what can be done from a skills perspective, a learning[16] perspective, and an unknown-unknowns perspective, and what actions I believe I can take to facilitate the greater movement that needs to happen if we want our children to ever grow up.

Tear it apart as much as you can. I intend on spending the next few years bringing these ambitions to fruition.

If my premise is flawed, my efforts would be meaningless to the EA community.

 

The goal of the post linked below is to provide a rational basis for the claim I made at the beginning of this post: that we can survive.

 

Breaking Beliefs about saving the world

 

  1. ^

    The theory that society will not change drastically within the next few decades as a consequence of technology. (More of a psychological bias, less of a theory)

  2. ^

The theory that morality from an international perspective is different from morality from a domestic/national perspective because of the inherent lack of accountability, state incentives toward security, ambiguity of resources & intentions, cultural discrepancies, and the reign of Moloch.

  3. ^

    The concept describing the increasing difficulty of managing large-scale systems/operations.

    As things grow, founder intentions become diluted, and the impact of unknown unknowns becomes unmanageable.


    Additionally, the ability of high-level management to control things like hiring practices, systems implementation, low-level worker incentives, environment, etc. decreases.

  4. ^

    The fading period of time before a change becomes impossible.

  5. ^

    (Leverage, compounding returns, scale, branding)

    Ex 1. A YouTube video is an example of a compounding return that operates over a long time horizon. The first day you post a video, you will have 0 views. By the 100th day of posting, your first video might have 10k views and your 100th video 1k views.

    The people you've influenced with your video now make every decision with the information you've given them in the back of their minds. Additionally, if your video is good, they will tell their friends about it, and YouTube will recommend it to more people, taking it from 10k to 200k views. And if your video was compelling from a functional-systems-psychology perspective of the "long-term mind," then you may have influenced 3 people to start posting videos, exactly as you are today, in 3 years.

    As a contrasting case, you could talk to some random person, say everything the same as in the video, and instead of changing 10k people's worldviews, you change one person's worldview.
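    The arithmetic behind this footnote can be sketched in a few lines. The numbers below are illustrative assumptions only (a constant per-day view base and a fixed compounding rate), not real YouTube data; the point is simply that published artifacts keep accruing reach while a one-off conversation does not:

    ```python
    def total_views(days: int, base_daily_views: float = 1.0, growth: float = 1.05) -> float:
        """Total views across all uploads after `days` days of daily posting.

        Each day's upload keeps accruing views after it goes live; older
        uploads have had more time to compound (recommendations, sharing).
        All parameters are hypothetical assumptions for illustration.
        """
        views = 0.0
        for day in range(days):
            age = days - day  # how long this upload has been live, in days
            views += base_daily_views * (growth ** age)
        return views

    # A one-off conversation reaches a fixed audience (say, one person),
    # while a published artifact's reach compounds with time:
    reach_one_off = 1
    reach_published = total_views(100)  # grows super-linearly in `days`
    ```

    The exact rate doesn't matter; any growth factor above 1 makes the published route dominate the one-off route over a long enough horizon, which is the footnote's point about leverage and compounding.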

  6. ^

    Wasted efforts from a high-level perspective. Typically caused by ignorance of the concept of opportunity cost[17] and not knowing what you don't know/what exists.


    Example: An elite hacker decides to quit coding and become a watercolor painter.

  7. ^

    System = a combination of smaller parts that serves a function different from the smaller parts themselves.

    Systematized = Turned into a process/organized structure that is repeatable/efficient.

  8. ^

    Terming that makes broad generalizations with the goal of directing energy within the given domain.

  9. ^

    A subset of a population that has goodwill to spend and is willing to take a desired action even without the perception of short-term selfish gain. (Typically used in a business context.)

  10. ^

    Experience in this context refers to consumer convenience, utility, and value as opposed to the original creators of a thing. (A concept illustrated in UX design fields)

  11. ^

    The public (In a given context) | The target audience

  12. ^

    From a functional systems psychology perspective:

    Goodwill refers to the psychological currency that results in a state of being willing to give more than you get until the currency expires.

    Goodwill is owing someone | wanting to give | wanting to contribute 

    Goodwill can be quantified by the degree to which a person is willing to sacrifice for a choice that does not result in (typically short-term) selfish gain.

  13. ^

    Functional systems psychology perspective:

    Expectations refer to a system within the brain that estimates the future reward of a given action, and to how that system, in collaboration with other systems, creates distortions in your perception relative to satisfaction & suffering.

    (See Ending Ignorance for deeper context on reward, perception, satisfaction, and suffering.)

  14. ^

    What we don't know about what we don't know exists.

    Ex. Cavemen did not have the context with which to conceptualize aliens, because their focus was on material needs and they were unaware outer space existed.

  15. ^

    Important functions that are part of a system or concept | 80/20 rule for directional accuracy/efficiency | Typically expressed from a high-level perspective in this context.

  16. ^

    High-level term for how the brain changes. Includes acquisition of knowledge, skills, beliefs, traits, tendencies, intellectual capacities (Think from an agency perspective. Ex is "processing power") etc.

  17. ^

    The concept that for any action you take or don't take, you are losing something as well as gaining something. This theory implies that prioritization is undervalued in society, and that limited resources (i.e., time, attention, energy, capital) should be allocated efficiently. Just because something is a good opportunity does not mean it is the opportunity you should choose.

    (Believe it or not, most people don't think this way and are oblivious to the concept of opportunity cost)

     

7 comments

Comments sorted by top scores.

comment by Mitchell_Porter · 2024-11-05T03:09:18.297Z · LW(p) · GW(p)

If I understand you correctly, you want to create an unprecedentedly efficient and coordinated network, made out of intelligent people with goodwill, that will solve humanity's problems in theory and in practice? 

Replies from: Oxidize
comment by Oxidize · 2024-11-05T13:55:26.213Z · LW(p) · GW(p)

Correct. It lacks tactical practicality right now, but I think that from a macro-directional perspective, it's sensible to align all of my current actions to that end goal. And I believe there is a huge demand among business-minded intellectuals and ambitious people for a community like this to be created.

comment by Oxidize · 2024-11-05T14:00:04.891Z · LW(p) · GW(p)

Could I get some constructive criticism about why I'm being downvoted? It would be helpful for the sake of avoiding the same mistakes in the future.

Replies from: abandon
comment by dirk (abandon) · 2024-11-08T00:10:03.289Z · LW(p) · GW(p)

I didn't vote, but one possible flaw that strikes me is that it's not as concrete as I'd like it to be—after reading the post, I'm still not clear on what precisely it is that you want to build.

Replies from: Oxidize
comment by Oxidize · 2024-11-08T20:22:58.686Z · LW(p) · GW(p)

Oh, I linked the wrong thing. I would downvote this too. Sorry about setting an expectation and then not fulfilling it.

Edit: I fixed the link at the end of the post.

It sucks that I have to wait a week before posting anything again though because I made a simple mistake. I guess I'll just have to hope I don't mess up again in the future.

Replies from: notfnofn
comment by notfnofn · 2024-11-09T19:48:11.414Z · LW(p) · GW(p)

quick comment: I like the content of the google doc that I've read (so far) but I only clicked on it at all because of this comment (and it's kind of ugly so I may not have even read it if I clicked on it). Out of curiosity, why couldn't it have been included in the post itself?

(edit: I don't think "Currently in early stages, so you will need to be sharp and knowledgeable to get the gist of my intentions" is a good idea. It sends signals of unjustified arrogance, even if it ends up being true)

Replies from: Oxidize
comment by Oxidize · 2024-11-10T19:45:05.778Z · LW(p) · GW(p)

Thanks for commenting.

I didn't include the doc's contents in the post because I thought it would make the post too long and had a different main idea, so I figured it would be better to make two separate posts. I can't now, because of the automatic rate-restriction, but maybe it would've been a better post if I had included the contents of the linked doc in the post itself.

I'm realizing that I packed an unusually large amount of information into a single post, attempted to fill gaps in information only with links & footnotes that take significant time to read, and made little effort to give readers the motivation to read them.

In my next post, I'll give a better reason to read, and I'll be more thorough in clarifying my positions & claims.

I also re-read the comment you're referring to as if someone else had written it, and I see what you mean. I edited it to "Currently in early phases, so forgive me for linking to a series of incomplete thoughts." Hopefully that sets expectations low without appearing arrogant or condescending.