How to avoid death by AI.

post by Krantz · 2024-07-23T01:59:54.339Z · LW · GW · 12 comments


If we continue to scale ML systems, we will all perish.

If you disagree with that, you should probably be reading Eliezer's work instead.

If you are caught up on his work, then I have something new for you to think about.

 

One solution to this problem would be to teach everyone on the planet the full set of reasons Eliezer holds this position.  That would be the 'humanity grows up and decides not to build AI' future.

That seems like an intractable task.  Most people do not care about those reasons.  They don't care about learning what they can do to alter the trajectory of our future.  They have no incentive to comprehend what you want them to comprehend.  We can't force knowledge into people's heads.

How on Earth could we possibly educate so many people about such a nuanced topic?

How could we verify that they really understand?

You give them something they do care about: money.

If you pay individuals to prove they understand something (in a way that genuinely verifies understanding), you create a function that takes money as input and outputs public comprehension of whatever topics are incentivized.
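
To make that function concrete, here is a minimal sketch (hypothetical names throughout; nothing in this post specifies an implementation): sponsors attach money to a proposition, and anyone who passes a comprehension check draws a payout while leaving a public record.

```python
# A hypothetical sketch, not a real API: sponsors fund propositions,
# verified comprehension draws a payout and leaves a public record.

ledger = {}  # public record: (user, proposition) -> comprehension verified


def fund_proposition(pool, proposition, amount):
    """A sponsor deposits money to incentivize comprehension of a proposition."""
    pool[proposition] = pool.get(proposition, 0.0) + amount


def claim_reward(pool, user, proposition, passed_verification, payout=1.0):
    """Pay a user who demonstrably understands the proposition.

    `passed_verification` stands in for whatever check 'actually works';
    designing that check robustly is the hard part of the proposal.
    """
    if passed_verification and pool.get(proposition, 0.0) >= payout:
        pool[proposition] -= payout
        ledger[(user, proposition)] = True  # the comprehension is now public
        return payout
    return 0.0


pool = {}
fund_proposition(pool, "Scaling ML systems unchecked is an existential risk.", 100.0)
print(claim_reward(pool, "alice", "Scaling ML systems unchecked is an existential risk.", True))
```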

That's something we really need right now.  It would serve the function of a 'fire alarm' not only for Eliezer's message, but for any information any person wants another person to consider.

This can be done by constructing a decentralized collective intelligence that rewards individuals for using it.

If you aren't familiar with collective intelligence, do not mistake it for artificial intelligence.

It is a completely different field of computer science.  (https://cci.mit.edu/)

It is a paradigm shift from a self-contained intelligent system that can evolve beyond us to a system that has humans as its parts and requires our interactions to grow.

It's a shift from trying to get a machine to twirl a pencil all by itself to getting a machine that can coordinate billions of people to solve much more complex problems.

That's intelligence also.

There are projects in this spirit (Community Notes, Wikipedia, Research Hub, prediction markets, Anthropic's work with the Collective Intelligence Project), but they all fall short.

What we actually need is a place where people intentionally go to receive an education, read the news, verify the truth and get rewarded financially for it (without needing to invest capital).

I think people really underestimate the effect such a mechanism would have on society.

Or they just haven't thought about it carefully enough.

 

 

I believe I have an algorithm (from work on a GOFAI project in 2010) that would scale the effectiveness of collective intelligence by orders of magnitude.  Unfortunately, it would also help LLMs reason more effectively and possibly piss off intelligence agencies as much as Bitcoin pissed off the banks, so it's not online anywhere.

I am posting this on LW because I don't currently have a way to stamp my intellectual property onto a blockchain in such a way that only Eliezer can see it and I can pay him to demonstrate that he has thought about it.
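
A minimal sketch of what such a stamp could look like, assuming the standard approach of a hash commitment plus encryption to a single reader's public key (the document bytes below are a placeholder, and only Python's standard library is used):

```python
import hashlib

# Placeholder for the private writeup.
document = b"...the algorithm writeup would go here..."

# Commit: publish only this digest (in a blockchain transaction, a tweet,
# anywhere timestamped). It later proves the document existed today,
# without revealing a word of it.
commitment = hashlib.sha256(document).hexdigest()
print(commitment)

# To let exactly one reader see the document in the meantime, encrypt it
# to that reader's public key (with GPG, age, or libsodium) and send the
# ciphertext; the reader can hash the decrypted file and check it against
# the published digest.
```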

It wasn't the comet that killed everyone in "Don't Look Up"; it was the fact that they hadn't yet built a functional decentralized social reasoning platform where everyone earned their living by learning and verifying things.  If they had built that, they could have communicated the problem, negotiated solutions, and survived.  That's the lesson we were supposed to draw from the film.

If we had built what I'm talking about back in 2010, when I came up with it, billions of people would share Eliezer's concerns today.

That's what we need to build.

 

 

You can listen to me talk about this for a couple hours here:

https://www.youtube.com/watch?v=eNirzUg7If8

https://x.com/therealkrantz/status/1739768900248654019

https://x.com/therealkrantz/status/1764713384790921355

 

If learning by wagering is more your thing, you can do that here.

https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6

https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6 

https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 

https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

12 comments

Comments sorted by top scores.

comment by Donald Hobson (donald-hobson) · 2024-08-22T20:11:48.386Z · LW(p) · GW(p)

A lot of the key people are CEOs of big AI companies making vast amounts of money, and busy people with lots of money are not easy to tempt with financial rewards for jumping through whatever hoops you set out.

comment by ProgramCrafter (programcrafter) · 2024-07-29T20:15:17.962Z · LW(p) · GW(p)

This proposal, done without care, might turn out as a variation on advertisements, which are usually delivered to phone-tapping farms instead of to people who would be interested in a product/concept. If you have a provably better solution, you'd outcompete existing ad systems!

comment by quila · 2024-07-23T02:16:31.469Z · LW(p) · GW(p)

I am posting this on LW because I don't currently have a way to stamp my intellectual property onto a blockchain in such a way that only Eliezer can see it and I can pay him to demonstrate that he has thought about it.

If you want your algorithm to be seen by Eliezer specifically, I'd suggest e.g. emailing MIRI or messaging a member. (If you're concerned about surveillance, also ask for a public key first.)

(Note: my prior on this kind of claim being true is low, but this is the advice for in case it's true)

Replies from: Krantz, Krantz
comment by Krantz · 2024-08-18T09:50:33.120Z · LW(p) · GW(p)

Just an update.  I did email MIRI and had a correspondence with a representative.

Unfortunately, they were not willing to pass my information on to Eliezer or provide me with a secure channel for further details.

I was encouraged to simply publish my infohazardous material on LW and hope that it becomes 'something other people are excited about'; then they would take a serious look at it.

comment by Krantz · 2024-07-23T15:12:22.873Z · LW(p) · GW(p)

Thanks.  Done.

I'll let you know if I get a reply.

comment by chkno · 2024-10-14T09:10:33.248Z · LW(p) · GW(p)

Questions:

  • If this system is paying large chunks of the population salary-level payments, who is putting trillions of dollars into this system to buy belief updates?
  • Is this asymmetric? Is it more efficient at spreading truth than disinformation/propaganda?  How?
  • What prevents folks from making user-agent bots that automate interaction with this system, pretending to be humans & extracting the cash, with revenue shared between the developers & users of the bots?
  • Around here, participating in prediction markets is high-status, but in the broader culture this often scans as 'gambling', which is low-status.  This system seems like it could be perceived as a mix of arguing on the internet, receiving public support, consuming propaganda, (and maybe using prediction markets?), all of which are low-status.  What's the narrative for why high-status people should use this system / how people will raise their status by sharing that they participate in this system (in a culture where prediction-markets-are-good narratives have made so little progress)?
comment by Krantz · 2024-08-25T00:46:19.520Z · LW(p) · GW(p)

Here are some common questions I get, along with answers.

How do individuals make money?

By evaluating arguments, line by line, in a way where their evaluations are public. They do this on a social media platform (similar to X), where each element in the feed is a formal proposition and the user has two slider bars, "confidence" and "value", both ranging from 0 to 1.
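
As a minimal sketch of what one such evaluation might look like (field names are illustrative guesses; the post only specifies the two sliders and their range):

```python
from dataclasses import dataclass


@dataclass
class Evaluation:
    """One user's public evaluation of one formal proposition.

    Field names are illustrative guesses; the post only specifies the
    two sliders and their 0-to-1 range.
    """
    user: str
    proposition: str
    confidence: float  # 0.0 = certainly false, 1.0 = certainly true
    value: float       # 0.0 = not worth attention, 1.0 = maximally important

    def __post_init__(self):
        for name in ("confidence", "value"):
            x = getattr(self, name)
            if not 0.0 <= x <= 1.0:
                raise ValueError(f"{name} must be in [0, 1], got {x}")


ev = Evaluation("alice", "Scaling ML systems unchecked is an existential risk.", 0.9, 1.0)
```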

Why would someone spend their time evaluating arguments?

Because others are willing to pay them. This can be (1) an individual trying to persuade another individual of a proposition, (2) a group or organization dedicating capital to a specific set of propositions, or (3) an individual choosing where publicly funded capital ought to be allocated (the primary reason for the "value" metric).

Why would others pay them?

Because pointing out their publicly documented beliefs that contradict one another is a good way to demonstrate to the public why they are obviously wrong. This is fundamentally the same reason analytic philosophers have other philosophers write out arguments in the first place. It's easy to point out where they are wrong.
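
A minimal sketch of that contradiction-spotting step, assuming the platform stores each user's confidence per proposition and maintains explicit links between propositions and their negations (both assumptions on my part, not stated above):

```python
def find_contradictions(ledger, negation_of, threshold=0.8):
    """Yield (user, p, not_p) where a user is highly confident in both
    a proposition and its linked negation.

    ledger:      (user, proposition) -> confidence in [0, 1]
    negation_of: proposition -> its negation (assumed to be maintained
                 by the platform; the post doesn't say how propositions
                 are linked, so this is a stand-in)
    """
    for (user, p), conf in ledger.items():
        not_p = negation_of.get(p)
        if not_p is not None and conf >= threshold and ledger.get((user, not_p), 0.0) >= threshold:
            yield user, p, not_p


ledger = {("bob", "P"): 0.95, ("bob", "not P"): 0.90}
print(list(find_contradictions(ledger, {"P": "not P"})))  # [('bob', 'P', 'not P')]
```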

Why would we want to pay someone for publicly making themselves look obviously wrong?

Because we want them to change their beliefs. Imagine, here, anyone whose mind you've wanted to change.

Why won't there be many contradictory beliefs held by anyone?

Because they don't want to be obviously wrong. Think politicians.

How much money goes to which arguments?

The market decides.

What makes this system different from other communication tools?

It is capable of identifying the various cruxes we never seem to reach and directing discourse toward them.

There are a number of influential intellectual leaders I've been wanting to speak with for many years, because I want to change their beliefs. Our social communication system is broken. If I had access to a ledger that they rigorously used to document their accepted beliefs and sound lines of reasoning, I wouldn't need their presence to have an effective argument with them. I could simply insert the "missing proposition" into such a system and 'buy them a wager' on the truth they're not looking at.
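
A minimal sketch of 'buying someone a wager' under the same hypothetical ledger: insert the missing proposition and attach a bounty payable to the named person for publicly evaluating it (all names illustrative):

```python
def buy_wager(offers, sponsor, target, proposition, amount):
    """Sponsor pays `target` to publicly evaluate `proposition`.

    The proposition may be new to the system; this is how a 'missing
    proposition' would be inserted into someone's feed.
    """
    offers.append({
        "sponsor": sponsor,
        "target": target,
        "proposition": proposition,
        "amount": amount,
        "claimed": False,
    })


offers = []
buy_wager(offers, "krantz", "influential_leader",
          "Premise 3 of your public argument contradicts belief B on your ledger.", 50.0)
print(offers)
```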

This project is really important for a different reason, though. It can also be used as an interpretable, aligned foundation of truth for a symbolic AI, one that could scale an alignment attractor faster than a capabilities attractor.

I think this could be the one thing capable of pulling the fire alarm Eliezer has been talking about for 20 years. As in, I think society would convey big, important ideas better (the lesson we were supposed to learn from "Don't Look Up") if we had a system that gave individuals control over the general attention mechanism of society.

There is a ton more. This is going to need to be decentralized, humanity-verified, and to meet a couple of other security requirements as well. It will take a huge collaboration between the AI sector and crypto. I can't afford a ticket to any fancy conferences where they are building crypto cities or thinking about collective intelligence formally like this, let alone trying to merge it into a project like Lenat or Hillis had. If anyone knows someone in the AI safety grant community who would be willing to look at what I'm not willing to put online, please share my information with them.

comment by closedAI · 2024-07-23T14:29:52.910Z · LW(p) · GW(p)

Isn't this what the CCP has been doing all along? In China, individuals vie for higher education and status by regurgitating Xi's communist bullshit all day long, starting from the tender age of 7. It's a system where compliance and repetition are rewarded, not genuine understanding or critical thinking.

Is this what you mean/want?

Replies from: Krantz
comment by Krantz · 2024-07-23T15:33:11.051Z · LW(p) · GW(p)

Yes, it does share the property of 'being a system that is used to put information into people's heads'.

There's one really important distinction.

One is centrally controlled (like the dollar).

One is decentralized (like Bitcoin).

There's quite a bit of math that goes into proving the difference.

Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty/variation from the norm that later becomes consistent.

It would feel more like a prediction market than Khan Academy.  Specifically, it won't be 'teaching you a curriculum'.  You will be exploring and voting on things.  It will do a good job of persuading you of what to go vote on.  It will 'recommend' the thing that will make you the most money to think about.

You see, this is where the money that you earn comes from.  If people want something thought about, they can incentivize those propositions.  Religious leaders will want you to learn about their religion; phone companies will want you to learn about their new phone options.  Your neighbors might want you to watch the news.  Your parents might want you to learn math.  Your wife might want you to know you need to take out the trash.  People pay other people to demonstrate that they understand things.
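
Under the simplest reading, the recommendation rule then looks something like this sketch: surface, for each user, the unevaluated propositions with the most sponsor money attached (field names hypothetical):

```python
def recommend(incentives, already_evaluated, top_n=3):
    """Rank the propositions a user hasn't evaluated yet by attached bounty.

    incentives:        proposition -> total money sponsors have attached
    already_evaluated: set of propositions this user has evaluated
    """
    candidates = [(p, m) for p, m in incentives.items() if p not in already_evaluated]
    candidates.sort(key=lambda pm: pm[1], reverse=True)
    return candidates[:top_n]


incentives = {"new phone plan": 2.0, "take out the trash": 5.0, "Eliezer's argument": 40.0}
print(recommend(incentives, already_evaluated={"take out the trash"}))
```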

People will want to put money into this.

It's marketing, with an actual receipt.

 

The great thing is, you get to choose what you read.

comment by Krantz · 2024-07-23T02:03:08.211Z · LW(p) · GW(p)

You can listen to me talk about this for a couple hours here:

https://www.youtube.com/watch?v=eNirzUg7If8

https://x.com/therealkrantz/status/1739768900248654019

https://x.com/therealkrantz/status/1764713384790921355

 

If learning by wagering is more your thing, you can do that here.

https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6 

https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 

https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

Replies from: Seth Herd
comment by Seth Herd · 2024-07-25T12:45:23.023Z · LW(p) · GW(p)

Asking people to listen to a long presentation is a bigger ask than reading a concise writeup with more detail than the current post. Got anything in between?

Replies from: Krantz
comment by Krantz · 2024-07-25T17:04:13.120Z · LW(p) · GW(p)

If you read the description here, you can get a rough idea of what I'm talking about.

Or I would recommend the Manifold predictions and comments.  An intelligent person should be able to discern the scope of the project efficiently from there.

But no, I do not have a published paper of the algorithmic logistics (the portion similar to the work of Doug Lenat and Danny Hillis; the part that performs the same function as Community Notes; the part that looks at everyone's constitution of truth and figures out what to recommend to each other) on the internet, out of concern that it may be infohazardous.