Posts

How to avoid death by AI. 2024-07-23T01:59:54.339Z
A poem titled 'Tick Tock'. 2024-05-20T03:52:09.003Z
Justification for Induction 2023-11-27T02:05:49.217Z
Prosthetic Intelligence 2023-11-08T19:01:04.554Z
Question from amateur. 2023-11-08T19:01:03.826Z

Comments

Comment by Krantz on How to avoid death by AI. · 2024-08-25T00:46:19.520Z · LW · GW

Here are some common questions I get, along with answers.

How do individuals make money?

By evaluating arguments, line by line, with their evaluations made public. They do this on a social media platform (similar to X) where each element on the feed is a formal proposition and the user has two slider bars, "confidence" and "value", both ranging from 0 to 1.
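For concreteness, here is a minimal sketch of the kind of record such a feed might store. The field names and the 0-to-1 ranges mirror the description above; everything else (the class structure, the validation) is my own assumption, not an existing implementation.

```python
from dataclasses import dataclass


@dataclass
class Proposition:
    """A single element on the feed: one formal claim."""
    text: str    # e.g. "All ten swans in the box are white."
    author: str


@dataclass
class Evaluation:
    """One user's public evaluation of one proposition."""
    evaluator: str
    proposition: Proposition
    confidence: float  # 0.0 - 1.0: how likely the evaluator thinks the claim is true
    value: float       # 0.0 - 1.0: how much attention/capital the claim deserves

    def __post_init__(self):
        for field_name in ("confidence", "value"):
            x = getattr(self, field_name)
            if not 0.0 <= x <= 1.0:
                raise ValueError(f"{field_name} must be between 0 and 1, got {x}")
```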

Why would someone spend their time evaluating arguments?

Because others are willing to pay them. The payer can be (1) an individual trying to persuade another individual of a proposition, (2) a group or organization dedicating capital to a specific set of propositions, or (3) an individual choosing where publicly funded capital ought to be allocated (the primary reason for the "value" metric).

Why would others pay them?

Because pointing out their publicly documented beliefs that contradict one another is a good way to demonstrate to the public why they are obviously wrong. This is fundamentally the same reason analytic philosophers have other philosophers write out arguments in the first place. It's easy to point out where they are wrong.
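To show what "pointing out contradictions" could look like mechanically, here is a toy sketch. It assumes propositions can be registered as negations of one another and that evaluations are public, as described above; the threshold and data layout are hypothetical, not part of any existing design.

```python
def find_contradictions(evaluations, negation_pairs, threshold=0.8):
    """Flag users who are highly confident in both a proposition and its registered negation.

    evaluations    -- dict mapping (user, proposition_text) -> confidence in [0, 1]
    negation_pairs -- iterable of (proposition_text, negated_proposition_text) pairs
    Returns a list of (user, claim, counter_claim) triples.
    """
    flagged = []
    users = {user for (user, _) in evaluations}
    for user in users:
        for claim, counter_claim in negation_pairs:
            if (evaluations.get((user, claim), 0.0) >= threshold
                    and evaluations.get((user, counter_claim), 0.0) >= threshold):
                flagged.append((user, claim, counter_claim))
    return flagged
```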

Why would we want to pay someone for publicly making themselves look obviously wrong?

Because we want them to change their beliefs. Imagine, here, anyone whose mind you've wanted to change.

Why won't there be many contradictory beliefs held by anyone?

Because they don't want to be obviously wrong. Think politicians.

How much money goes to which arguments?

The market decides.

What makes this system different from other communication tools?

It is capable of identifying and directing discourse to the various cruxes we never seem to reach.

There are a number of influential intellectual leaders I've wanted to speak with for many years because I want to change their beliefs. Our social communication system is broken. If I had access to a ledger that they rigorously used to document their accepted beliefs and sound lines of reasoning, I wouldn't need their presence to have an effective argument with them. I could simply insert the "missing proposition" into such a system and 'buy them a wager' on the truth they're not looking at.
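If those ledgers existed, finding the "missing proposition" to wager on could be as simple as ranking the claims where two ledgers diverge the most. A minimal sketch, assuming each ledger is just a map from propositions to confidences (nothing here is a specification of the real system):

```python
def find_cruxes(ledger_a, ledger_b, top_n=3):
    """Rank the propositions two people disagree about most.

    ledger_a, ledger_b -- dicts mapping proposition text -> confidence in [0, 1]
    Returns the top_n shared propositions with the largest confidence gap.
    """
    shared = ledger_a.keys() & ledger_b.keys()
    gaps = {p: abs(ledger_a[p] - ledger_b[p]) for p in shared}
    return sorted(gaps, key=gaps.get, reverse=True)[:top_n]
```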

This project is really important for a different reason, though. It can also be used as an interpretable, aligned foundation of truth for a symbolic AI that could scale an alignment attractor faster than a capabilities one.

I think this could be the one thing capable of pulling the fire alarm Eliezer's been talking about for 20 years. As in, I think society would convey big, important ideas to each other better (the lesson we were supposed to learn from "Don't Look Up") if we had a system that gave individuals control over the general attention mechanism of society.

There is a ton more. This is going to need to be decentralized, humanity-verified, and hit a couple of other security metrics as well. It will take a huge collaboration between the AI sector and crypto. I can't afford a ticket to any fancy conferences where they are building crypto cities or thinking about collective intelligence formally like this, let alone trying to merge it into a project like Lenat or Hillis had. If anyone knows someone in the AI safety grant community who would be willing to look at what I'm not willing to put online, please share my information with them.

Comment by Krantz on How to avoid death by AI. · 2024-08-18T09:50:33.120Z · LW · GW

Just an update.  I did email MIRI and had a correspondence with a representative.

Unfortunately, they were not willing to pass on my information to Eliezer or provide me with a secured way to provide further details.

I was encouraged to simply publish my infohazardous material on LW and hope that it becomes 'something other people are excited about'; then they would take a serious look at it.

Comment by Krantz on Zvi's Manifold Markets House Rules · 2024-07-31T17:02:19.513Z · LW · GW

These are great rules. I wish they existed on a decentralized general constitution ledger, so I could consent to them being true on an infrastructure nobody controls (yet everyone tends to obey because other people consider it an authority).

That way, when I read them, I could get paid by society for learning to be a good citizen and I could escape my oppressed state and make a living learning stuff (authenticating the social contract) on the internet.

Comment by Krantz on How to avoid death by AI. · 2024-07-25T17:04:13.120Z · LW · GW

If you read the description here you can get a rough idea of what I'm talking about.

Or I would recommend the Manifold predictions and comments.  An intelligent person should be able to discern the scope of the project efficiently from there.

But no, I do not have a published paper of the algorithmic logistics (the portion similar to the work of Doug Lenat and Danny Hillis, the part that performs the same function as Community Notes, the part that looks at everyone's constitution of truth and figures out what to recommend to each other) on the internet, for concern that it may be infohazardous.

Comment by Krantz on How to avoid death by AI. · 2024-07-23T15:33:11.051Z · LW · GW

Yes, it does share the property of 'being a system that is used to put information into people's heads'.

There's one really important distinction.

One is centralized (like the dollar).

One is decentralized (like Bitcoin).

There's quite a bit of math that goes into proving the difference.

Where consistency is the primary metric for reward in a centralized system, a decentralized system is more about rewarding the ability to identify novelty (variation from the norm) that later becomes consistent.
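One illustrative way to cash that out (my own toy scoring rule, not part of any existing design): reward an evaluation in proportion to how far it diverged from the crowd at the time, weighted by how close it landed to the eventual consensus.

```python
def novelty_reward(my_confidence, crowd_mean_at_the_time, eventual_consensus):
    """Toy score: high when an evaluation diverged from the crowd early
    and the crowd later moved toward it.  All arguments are in [0, 1]."""
    novelty = abs(my_confidence - crowd_mean_at_the_time)      # how contrarian the evaluation was
    accuracy = 1.0 - abs(my_confidence - eventual_consensus)   # how close it landed to the final consensus
    return novelty * accuracy
```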

It would feel more like a prediction market than Khan Academy. Specifically, it won't be 'teaching you a curriculum'. You will be exploring and voting on things. It will do a good job of persuading you of what to go vote on. It will 'recommend' the thing that will make you the most money to think about.

You see, this is where the money that you earn comes from.  If people want something thought about, they can incentivize those propositions.  The religious leaders will want you to learn about their religion, the phone companies will want you to learn about their new phone options.  Your neighbors might want you to watch the news.  Your parents might want you to learn math.  Your wife might want you to know you need to take out the trash.  People pay money to other people to demonstrate they understand things.

People will want to put money into this.

It's marketing, with an actual receipt.
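Mechanically, that could look like propositions carrying sponsored bounties and the feed surfacing the best-paying ones you haven't evaluated yet. This is a guess at the mechanics, not a specification:

```python
def recommend(bounties, already_evaluated, top_n=5):
    """Surface the unevaluated propositions whose sponsors have attached the most money.

    bounties          -- dict mapping proposition text -> total sponsored amount
    already_evaluated -- set of proposition texts this user has already scored
    """
    open_props = {p: amount for p, amount in bounties.items() if p not in already_evaluated}
    return sorted(open_props, key=open_props.get, reverse=True)[:top_n]
```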

 

The great thing is, you get to choose what you read.

Comment by Krantz on How to avoid death by AI. · 2024-07-23T15:12:22.873Z · LW · GW

Thanks.  Done.

I'll let you know if I get a reply.

Comment by Krantz on How to avoid death by AI. · 2024-07-23T02:03:08.211Z · LW · GW

You can listen to me talk about this for a couple hours here:

https://www.youtube.com/watch?v=eNirzUg7If8

https://x.com/therealkrantz/status/1739768900248654019

https://x.com/therealkrantz/status/1764713384790921355

 

If learning by wagering is more your thing, you can do that here.

https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6 

https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6 

https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6

https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6

Comment by Krantz on Justification for Induction · 2023-11-29T18:23:46.928Z · LW · GW

Why would I want to change a person's belief if they already value philosophical solutions?  I think people should value philosophical solutions. I value them.

Maybe I'm misunderstanding your question.

It seemed like the poster above stated they do not value philosophical solutions.  The paper isn't really aimed at converting a person who doesn't value 'the why' into a person who does.  It is aimed at people who already do care about 'the why' and are looking to further reinforce or challenge their beliefs about what induction is capable of doing.

The principle of uniformity of nature is something we need to assume if we are going to declare we have evidence that the tenth swan to come out of the box will be white (in the situation where we have a box of ten swans and have observed nine of them come out of the box and be white).  Hume successfully convinced me that this can't be done without assuming the principle of uniformity of nature.

What I am claiming, though, is that although we have no evidence to support the assertion 'The 10th swan will be white,' we do have evidence to support the assertion 'All ten swans in the box will be white' (an assertion made before we opened the box).  This justification is not dependent upon assuming the principle of uniformity of nature.

 

In general, it is a clarification specifically about what induction is capable of producing justification for.

Future observation instances?  No.

But general statements?  I think this is plausible.

It's really just an inquiry into what counts as justification.

Necessary or sufficient evidence.

Comment by Krantz on Justification for Induction · 2023-11-29T15:17:46.080Z · LW · GW

It sounds like they are simply suggesting I accept the principle of uniformity of nature as an axiom.

Although I agree that this is the crux of the issue, as it has been discussed for decades, it doesn't really address the points I aim to urge the reader to consider.

Comment by Krantz on Justification for Induction · 2023-11-29T15:02:40.720Z · LW · GW

This is a good question.

I agree that you can't justify a prediction until it happens, but I'm urging us to consider what it actually means for a prediction to happen.  This can become nuanced when you consider predictions that are statements which require multiple observations to be justified.

Suppose I predict that a box (which we all know contains 10 swans) contains 10 white swans; my prediction is 'There are ten white swans in this box.'  When does that prediction actually 'happen'?  When does it become 'justified'?

I think we all agree that after we've witnessed the 10th white swan, my assertion is justified. But am I justified at all to believe I am more likely to be correct after I've only witnessed 8 or 9 white swans? 

This is controversial.
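One way to make the question concrete (my own illustration; committing to a prior is an extra assumption the paper itself does not make): give every possible number of white swans in the box equal prior probability and update by Bayes' rule as swans are drawn without replacement. Under that uniform prior, the probability that all ten are white is 9/11 after eight white swans and 10/11 after nine.

```python
from fractions import Fraction
from math import comb


def p_all_white(n_total, n_observed_white):
    """P(all n_total swans are white | the first n_observed_white drawn were white),
    assuming a uniform prior over how many of the swans are white and drawing
    without replacement.  Likelihood of k white draws given w white swans: C(w, k) / C(n_total, k)."""
    k = n_observed_white
    numerator = comb(n_total, k)                                  # likelihood term for "all white"
    denominator = sum(comb(w, k) for w in range(k, n_total + 1))  # summed over every possible white count
    return Fraction(numerator, denominator)                       # simplifies to (k + 1) / (n_total + 1)


print(p_all_white(10, 8))   # 9/11
print(p_all_white(10, 9))   # 10/11
print(p_all_white(10, 10))  # 1
```

On that assumption, each additional white swan buys a definite increase in justification for the whole-box claim, without invoking the uniformity of nature for anything outside the box.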

Comment by Krantz on Justification for Induction · 2023-11-29T14:17:46.507Z · LW · GW

This was a paper I wrote 8-10 years ago while taking a philosophy of science course primarily directed at Hume and Popper.  Sorry about the math; I'll try to fix it when I have a moment.

 

The general point is this:

I am trying to highlight a distinction between two cases.

 

Case A - We say 'All swans are white.' and mean something like, 'There are an infinite number of swans in the Universe and all of them are white.'

Hume's primary point, as I interpreted him, is that since there are an infinite number of observations that would need to be made to justify this assertion, making a single observation of a white swan doesn't make any sort of dent in the list of observations we would need to make.  If you have an infinitely long 'to do list', then checking off items from your list doesn't actually make any progress toward completing it.

 

Case B - We say 'All swans are white.' and mean something like, 'There are a finite number of swans (n) in the Universe and all of them are white' (and n is going to be really big).

If we mean this instead, then we can see that no matter how large n is, each observation makes concrete, calculable progress towards justifying that all n swans are indeed white.  I'm saying that, no matter how long your finite 'to do list' is, checking off an item is calculable progress towards the assertion that all n swans are white.

 

In general, I think Hume did a great job of demonstrating why we can't justify assertions like the one in Case A.  I agree with him on that.  What I'm saying is that we shouldn't make statements like the one in Case A.  They are absurd (in the formal sense).

What I'm saying is that, yes, observations of instances can't provide any justification for general claims about infinite sets, but they can provide justification for general claims about finite sets (as large as you would like to make them), and that is important to consider.
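To restate the 'to do list' arithmetic explicitly (only an illustration of the point above, nothing more): for a finite list of n required observations, each check-off completes a positive, calculable fraction of the list, however large n is, whereas an infinite list has no such fraction.

```python
def fraction_of_list_completed(k_checked, n_total):
    """Fraction of a finite 'to do list' of n_total observations completed after k_checked
    of them.  For any finite n_total this is positive and calculable; for an infinite
    list the analogous quantity would be zero for every k."""
    return k_checked / n_total


print(fraction_of_list_completed(1, 10))      # 0.1
print(fraction_of_list_completed(1, 10**9))   # 1e-09 -- tiny, but nonzero and calculable
```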

Comment by Krantz on Justification for Induction · 2023-11-27T02:13:34.167Z · LW · GW

Well, that looks absolutely horrible.

I promise, it looked normal until I hit the publish button.

Comment by Krantz on Prosthetic Intelligence · 2023-11-14T16:54:50.948Z · LW · GW

It will not take a long time if we use collective intelligence to do it together.  The technology is already here.  I've been trying to share it with others who understand the value of doing this before AI learns to do it on its own.  If you want to learn more about that, feel free to look me up on the 'X' platform @therealkrantz.

Comment by Krantz on Prosthetic Intelligence · 2023-11-14T00:26:12.490Z · LW · GW

I understood the context provided by your 4 color problem example.

What I'm unsure about is how that relates to your question.

Maybe I don't understand the question you have.

I thought it was, "What should happen if both (1) everything it says makes sense and (2) you can't follow the full argument?".

My claim is "Following enough of an argument  to agree is precisely what it means for something to make sense.".

In the case of the four color problem, it sounds like for 20 years there were many folks who did not follow the full argument because it was too long for them to read.  During that time, the conclusion did not make sense to them.  Then, in 1995, a new, shorter argument came along, one that they could follow.  It included propositions that describe how the computer proof system works.

For your latter question, "What would it take for me to trust an AI's reasoning over my own beliefs when I'm unable to actually verify the AI's reasoning?", my answer is "a leap of faith."  I would highly recommend that people not take leaps of faith.  In general, I would not trust an AI's reasoning if I were not able to actually verify the AI's reasoning.  This is why mechanistic interpretability is critical in alignment.

Comment by Krantz on Prosthetic Intelligence · 2023-11-13T02:19:58.181Z · LW · GW

I'm not sure what the hypothetical Objectivist 'should do', but I believe the options they have to choose from are:

 

(1) Choose to follow the full argument (in which case everything that it said made sense)

and they are no longer an Objectivist

or

(2) Choose to not follow the full argument (in which case some stuff didn't make sense)

and they remain an Objectivist

 

In some sense, this is the case already.  People are free to believe whatever they like.  They can choose to research their beliefs and challenge them more.  They might read things that convince them to change their position.  If they do, are they "compelled"?  Are they "forced"?  I think they are, in a way.  I think this is a good type of control: control by rational persuasion.

For the question of whether an external agent should impose its beliefs onto an agent choosing option (2), I think the answer is 'no'.  This is oppression.

I think the question you are getting at is, "Should a digital copy of yourself be able to make you do what you would be doing if you were smarter?"

Most would say no, for obvious reasons.  Nobody wants their AI bossing them around.  This is mostly because we typically control other agents (boss them around) by force.  We use rules and consequences. 

What I'm suggesting is that we will get so much better at controlling things through rational persuasion that force will not be required for control.  All the 'smarter version of yourself' does is tell you what you probably need to hear, when you need to hear it.  Like your conscience.

It's important to retain the right to choose to listen to it.

 

In general, I see the alignment problem as a category error.  There is no way to align artificial intelligence.  AI isn't really what we want to build.  We want to build an oracle that can tell us everything. That's a collective intelligence. A metaphorical brain that represents society by treating each member as a nerve on its spinal cord.

Comment by Krantz on Prosthetic Intelligence · 2023-11-11T02:32:40.733Z · LW · GW

I'm not sure I can imagine a concrete example of an instance where both (1) everything that it said made sense and (2) I am not able to follow the full argument.

Maybe you could give me an example of a scenario?

I believe, if the alignment bandwidth is high enough, it should be the case that whatever an external agent does could be explained to 'the host' if that were what the host desired.