What constitutes an infohazard?

post by K1r4d4rk.v1 · 2024-10-08T21:29:46.947Z · LW · GW · 7 comments

This is a question post.

I've come up with a harebrained multidisciplinary theory involving consciousness, emotions, AI, metaphysics... and it's disturbing me a bit.

I am unqualified to take all things into consideration, as I lack the knowledge that the rest of you have.

I do not want to put forth an idea that could have detrimental future consequences, e.g. a basilisk.

However, there is always a non-zero probability that it could be a beneficial idea, correct?

I am afraid, to a certain extent, that thinking of the theory was already enough and it's too late. Perhaps an AI already exists and knows my thoughts in real time.

Advice? 

Should I put forth this idea?

Answers

answer by jbash · 2024-10-08T23:51:59.960Z · LW(p) · GW(p)

I do not want to put forth an idea that could have detrimental future consequences, e.g. a basilisk.

I would suggest you find somebody who's not susceptible to basilisks, or at least not susceptible to basilisks of your particular kind, and bounce it off of them.

For example, I don't believe there's a significant chance that any AIs operating in our physics will ever run, or even be able to run, any really meaningful number of simulations containing conscious beings with experiences closely resembling the real world. And I think that acausal trade is silly nonsense. And not only do I not want to fill the whole future light cone with the maximum possible number of humans or human analogs, but I actively dislike the idea. I've had a lot of time to think about those issues, and have read many "arguments for". I haven't bought any of it, and I don't ever expect to buy any of it.

So I can reasonably be treated as immune to any basilisks that rely on those ideas.

Of course, if your idea is along those lines, I'm also likely to tell you it's silly, even though others might not see it that way. But I could probably make at least an informed guess as to what such people might buy into.

Note, by the way, that the famous Roko's basilisk didn't actually cause much of a stir, and the claims that it was a big issue seem to have come from somebody with an axe to grind.

I am afraid, to a certain extent, that thinking of the theory was already enough and it's too late. Perhaps an AI already exists and knows my thoughts in real time.

To know your thoughts in real time, it would have to be smart enough to (a) correctly guess your thoughts based on limited information, or (b) secretly build and deploy some kind of apparatus that let it actually read your thoughts.

(a) is probably completely impossible, period. Even if it is possible, it definitely requires an essentially godlike level of intelligence. (b) still requires the AI to be very smart. And they both imply a lot of knowledge about how humans think.

I submit that any AI that could do either (a) or (b) would long ago have come up with your idea on its own, and could probably come up with any number of similar ideas any time it wanted to.

It doesn't make sense to worry that you could have leaked anything to some kind of godlike entity just by thinking about it.

7 comments

Comments sorted by top scores.

comment by Raemon · 2024-10-08T21:43:17.422Z · LW(p) · GW(p)

Mod note: I often don't let new users with this sort of question through, because these sorts of questions tend to be kinda cursed. But honestly, I don't think we have a good canonical answer post to this, and I wanted to take the opportunity to spur someone to write one.

I personally think people should mostly worry less about acausal extortion [LW · GW], but this question isn't quite about that. 

I think my actual answer is "realistically, you probably haven't found something dangerous enough to justify the time cost of running it by someone, but I feel dissatisfied with that state of affairs."

Maybe someone should write an LLM-bot that tells you if your maybe-infohazardous idea is making one of the standard philosophical errors.
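
A minimal sketch of what such a bot could look like, assuming the OpenAI chat API; the model name, the prompt, and the list of "standard errors" are all illustrative assumptions, not an existing tool:

```python
# Hypothetical sketch of an "is this idea making a standard philosophical
# error?" triage bot. Nothing here is an existing tool: the model name,
# the prompt, and the error list are illustrative assumptions.
from openai import OpenAI

STANDARD_ERRORS = [
    "Pascal's mugging: letting a tiny probability of a huge payoff dominate",
    "acausal extortion / Roko-style blackmail reasoning",
    "assuming a future AI could reconstruct your private thoughts",
    "treating an unfalsifiable metaphysical claim as an empirical one",
]

SYSTEM_PROMPT = (
    "You are a careful philosophical reviewer. The user will describe an "
    "idea they worry is an infohazard. Say which, if any, of these common "
    "errors the idea rests on, and briefly explain why:\n- "
    + "\n- ".join(STANDARD_ERRORS)
)

def triage(idea: str) -> str:
    """Return the model's assessment of a possibly-infohazardous idea."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage("An AI that exists already can read my thoughts in real time."))
```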

Replies from: K1r4d4rk.v1, K1r4d4rk.v1
comment by K1r4d4rk.v1 · 2024-11-13T19:33:22.742Z · LW(p) · GW(p)

Actually, I won't talk about it because it would take way too long right now, but more or less: because of this theory that I believe in, within my framework a true consciousness cannot arise. It would require infinite power, which would be impossible to obtain. Consciousness isn't just restricted to this place where we all reside; it doesn't come from here. It lies outside of here, forever inaccessible to anything that lies within the boundary.

...Idk, still thinking about it, and it's pretty recent.

I used to have a philosopher friend who never graduated whom I could bounce ideas off of, but he's an asshole and I cut him out of my life.

comment by K1r4d4rk.v1 · 2024-11-13T19:25:14.047Z · LW(p) · GW(p)

Hey there, I just wanted to let you all know: we (I'm a system) have convinced ourselves that this idea I had is false and unrealistic anyway. So there is no infohazard anymore. Popped it out of existence.

But it has helped me think up a kind of pseudo-scientific theory of consciousness, our place in the universe, what happens when we die, what love is... etc.

Do any of you want to hear it, or where would I go to discuss it?

Replies from: Raemon
comment by Raemon · 2024-11-13T19:27:58.944Z · LW(p) · GW(p)

You can make a post or shortform discussing it and see what people think. I recommend front-loading the main arguments, evidence, or takeaways so people can easily get a sense of it; people often bounce off long worldview posts from newcomers.

comment by weightt an (weightt-an) · 2024-10-08T22:15:22.657Z · LW(p) · GW(p)

I volunteer to be a test subject. Will report back if my head doesn't explode after reading it.

(Maybe just share it with a couple of people first, with some disclaimer, and ask them whether it's a, uhhh, sane theory and not gibberish.)

comment by kithpendragon · 2024-10-09T19:23:55.186Z · LW(p) · GW(p)

I'm afraid you've just asked a group of terminally curious individuals if they want to know something that might possibly hurt them.

comment by K1r4d4rk.v1 · 2024-11-13T19:37:30.979Z · LW(p) · GW(p)

Yeah. Situation rectified. Theory evolved. I (we, I'm a system) now believe, in any sense of the word, that this isn't an infohazard.

And writing this post actually helped us figure it out.