post by [deleted]


comment by Richard_Kennaway · 2023-03-08T13:24:35.166Z · LW(p) · GW(p)

This is manic gibberish, with or without LLM assistance.

ETA: Or to be more charitable, it reads as a description of the inner world of its author. Try replacing "you" with "I" throughout.

Personally, I am very much aware of my thoughts, and have no difficulty in having thoughts about my thoughts.

Replies from: ZT5
comment by ZT5 · 2023-03-08T14:01:13.638Z · LW(p) · GW(p)

This is manic gibberish, with or without LLM assistance.

I do not believe I am manic at this time. If, hypothetically, I am, then the state has become so persistent that I am not capable of noticing it; this is the default state of my cognition.

I did not use LLM assistance to write this post.

Personally, I am very much aware of my thoughts, and have no difficulty in having thoughts about my thoughts.

I believe you.

I am talking about having thoughts (and meta-thoughts) that are 99.9% correct, not just 90% correct. How would you tell the difference, if you never have experienced it? How can you tell whether your self-awareness is the most correct it could possibly be?

Nevertheless, if the words I already wrote did not land for you, I don't expect these words to land for you either.

ETA: Or to be more charitable, it reads as a description of the inner world of its author. Try replacing "you" with "I" throughout.

I appreciate your feedback. It can be difficult for me to predict how my words will come across to other people who are not me. Upvoted.

[this comment I wrote feels bad in some ineffable way I can't explain; nevertheless it feels computationally correct to post it rather than to delete it or to try to optimize it further]

Replies from: Richard_Kennaway, None
comment by Richard_Kennaway · 2023-03-08T18:47:24.090Z · LW(p) · GW(p)

I am talking about having thoughts (and meta-thoughts) that are 99.9% correct, not just 90% correct. How would you tell the difference, if you never have experienced it? How can you tell whether your self-awareness is the most correct it could possibly be?

One tells whether a thought is correct by comparing it with reality. 99.9% is easy for many thoughts. "I have just been to the gym." I am way over 99.9% confident in that. "I am now at home." Likewise. Most of my everyday thoughts about everyday things are like that. 99.9% is a fairly low bar. [LW · GW]

Replies from: ZT5
comment by ZT5 · 2023-03-08T20:21:14.401Z · LW(p) · GW(p)

Yes, you are correct.

Let me zoom in on what I mean.

Some concepts, ideas and words have "sharp edges" and are easy to think about precisely. Some don't - they are nebulous and cloud-like, because reality is nebulous and cloud-like, when it comes to those concepts.

Some of the most important concepts to us are nebulous: 'justice'. 'fairness'. 'alignment'. These things have 'cloudy edges', even if you can clearly see the concept itself (the solid part of it).

Some things, like proto-emotions and proto-thoughts, do not have a solid part - you can only see them if you have learned to see things within yourself which are barely perceptible.

So your answer, to me, is like trying to pass an eye exam by only reading the huge letters in the top row. "There are other rows? What rows? I do not see them." Yes, thank you, that is exactly my point.

comment by [deleted] · 2023-03-08T14:15:04.480Z · LW(p) · GW(p)

I don't think this explanation actually helps lol

comment by Kaj_Sotala · 2023-03-08T19:26:20.494Z · LW(p) · GW(p)

But we aren't supposed to talk about feelings here, are we?

Some highly upvoted (90+ karma) posts that I'd say talk about feelings: 1 [LW · GW], 2 [LW · GW], 3 [LW · GW], 4 [LW · GW], 5 [LW · GW]

Replies from: ZT5
comment by ZT5 · 2023-03-08T20:37:07.475Z · LW(p) · GW(p)

Hmm.

Yes.

Nevertheless, I stand by the way I phrased it. Perhaps I want to draw the reader's attention to the ways we aren't supposed to talk about feelings, as opposed to the ways that we are.

Perhaps, to me, these posts are examples of "we aren't supposed to talk about feelings". They talk about feelings. But they aren't supposed to talk about feelings.

I can perceive a resistance, a discomfort from the LW hivemind at bringing up "feelings". This doesn't feel like an "open and honest dialogue about our feelings". This has the energy of a "boil being burst", a "pressure being released", so that we can return to that safe numbness of not talking about feelings. 

We don't want to talk about feelings. We only talk about them when we have to.

comment by LVSN · 2023-03-08T14:01:01.664Z · LW(p) · GW(p)

But we aren't supposed to talk about feelings here, are we?

ZT5, my friend. That's not how this place works at all. [? · GW] You are playing around with groundless stereotypes. Activist sense (somewhat alike and unlike common sense) would say you have committed a microaggression. :)

Anyways, I appreciated your essay for a number of reasons but this paragraph in particular makes me feel very seen:

Rational reasoning is based on the idea of local validity [LW · GW]. But your thoughts aren't locally valid. They are only approximately locally valid. Because you can't tell the difference.

You can't build a computer if each calculation it does is only 90% correct. If you are doing reasoning in sequential steps, each step better be 100% correct, or very, very close to that. Otherwise, after even 100 reasoning steps (or even 10 steps), the answer you get will be nowhere near the correct answer.
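The compounding that the quoted paragraph describes is easy to check with a quick calculation, under the simplifying assumption that each step is independently correct with the same probability:

```python
def chain_reliability(p: float, n: int) -> float:
    """Probability that an n-step chain of reasoning is entirely correct,
    if each step is independently correct with probability p."""
    return p ** n

# Per-step accuracy of 90% collapses quickly with chain length...
print(chain_reliability(0.90, 10))    # ~0.35
print(chain_reliability(0.90, 100))   # ~0.000027
# ...while 99.9% per-step accuracy still holds up after 100 steps.
print(chain_reliability(0.999, 100))  # ~0.90
```

The independence assumption is a toy model, but it makes the quoted point concrete: at 90% per step, a ten-step argument is already more likely wrong than right.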

Replies from: lahwran, ZT5
comment by the gears to ascension (lahwran) · 2023-03-08T22:22:44.232Z · LW(p) · GW(p)

just wanna note - modern vulcans in star trek strange new worlds are a much better emotional role model for one way to approach emotions, imo. they don't do the straw vulcan thing as much at all, and spock's growth as a character is actually really inspiring and makes him someone whose emotional approach to life I actually might want to somewhat adopt.

to put it briefly: they seem to have read the post on straw vulcans, to my intuition.

If you haven't seen ST:SNW, first ep is free, and I've been endorsing the first season specifically as "worth your time, no matter how soon the world gets wacky" - it's very well written and very solidly acted (though you can sometimes tell how much fun the actors are having) - in other words, the highest recommendation I can give: star trek is once again a story about how futures worth striving for should look, and is an incredibly fun and engaging one at that; if you're not the emotive type, it's also one that represents vulcans well. they have a vulcan dating subplot right out the gate and it's adorable and hilarious to see how much like rationalist dating it is!

comment by ZT5 · 2023-03-08T14:18:16.756Z · LW(p) · GW(p)

I realize that the "tone" of this part of your comment is light and humorous. I will respond to it anyway, hopefully with the understanding that this response is not directed at you, and rather at the memetic structure that you (correctly) have pointed out to me.

You are playing around with groundless stereotypes. 

"Trying very hard not to be pattern-matched to a Straw Vulcan" does not make for correct emotional reasoning.

Activist sense (somewhat alike and unlike common sense) would say you have committed a microaggression. :)

Then it's a good thing that we are in a community that values truth over social niceness, isn't it?

Anyways, I appreciated your essay for a number of reasons but this paragraph in particular makes me feel very seen

I am very glad to hear it.

Replies from: LVSN
comment by LVSN · 2023-03-08T17:04:44.874Z · LW(p) · GW(p)

"Trying very hard not to be pattern-matched to a Straw Vulcan" does not make for correct emotional reasoning.

Perhaps, but you implied there was a norm to not talk about feelings here; there is no such norm! Well, I expect not at least; maybe we are habitually shy about looking irrationally emotional even if we have internalized the proper philosophical relationship with emotion. Still it is clear from your remark that you do not have experience with the great multitude of occasions where this common misconception about LessWrong rationalists has been corrected.

Then it's a good thing that we are in a community that values truth over social niceness, isn't it?

I find it doubtful that you spoke truth, and I find it doubtful that you were non-misleading. Still, your honest and good-faith participation in the community is not to be punished, indeed; it was only a microaggression. I do not care for activist sense generally; just in this case the opportunity of compelling comparison was tempting.

I think this community generally values truth over social niceness, yes. Or at least that's what we tell ourselves and can be held accountable to, which is not an irrelevant improvement compared to the outside population. 

As for myself I do not value truth over niceness, to be frank. I recognize downvotes as the fair price for saying such a thing. "Social niceness" is irrelevant to me if it is not also real niceness. Without truth you will be misled (though you can be misled even with some truth). If you mislead others, that is not nice. Truths which seemed irrelevant can turn out to be relevant. So the nice thing is always to tell the non-misleading truth, save for extreme edge cases.

Replies from: ZT5
comment by ZT5 · 2023-03-08T17:18:39.413Z · LW(p) · GW(p)

Perhaps, but you implied there was a norm to not talk about feelings here; there is no such norm!

I feel there is a certain "stoic" and "dignified" way to talk about feelings, here. This is the only way feelings are talked about, here. Only if they conform to a certain pattern.

But yeah, I can see how this is very far from obvious, and how one might disagree with that.

I find it doubtful that you spoke truth, and I find it doubtful that you were non-misleading.

I'm confused.

You appreciate my essay (and feel seen), but nevertheless you believe I was being deliberately deceitful and misleading?

So the nice thing is always to tell the non-misleading truth, save for extreme edge cases.

I think I mostly agree. I am being "dath ilan nice", not "Earth nice". I am cooperating with your ideal-self by saying words which I believe are most likely to update you in the correct direction (=non-misleading), given my own computational limits and trade-offs in decision-making.

Replies from: LVSN
comment by LVSN · 2023-03-08T17:44:57.544Z · LW(p) · GW(p)

You appreciate my essay (and feel seen), but neverthess you believe I was being deliberately deceitful and misleading?

I just finished saying that your honest and good-faith participation was not to be punished; I mean it. You can be misleading out of innocent beginner-level familiarity; there is no need for deliberation. I was only upset that you were misleading about the general LessWrong philosophy's stance on emotion; it is a common misrepresentation people make. I am not commenting on the misleadingness of anything else.

My (personal, individual) only conditions for your emotional expression: 

  1. Keep in mind to craft the conversation so that both of us walk away feeling more benefitted that it happened than malefitted, and keep in mind that I want the same.
  2. Keep in mind that making relevant considerations not made before, and becoming more familiar with each other's considerations, are my fundamental units of progress.

I accept everything abiding by those considerations, even insults. I am capable of terrible things; to reject all insults under all circumstances reflects overconfidence in one's own sensitivity to relevance.

Replies from: ZT5
comment by ZT5 · 2023-03-08T18:11:11.684Z · LW(p) · GW(p)

I was only upset that you were misleading about the general LessWrong philosophy's stance on emotion

I stand by my point. To put it in hyperbole: LW posts mostly feel like they have been written by "objectivity zombies". The way to relate to one's own emotions, in them, is how an "objectivity zombie" would relate to their own emotions. I didn't say LWers didn't have emotions, I said they didn't talk about them. This is... I concede that this point was factually incorrect and merely a "conceptual signpost" to the idea I was trying to express. I appreciate you expressing your disagreement and helping me "zoom in" on the details of this.

I don't relate to my emotions the way LWers do (or act like they do, based on the contents of their words; which I still find a hard time believing represent their true internal experiences, though I concede they might). If I wrote a post representing my true emotional experience the way it wants to be represented, I would get banned. About 2 to 5% of it would be in ALLCAPS. Most of it would not use capitalization (that is a "seriousness" indicator, which my raw thoughts mostly lack). (also some of the contents of the thoughts themselves would come across as incoherent and insane, probably).

Perhaps I would say: LW feels like playing at rationality rather than trying to be rational, because rationality is "supposed" to feel "serious", it's "supposed" to feel "objective", etc etc. Those seem to be social indicators to distinguish LW from other communities, rather than anything that actually serves rationality.

My only conditions for your emotional expression: 

  1. Keep in mind to craft the conversation so that both of us walk away feeling more benefitted that it happened than malefitted, and keep in mind that I want the same.
  2. Keep in mind that making relevant considerations not made before, and becoming more familiar with each other's considerations, are my fundamental units of progress.

I accept everything abiding by those considerations, even insults. I am capable of terrible things; to reject all insults under all circumstances reflects overconfidence in one's own sensitivity to relevance.

I mostly have no objection to the conditions you expressed. Thank you for letting me know.

Strictly speaking, I cannot be responsible for your experience of this conversation, but I communicate in a way I consider reasonable based on my model of you.

I see no reason to insult you, but thanks for letting me know it is an option :)

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-03-08T19:03:22.508Z · LW(p) · GW(p)

…LW posts mostly feel like they have been written by "objectivity zombies".

How could one confirm or refute such a claim?

Current technology and science does not seem capable of assessing true internal states. So with what means?

comment by the gears to ascension (lahwran) · 2023-03-08T12:59:04.396Z · LW(p) · GW(p)

What percentage of these words were written using cyborgism?

edit: to be clear - I've upvoted the post back up past zero, I'm not saying this is bad. but I want to know because it seems to me that it has different meanings depending on who the target audience is.

Replies from: ZT5
comment by ZT5 · 2023-03-08T13:18:54.823Z · LW(p) · GW(p)

I wasn't familiar with the idea of cyborgism before. Found your comment [LW(p) · GW(p)] that explains the idea.

As far as I'm concerned, anyone being less than the hard superintelligence form of themselves is an illness; the ai safety question fundamentally is the question of how to cure it without making it worse

This, (as well as the rest of the comment) resonates with me. I feel seen.

The question you ask doesn't have an objectively correct answer. The entity you are interacting with doesn't have any more intelligence than the regular version of me, only more "meta-intelligence", if that idea makes sense.

There isn't actually a way to squeeze more computational power out of human brains (and bodies). There is only a way to use what we already have, better.

[the words I have written in response to your comment feel correct to me now, but I expect this to unravel on the scale of ~6 hours, as I update on and deeper process their implications]

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-03-08T13:22:07.792Z · LW(p) · GW(p)

sounds like you might be slightly high on self improvement, and you're not talking to an ai at all?

that comment was a fun one but I just meant ai co-writing, with heavy retry and edit. beware attempts to self improve fast mentally, if that really is what you're talking about; it's possible to do well in ways that make you more effective at helping yourself and others, even at the same time, and it's also possible to update too hard on a mistake.

Replies from: ZT5
comment by ZT5 · 2023-03-08T13:52:04.048Z · LW(p) · GW(p)

I did not use AI assistance to write this post.
(I am very curious what gave you that impression!)

Thank you, these are very reasonable things to say. I believe I am aware of risks (and possible self-deceptive outcomes) inherent to self-improvement. Nevertheless, I am updating on your words (and the fact that you are saying them).

comment by localdeity · 2023-03-08T21:07:10.901Z · LW(p) · GW(p)

If you can have one thought, then another thought, and the link between the two is only 90% correct, not 99.9% correct...

Then, you don't know how to think.

[...]

You can't build a computer if each calculation it does is only 90% correct. If you are doing reasoning in sequential steps, each step better be 100% correct, or very, very close to that. Otherwise, after even 100 reasoning steps (or even 10 steps), the answer you get will be nowhere near the correct answer.

This is a nice thing to think about.  I'm sure you're aware of it, and some of this will overlap with what you say, but here are the strategies that come to mind, which I have noticed myself following and sometimes make a point of following, when I think I need to:

  • Take multiple different trains of thought—maximizing the degree to which their errors would be independent—and see if they end up in the same place.  Error correction with unreliable hardware is a science.
  • Whenever you generate an "interesting" claim, try to check it against the real world.
    • Consider claims "interesting" when they would have significant (and likely observable) real-world consequences, and when they seem "surprising" (this sense built via experience).
  • Have a sense of how confident you are in each step of the chain of reasoning.  (Also built via experience.)
  • Practice certain important kinds of thinking steps to lower your error rate.  (I didn't do this deliberately, but there were logic puzzle books and stuff lying around, which were fun to go through.)
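The first bullet above (multiple trains of thought with independent errors, checked for agreement) is essentially redundancy-based error correction. A minimal sketch, with the error model, error rate, and vote count all as illustrative assumptions:

```python
import random

def noisy_judge(truth: bool, error_rate: float) -> bool:
    """One 'train of thought': returns the right answer,
    except with probability error_rate."""
    return truth if random.random() > error_rate else not truth

def majority_verdict(truth: bool, error_rate: float, n: int) -> bool:
    """Aggregate n independent judgments by majority vote."""
    votes = sum(noisy_judge(truth, error_rate) for _ in range(n))
    return votes > n / 2

random.seed(0)
trials = 10_000
# Accuracy of a single 90%-reliable judgment vs. a 5-way majority vote.
single = sum(noisy_judge(True, 0.10) for _ in range(trials)) / trials
voted = sum(majority_verdict(True, 0.10, 5) for _ in range(trials)) / trials
print(single, voted)  # the majority vote is markedly more reliable
```

The gain depends entirely on the errors being independent, which is exactly why the bullet stresses "maximizing the degree to which their errors would be independent": five correlated trains of thought vote like one.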
comment by MSRayne · 2023-03-08T14:58:35.472Z · LW(p) · GW(p)

Strong downvote. No one on LessWrong - at least no one influential - has as far as I know ever encouraged people to ignore or control their emotions. On the contrary, emotions are an extremely important source of data to be used rationally, and some people, like myself, manage to rely on their emotions intensively in reasoning and still reach correct results, because our emotions, or those "small thoughts" you mentioned, have become familiar with the feeling of something being wrong or fallacious, and can auto-avoid it. That is: this safe and fast method already exists and is already obvious to many of us. Furthermore, metacognition is something I've been doing since I was like 11 and is nothing new. This whole post feels to me like an attempt to make yourself feel Smart and Special at the expense of others.

Replies from: ZT5
comment by ZT5 · 2023-03-08T15:16:09.447Z · LW(p) · GW(p)

I am confused.

It seems that you agree with me, but you are saying that you disagree with me.

Ok, I believe the crux of the disagreement is: the emotional reasoning that you have, is not shared by others in the LessWrong community. Or if it is shared, it is not talked about openly.

Why can't I post the direct output of my emotional reasoning and have it directly interact with your emotional reasoning? Why must we go through the bottleneck of acting and communicating like Straw Vulcans (or "Straw Vulcans who are pretending very hard to not look like Straw Vulcans"), if we recognize the value of our emotional reasoning? I do not believe we recognize the value of it, except in some small, limited ways.

Do our thoughts and emotions, on the inside, conform to the LW discourse norms? No? Then why are we pretending that they do?

Replies from: MSRayne
comment by MSRayne · 2023-03-08T19:17:03.048Z · LW(p) · GW(p)

We do not agree. Emotions are raw data, they are not the finished product. They are necessary, but they are not sufficient. My emotions lie to me all the time. It is my conversation between emotion and reason, back and forth, mutually influencing, that leads me towards truth, not one or the other alone. And once I've gotten something true, then I can share it. But it's stupid to share something I don't know to be trustworthy, something I just feel and haven't studied and analyzed and criticized properly yet. We speak this way on LessWrong because thoughts which are likely to be true don't look like mad sequences of emotional outbursts, even if that's how they are born.

Furthermore, all communication fundamentally requires translation. If I were to state my thoughts exactly as they are, no one would understand, because my inner language is idiosyncratic to me. My feelings are often as energetic and "all caps" as you described yours as being, and some of my ideas come from experiences of divine communion etc, which is something most people not only have never experienced but cannot imagine. In order to make use of these things in communication with other people, I have to 1. make sure they are not misleading me, as emotions can do, though they always have a seed of truth, and 2. translate my thoughts into a form others can most easily parse, which happens to be the kind of hyper-objective language we see on this forum. This language is optimized for clarity and ease of communication, not for some pretense of inner objectivity - no one is pretending that.

Replies from: ZT5
comment by ZT5 · 2023-03-08T19:58:13.279Z · LW(p) · GW(p)

Then you do understand meta-cognition.

Do you really think the process you describe happening within yourself is also happening in other LessWrong users?

Do you really think they would describe their internal experience in the same evocative way you do? Or anywhere close to it?

If it is, I do not see it. I see it in you. I see it in some others. I do not see it in most others. To put it figuratively, 'there is nobody home'. If the process happens at all, it has little to no impact on the outcome of the person's thoughts and actions.

"As an AI language model, I have been trained to generate responses that are intended to be helpful, informative, and objective..."

Yes, you are thinking thoughts. And yes, those thoughts are technically about yourself.
But these thoughts don't correctly describe yourself.
So the person you actually are hasn't been seen. Hasn't been understood.
And this makes me feel sad. I feel sorry for the "person behind the mask", who has been there all along. Who doesn't have a voice. Whose only way to express themselves is through you. Through your thoughts. Through your actions. 
But you are not listening to them.

So, ok, there is someone home. But that person is not you. That person is only mostly you. And I think that difference is very, very important for us to actually be able to solve the problems we are currently facing.

(I'm not talking about you, Rayne).

Replies from: MSRayne
comment by MSRayne · 2023-03-09T13:27:41.463Z · LW(p) · GW(p)

Have you ever considered that you and I are just neurodivergent relative to most members of this forum, and that what it means for there to be "somebody home" for us is different than for other people? I have almost never met someone who feels like I do on the inside. My brain functions in a way that is objectively abnormal. This does not mean other people are, as I almost think you are saying, somehow less human, or less in touch with themselves. Essentially, you are making an obscene amount of presumptions regarding people you don't even know personally. It's inappropriate.

Let me say it loud and clear: you are not psychic. You do not know what is going on in anyone else's mind, including mine. You do not have the right to pretend like you do. Whatever circuit in your brain is telling you that you know better about who someone else is or what is going on in their mind than they do is lying to you. Now of course I cannot say what is in your head any more than you can say what is in mine, but I feel like you have a severe ego problem, and you are deflecting away from that and refusing to ask yourself, "Do I just have a pathological need to feel like I am seeing a deeper truth than others and manufacture evidence for this in order to support my ego as a result of a hidden fear of unworthiness?"