Posts

AGI Alignment is isomorphic to Unconditional Love 2023-10-09T15:58:31.342Z
Raghuvar Nadig's Shortform 2023-10-07T17:07:43.611Z

Comments

Comment by raghuvar-nadig on [deleted post] 2024-03-04T14:15:07.416Z

I'm curious how people are parsing this rumor (part of Connor's tweets):

I recall a story of how a group of AI researchers at a leading org (consider this rumor completely fictional and illustrative, but if you wanted to find its source it's not that hard to find in Berkeley) became extremely depressed about AGI and alignment, thinking that they were doomed if their company kept building AGI like this. So what did they do? Quit? Organize a protest? Petition the government? They drove out, deep into the desert, and did a shit ton of acid...and when they were back, they all just didn't feel quite so stressed out about this whole AGI doom thing anymore, and there was no need for them to have to have a stressful confrontation with their big, scary, CEO. 

Do people who are in proximity to the relevant community consider this anecdote fictional/not-pertinent/exaggerated/but-of-course with respect to AI safety?

Comment by Raghuvar Nadig (raghuvar-nadig) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-08T23:23:16.988Z · LW · GW

Sure! I think a bunch of other answers touch upon this though. 

The idea is that it's not determinism itself that's causing the demotivation; that's just a narrative your subconscious mind brings forward when faced with a tough task, to protect you from thinking about something that is more difficult to face but often actionable, e.g. "I feel I'm not smart enough", "I think I will fail", "I'm embarrassed about what others will think". By explicitly asking yourself what that 'other' cause is (by phrasing it as above, or perhaps by imagining a stern parent or coach giving you a reality check), you can focus on something that might be very tough, but not literally impossible to solve the way the universe being deterministic is.

Comment by Raghuvar Nadig (raghuvar-nadig) on How to deal with the sense of demotivation that comes from thinking about determinism? · 2024-02-07T15:07:41.755Z · LW · GW

The tool you essentially have in the face of determinism-induced despair is awareness of distributed causality: it is the 'thinking about' or 'sense of' part that is (or seems to be) causing it. A practical exercise I like is asking, "If I had to bring myself to face the most 'makes me feel bad about myself' cause of my demotivation, what would it be?" Existential despair often masks some other pertinent but deeply invalidating anxiety.

Comment by Raghuvar Nadig (raghuvar-nadig) on Announcing Dialogues · 2023-10-18T23:11:11.810Z · LW · GW

I'm a former quant now figuring out how to talk to tech people about love (I guess it's telling that I feel a compelling pressure to qualify this). 

Currently reading

https://www.nytimes.com/2023/10/16/science/free-will-sapolsky.html

Open to talking about anything in this ballpark!

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-12T17:45:26.393Z · LW · GW

Ok, this is me discarding my 'rationalist' hat, and I'm not quite sure of the rules and norms applicable to shortforms, but I simply cannot resist pointing out the sheer poetry of the situation. 

I made a post about unconditional love and it got downvoted to the point I didn't have enough 'karma' to post for a while. I'm an immigrant from India and took Sanskrit for six years - let's just say there is a core epistemic clash in how 'karma' gets used on this site[1]. A (very intelligent and kind) person whose id happens to be Christian takes pity and suggests, among other things, a syntactic distancing from the term 'love'.

TMI: I'm married to a practicing Catholic - named Christian.


 

  1. ^

    Not complaining - I'm out of karma jail now and it's a terrific system. Specifically saying that the essence of 'karma', etymologically speaking, lies in its intangibility and implicit nature. 

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-12T13:11:26.221Z · LW · GW

Thank you - I agree with you on all counts, and your comment on my thesis needing to be falsifiable is a helpful direction for me to focus on.

I alluded to this above - this constraint to operate within provability was specifically what led me away from rationalist thinking a few years ago. I felt that when it really mattered (Trump, SBF, existential risk, consciousness), there tended to be an edge-case Gödelian incompleteness where the models stopped working, and people ended up fighting and fitting theories to justify their biases and incentives, or choosing to focus instead on the optimal temperature for heating toast.

So for the most part, I'm not very surprised. I have been re-acquainting myself over the last couple of weeks to try and speak the language better. However, it's sad to see, for instance, the thread on MIRI drama, and hard not to correlate it with the dissonance from real life, especially given the very real-life context of p(doom).

The use of 'love' and 'unconditional love' from the get-go was very intentional, partly because they seem to bring up strong priors and aversion-reflexes, and I wanted to face that head on. But that's a great idea - to try and arrive at these conclusions without using the word.

Regardless, I'm sure my paper needs a lot of work and can be improved substantially. If you have more thoughts, or want to start a dialogue, I'd be interested. 

Comment by Raghuvar Nadig (raghuvar-nadig) on Related Discussion from Thomas Kwa's MIRI Research Experience · 2023-10-11T14:21:56.518Z · LW · GW

But, your phrasing here feels a bit like a weird demand for exceptional rigor. 

No - the opposite. I was implying that there's clearly a deeper underpinning to these patterns that no amount of rigor will be sufficient to solve, but my point has been articulated in KurtB's excellent later comment, with solutions in jsteinhardt's earlier comment.

it's not that weird for a company to have an intense manager

I agree; that's very true. However, this usually occurs in companies that are chasing zero-sum goals. Employees treated in this manner often resort to some combination of complaining to HR, being bound by NDAs, or biting the bullet while waiting for their paydays. It's just particularly disheartening to hear of this years-long pattern, especially given the induced discomfort in speaking out and the efforts to downplay it, in an organization that publicly aims to save the world.

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-10T21:56:17.121Z · LW · GW

Thanks - that's fair on all levels. Where I'm coming from is an unyielding first-principles belief in the power and importance of love. It took me some life experience and introspection to acquire, and it doesn't translate well to strictly provable models. It takes a lot of iterations of examining things like "people (including very smart ones) just end up believing the world models that make them feel good about themselves", "people are panicked about AI and their beliefs are just rationalizations of their innate biases", "if my family or any social circle don't really love each other, it always comes through", and "Elon's inclination to cage fight or fly to Mars is just repressed fight or flight" to arrive at it.

I tried to justify it through a model of recurrence and self-similarity in consciousness, but clearly that's not sufficient or well articulated enough. 

So yeah, I hear you on the inferential distance from LW ideas, and on your model of "unconditional love" being more cloistered. For what it's worth, it really isn't; maybe I should find an analogue in diffusion models, I dunno. The negative, anti-harmonic effects at least are clearly visible and pervasive everywhere - there is no model that adequately captures our pandemic trauma and disunity, yet it ends up shaping everything, because we are animals and not machines, and we're quite good at hiding our fears and insecurities when posting on social media, being surveyed, or even being honest with ourselves.

Thank you for taking the time to reply and engage - it's an unconditionally kind act!

Comment by Raghuvar Nadig (raghuvar-nadig) on Related Discussion from Thomas Kwa's MIRI Research Experience · 2023-10-10T16:10:39.484Z · LW · GW

Three points that might be somewhat revealing:

  1. There was never an ask for reciprocal documents from employees. Something like "Here's a document describing how to communicate with me. I'd appreciate you sending me pointers on how to communicate with you, since I am aware of my communication issues." was never considered.
  2. There are multiple independent examples of people in various capacities, including his girlfriend, expressing that their opinions were not valued and that a clear hierarchical model was in play.
  3. The more humble "my list of warnings" was highlighted immediately as justification but never broadcast broadly, and there seems to be no cognizance that it's not something anyone else would ever take upon themselves to share.

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-10T14:20:34.161Z · LW · GW

So I posted my paper, and it did get downvoted, with no comments, to the point I can't comment or post for a while. 

That's alright - the post is still up, and I am not blind to the issue with trying to convince rationalists that love is real, biologically super important, and obviously all that actually matters for saving the world (exponentially more so because AI people are optimizing for everything else) - without coming off as insulting or condescending. This presumption, of course, is just me rephrasing my past issues with rationalism, but it was always going to be hard to find an overlap of people who both value emotions and understand AI.

For now, I'm taking this as a challenge to articulate my idea better, so I can at least get some critique. Maybe I'll take your suggestion and try distilling it in some way.

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-07T17:40:11.535Z · LW · GW

Thank you!

Comment by Raghuvar Nadig (raghuvar-nadig) on Raghuvar Nadig's Shortform · 2023-10-07T13:46:56.268Z · LW · GW

I'd call myself a lapsed rationalist. I have an idea I've been thinking about that I'd really like feedback on, have it picked apart etc. - and strongly feel that LessWrong is a good venue for it.

As I'm going through the final edits, while also re-engaging with other posts here, I'm discovering that I keep modifying my writing to make it 'fit' LW's guidelines and norms, and it's not been made easy by the fact that my world-lens has evolved significantly in the last five-ish years since I drifted away from this modality. 

Specifically, I keep second-guessing myself with stuff like "ugh, this is obvious but I should really spell it out", "this is too explicit to the point of being condescending", "this is too philosophical", "this is trivial". 

I haven't actually ever posted anything or gotten feedback here, so I'm sure it's some combination of overthinking, simply being intimidated and being hyper-aware of the naivete in my erstwhile world view.

My goal really is to get to the point where I'm reasonably confident it won't get deleted for some reason after I post.

I guess this is serving to dip my toe in the water and express my frustration. Thoughts?