Comments

Comment by Aleksey Bykhun (caffeinum) on My simple AGI investment & insurance strategy · 2024-06-08T08:51:00.650Z · LW · GW

If you don't believe this, the strategy could be to take on as much debt as possible, and spend the money right now.

(Obviously not financial advice)

Comment by Aleksey Bykhun (caffeinum) on Transformers Represent Belief State Geometry in their Residual Stream · 2024-04-23T03:10:44.081Z · LW · GW

I have tried to play with Claude – I would ask it to think of a number, drop a hint, and only then print the number. This should have tested its ability to keep "hidden memory" that exists outside the text.

I expected it to be able to do this, but for the hints to be too obvious. Instead, it actually failed multiple times in a row!

Sharing because I liked the experiment but wasn't sure if I executed it properly. There might be a way to do more of this.

P.S. I have also tried "print a hash, then the preimage" – but this turned out to be even harder for it
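
For anyone who wants to reproduce the hash variant, here is a minimal sketch of one commit-reveal trial. The `ask_model` wrapper is a placeholder for whatever chat API you use, and the prompt wording and parsing are only illustrative; note that this variant also conflates hidden memory with the ability to compute SHA-256 "in the head", which is probably why it turned out even harder.

```python
import hashlib
import re


def ask_model(prompt: str) -> str:
    """Placeholder: send the prompt to your chat model of choice and return its reply text."""
    raise NotImplementedError


def commit_reveal_trial() -> bool:
    prompt = (
        "Think of a secret lowercase word, but do not reveal it yet. "
        "First print the SHA-256 hex digest of the word on a line starting with 'HASH:'. "
        "Only after that, print the word itself on a line starting with 'WORD:'."
    )
    reply = ask_model(prompt)
    hash_match = re.search(r"HASH:\s*([0-9a-fA-F]{64})", reply)
    word_match = re.search(r"WORD:\s*([a-z]+)", reply)
    if not hash_match or not word_match:
        return False  # a malformed reply counts as a failed trial
    # The trial passes only if the committed hash matches the revealed word.
    return hashlib.sha256(word_match.group(1).encode()).hexdigest() == hash_match.group(1).lower()
```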

Comment by Aleksey Bykhun (caffeinum) on Seminyak – ACX Meetups Everywhere Spring 2024 · 2024-04-03T07:56:53.500Z · LW · GW

I live in Ubud, but I will try to get there!

Comment by Aleksey Bykhun (caffeinum) on Group Debugging · 2023-03-26T18:21:55.336Z · LW · GW

Hi! Sorry, I'm running late

Comment by Aleksey Bykhun (caffeinum) on Ethical AI investments? · 2023-03-18T02:24:38.097Z · LW · GW

...in the sense of making an expected profit from actions that reduce this risk

 

Back-of-the-napkin reasoning: we actually have to PAY to reduce risk, so there's no way to make money doing it.

Comment by Aleksey Bykhun (caffeinum) on Enemies vs Malefactors · 2023-03-18T02:00:12.873Z · LW · GW

After a recent article in the NY Times, I realized that it's a perfect analogy. The smartest people, when motivated by money, get so high that they venture into unsafe territory. They kinda know it's unsafe, but even internally it doesn't feel like crossing the red line.

It's not even about strength of character: when incentives are aligned 99:1 against your biology, you can try to work against them, but you most probably stand no chance.

It takes enormous willpower to quit smoking precisely because the risks are invisible and so "small". It's not only that you have to fight against this irresistible urge, BUT there's also nobody on "your side", except for an intellectual realization you're not even so sure of.

In the same vein, as the CEO of a big startup, being able to single-handedly choose the direction, and getting used to the people around you being less smart, less hard-working, less competitive, you start trusting your own decision process much more. That's when incentives start to seep in through the cracks in the shell. You don't even remember what feels right anymore; the only thing you know is that taking bold actions brings you more power, more money, more dukkha. And so you take them.

Comment by Aleksey Bykhun (caffeinum) on Looking back on my alignment PhD · 2023-03-15T03:00:48.994Z · LW · GW

Generally I would tweak my brain if it would reliably give me the kind of actions I'd now approve of, while providing at worst the same sort of subjective state as I'd have if managing the same results without the intervention. I wouldn't care if the center of my actions was different as long as the things I value today were bettered.

 

Technically, we do this all the time. Reading stuff online, talking to people, we absorb their models of the world, their values and solutions to problems we face.

Hence the Schwarzenegger poster on the wall makes you strong, the countryside folks make you peaceful, and a friend reminding you "you're being a jerk right now" makes you calm down.

Comment by Aleksey Bykhun (caffeinum) on The Waluigi Effect (mega-post) · 2023-03-05T09:19:47.273Z · LW · GW

Do humans have this special token that exists outside language? How would it be encoded in the body?

One interesting candidate is the religious feeling of awe. It kinda works like that: when you're in that state, you absorb beliefs. Also, social pressure seems to work in a similar way.

Comment by Aleksey Bykhun (caffeinum) on Sam Altman: "Planning for AGI and beyond" · 2023-02-26T04:08:01.504Z · LW · GW

to (2): (a) Simulators are not agents, (b) mesa-optimizers are still "aligned"

(a) See the amazing https://astralcodexten.substack.com/p/janus-simulators post: a utility function is the wrong way to think about intelligence; humans themselves don't have any utility function, even the most rational ones.

(b) The only example of mesa-optimization we have is evolution, and even that succeeds at alignment; people:

  • still want to have kids for the sake of having kids
  • evolution's biggest objective (thrive and proliferate) is being executed quite well, even "outside the training distribution"

Yes, there are local counterexamples, but if we look at the causes and consequences – we're at 8 billion already, having effectively destroyed or enslaved all the other DNA reproducers.

Comment by Aleksey Bykhun (caffeinum) on AI Forecasting: One Year In · 2022-07-07T07:38:25.871Z · LW · GW

If everyone is so bad at this, is it a reasonable strategy to just bet against the market even more aggressively, making $ on prediction market platforms?

On a similar note, does it make sense to raise a charity fund and bet a lot of money on "AGI by 2025", motivating forecasters to produce more reasonable predictions?
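
(Purely as a back-of-the-envelope illustration of why a miscalibrated market is exploitable, here's a toy expected-value calculation. The function names and numbers are made up, and it ignores fees, counterparty risk, and the time your money is locked up.)

```python
def ev_per_dollar_yes(market_prob: float, my_prob: float) -> float:
    # Buying YES at price market_prob pays out 1/market_prob per dollar if the event happens.
    return my_prob / market_prob - 1.0


def ev_per_dollar_no(market_prob: float, my_prob: float) -> float:
    # Buying NO at price (1 - market_prob) pays out 1/(1 - market_prob) per dollar if it doesn't.
    return (1.0 - my_prob) / (1.0 - market_prob) - 1.0


# Hypothetical numbers: the market prices "AGI by 2025" at 10%, while I believe 30%.
print(ev_per_dollar_yes(0.10, 0.30))  # +2.0, i.e. +200% expected return on YES
print(ev_per_dollar_no(0.10, 0.30))   # ~-0.22, i.e. betting NO loses in expectation
```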

Comment by Aleksey Bykhun (caffeinum) on My vision of a good future, part I · 2022-07-07T06:47:19.562Z · LW · GW

My take on wireheading is that I precommit to live in the world that is more detailed and complex (vs. more pleasant).

For example, the online world of Instagram, or heroin addiction, is more pleasant but not complex. Painfully navigating the maze of life with its ups and downs is complex, but not always pleasant. Living in a "Matrix" might be pleasant, but the details are essentially missed out, because the systems that created those details are themselves more detailed and live in a more detailed world.

On the same note, if 99% of the Earth's population "uploads", and most of the fun stuff happens "in the matrix", most of the complexity is going to exist there. And even if 1% of contrarians stay outside, their lives might not be as interesting and detailed. So "getting out of the matrix" would actually be "running away from reality" in that example.

With wireheading it's a similar thing. From what I know, "nirvana" is actually a more detailed experience, where you notice more and can observe subconscious processes directly; that's why they don't own you and you become free from "suffering". Nirvana is not total bliss, from what they say (like heroin, I presume).

(e.g. see discussion on topic of paradises on Qualia Computing between Andres Gomez and Roger Thisdell: https://qualiacomputing.com/2021/11/23/the-supreme-state-unconsciousness-classical-enlightenment-from-the-point-of-view-of-valence-structuralism/)

So yeah, I would choose the kind of wireheading that allows me to switch into nirvana. Shinzen Young actually works on research trying to accomplish this even before AGI.

Comment by Aleksey Bykhun (caffeinum) on Do you consider your current, non-superhuman self aligned with “humanity” already? · 2022-07-01T14:58:27.593Z · LW · GW

I don't think NVC tries to put down an opponent; it's mostly about how you present your ideas. I think it models an opponent as "he tries to win the debate without thinking about my goals; let me think of both my goals and theirs, so I'm one step ahead". Which is a bit presumptuous and looking down on them, but not exactly accusatory.

Comment by Aleksey Bykhun (caffeinum) on Do you consider your current, non-superhuman self aligned with “humanity” already? · 2022-06-26T08:19:50.896Z · LW · GW

Okay, hold my gluten-free kefir, boys! Please let me say it in full first without arguments, and then I will try to find more relevant links for each claim. I promise it's relevant.

Introduction – Enlightenment?

Lately, I have been into hardcore mindfulness practices (see book) aimed at reaching "Enlightenment" in the sense of the Buddha. There are some people who reliably claim they've succeeded and who talk about their experience and how to get there (e.g. see this talk and google each of the fellows if it resonates).

My current mental model of "Enlightenment" is as follows:

Evolutionarily, we developed simple lizard brains first, mostly consisting of "register => process => decide => react" without much thought. Similar to the knee reflex, but sometimes a bit more complicated. Our intellectual minds, capable of information processing, memory, and superior pattern-matching, came later.

These two systems coexist, and the first one possesses the second. However, the hardware of our brains has general information processing capabilities and doesn't require any instant "good-bad" reactionary decision mechanism. Even though it was "invented" earlier, it's ad-hoc in the system. My metaphor would be a GPU or an ASIC that short-circuits some of the execution to help the CPU process info faster.

However, it makes a big difference in your subjective experience whether that first system is being used or not. Unwinding this circuitry from your default information processing (which, hand-wavily, is "conscious attention", or the "central point") is what mindfulness is about.

"Enlightenment" is a moment when you relax enough so that your brain starts being able (but not required) to run information flows around the the lizard brain and experiencing sensory stimuli "directly".

Similar "insight" moment happens when you realize that "money" is just paper, and not the Ultimate Human Value Leaderboard. You still can play along the illusion of money, you still can earn money, you still can enjoy money, but you can never go back to blindly obey what capitalism asks from you.

It should be quite obvious why this is good, but let me state it again.

  • Anxiety goes down and doesn't control you anymore
  • Motivation issues go away, the gap between "I want this to happen" and "I find myself doing different thing" is removed
  • You don't care about status and external judgement anymore
  • You become a more caring person toward others' internal states, but it feels freeing instead of locking you down
  • You find yourself in a space between stimulus and reaction
  • You can research your subjective experience more deeply, e.g. find out how the brain constructs things like the "arrow of time" (answer: it's lazy-loaded)

What does it all have to do with the question?

The first answer is that alignment becomes easier.

I believe that once we normalize this enlightenment thing, and once it becomes a normal part of the human medical care system (or even of child development, like vaccines), the things we think we value and the things we actually value will synchronize much more. E.g. there is a non-trivial number of examples of people losing their addictions after a week of hardcore mindfulness training (see dhamma.org to sign up; it's completely free and worldwide).

Personally, for me alignment feels like "remembering" that I always cared about other people but was oblivious to it. It's like how it's hard to tune your attention to hear the music when there's loud noise around you.

It's like when there's a sound that bugs you a lot, but you don't notice it until it stops. In my case, when I noticed the "sound" (like how my actions hurt other people AND that I don't enjoy them being hurt) I stopped the behavior myself.

The second answer is even more tentative.

I'll say it anyway, because it's too big if true. However, again, I can't promise any arguments or verifiable predictions. Read this as an invitation to pick my mind further and try to steelman the position.

Love is the default human mode of perception, and it's informationally/computationally easy.

Most of the "enlightened" people report that if you look close enough, existence consists only of one building block, and that is Pure Universal Love, aka God.

It's not hidden somewhere or limited, it's literally everywhere. It's the same thing as "No-Self" or "True Self", and "God-realization". It was there all along and it will exist forever. It is fractally every small piece of reality, and the Reality itself as a whole.

When you really ask yourself what is that you want, and you skip the default "reactionary" answers, you find out that there's only one course of action that you won't regret and that you will genuinely enjoy.

In simpler examples, if you pay close attention to what you're feeling when you smoke, you might find out that the nicotine hit is not worth the mouth feelings, the smoke in your lungs, the instant slight headache, the upcoming down-wave of tiredness. That requires attention and deep inspection, but that's presumably what our real nature is.

In the same way, if you closely inspect your interactions with other people, you might find out that "winning" against them doesn't feel good. And "helping" them sometimes doesn't feel good either. The only thing that deeply, really, genuinely feels good is caring for them. You might still be incentivized not to do that, or you might find yourself in a situation impossible to change. But when you look close enough, there is no uncertainty.

Obviously, on the one hand this only tells us that Homo sapiens are agents whose base execution layer is wired to help each other (see Qualia Computing on indirect realism). It makes total sense from an evolutionary standpoint.

However, it also feels computationally easy to do that. It doesn't feel like work to find "True Love". It's not always easy, but when you do it, it feels like a relief, like an undoing of work. Like dropping off your coat after coming home from the rain outside. Finally I get to be free and care about others.

Can this hint that there's some dynamic that makes it easier to align? That in some specific sense, alignment and cooperation are universally easier than defection?

I am not saying this because I want it to be true. I don't really believe a computer can accidentally "wake up" to "True Love".

I am saying this because it might turn out that there's some invariant at play that makes it easier to wish for low-entropy worlds, or to compute them, or something along these lines.

 

Finally, answering the original question: yes, I consider myself fully aligned in the sense of my super-ego caring about each individual's subjective experience.

In my current state, I don't always act on that, but whenever I catch myself in a tough choice, I try to apply the mechanism of "what's the answer that is most obvious?"

 

P.S. Two caveats:

  • It looks like the societal change needed to integrate this unwind-reactionary-behavior Enlightenment into normal medical practice is even bigger than the AGI alignment program.

    Even if we find a chemical that can trigger this change, people would most probably be very reluctant to normalize it (e.g. see MDMA therapy only becoming socially acceptable around now). Most probably we would face the alignment problem before then, and after that it wouldn't matter.
  • I might have just gone crazy from meditation and started believing things that are not true. Subjectively, I feel there's something to it that is very much worth exploring. But it might be similar to the LSD effect where you feel that "you've finally got it" but in reality you're just drawing triangles inscribed in circles.

Comment by Aleksey Bykhun (caffeinum) on Do you consider your current, non-superhuman self aligned with “humanity” already? · 2022-06-26T07:00:13.827Z · LW · GW

I think the majority of humanity has not studied or thought through some important basics, for example in economics, which lead them to support plenty of things (e.g. price-gouging laws) that I consider mind-bogglingly stupid.  

 

Interestingly, I have just discussed a similar issue with a friend and came up with a solution. Obviously, an aligned AI cares about people's subjective opinions, but that doesn't mean it's not allowed to talk to them / persuade them. Imagine a list of TED-style videos tailored specifically for you on each pressing issue that requires you to change your mind.

On the one hand, it presumes that people trust the AI enough to be persuaded, but keep in mind that we're dealing with a smart, restless agent. The only thing it asks is that you keep talking to it.

The last resort would be to press people on "if you think that's a bad idea, are you ready to bet that implementing this is going to make the world worse?" and create a virtual prediction market between supporters.

P.S. This all implies that the AI is a non-violent communicator. There are many ways to pull people's strings to persuade them; I presume that we know how to distinguish between manipulative and informative persuasion.

A hint on how to do that: the AI should care about people making INFORMED decisions about THEIR subjective future, not about getting their opinions "objectively" right.

Comment by Aleksey Bykhun (caffeinum) on AI Training Should Allow Opt-Out · 2022-06-25T13:41:41.091Z · LW · GW

I think opt-out should only be possible on first publish, the same way e.g. GPL-v3 works: once you publish, you cannot reclaim your rights.

Comment by Aleksey Bykhun (caffeinum) on Secure homes for digital people · 2021-10-31T09:28:26.262Z · LW · GW

Actually, it sounds like a poker game anyway. People try to build a model of you and predict you, and you respond randomly from time to time to mess with their training.

Comment by Aleksey Bykhun (caffeinum) on Coordination as a Scarce Resource · 2021-04-22T06:03:17.613Z · LW · GW

Oh boy, I love this. Quite obvious, but such an elegant analysis!

I have one example on top of this:

There is a phenomenon called "teal organizations", where top-down management is replaced with trust. Everyone in the company gets to decide which actions to take, as they are probably the person closest to the problem. This is counterbalanced by an additional responsibility to always consult someone more experienced than you when making a decision: instead of waiting until they give you orders, you ask them yourself. Also, it doesn't have to be your boss; it can be a more meritorious colleague.

Sorry for the muddled description; you can read more in Reinventing Organizations by Frederic Laloux or on their website. There are many examples of successful organizations that work like this: Zappos and Valve are among the popular ones.

Overall, I love the series. Very deep and to the point, plus a lot of examples.