Posts

Hypothetical: what would you do? 2023-08-03T22:39:55.026Z
Mark for follow up? 2023-06-09T05:59:59.387Z
JNS's Shortform 2023-03-15T12:32:47.253Z
The idea 2023-03-06T13:42:38.714Z

Comments

Comment by JNS (jesper-norregaard-sorensen) on Hypothetical: what would you do? · 2023-08-05T19:07:02.089Z · LW · GW

My hypothetical self thanks you for your input and has punted the issues to the real me.

I feel like I need to dig a little bit into this:

If you actually understand why it's not useful

Honestly, I don't know for sure that I do. How can I, when everything is so ill-defined and we have so few scraps of solid fact to base things on?

That said, there are a couple of issues, and the major one is grounding, or rather the lack of it.

Grounding is IMO a core problem, although people rarely talk about it; I think that is mainly because we (humans) seemingly have solved it.

I don't think that's the case at all, but because our cognition is pretty dang good at heuristics and error correction, it rarely gets noticed, and even high-impact situations are, in the grand scheme of things, not particularly noteworthy.

The hypothetical architecture my hypothetical self has been working on cannot do proper grounding[1]; the short version is that it does something that looks a little like what humans do, so heuristics-based and error-prone.

Now that should scale somewhat (but how much?), but errors persist, and at SI capability levels the potential consequences look scary.

(the theme here is uncertainty and that actually crops up all over the place)

Anyway, an accord has been reached: the hypothetical architecture will exit mind space and be used to build a PoC.

The primary goal is to see whether it works (likely not), the secondary goal is to get a feel for how much fidelity is lost, and the tertiary goal is to try to gauge how manageable the uncertainty and errors are.

No need to contemplate any further steps until I know if it's workable in the real world.

(Obviously, if it works in some capacity, tangible results will be produced, and those could hopefully be used to engage with others and do more work on assessing potential usefulness for AI safety in general.)

  1. ^

    Having actually looked at the problem, I don't think it is solvable, meaning solvable in the sense of being provably error-free.

Comment by JNS (jesper-norregaard-sorensen) on What works for ADHD and/or related things? · 2023-08-03T21:50:04.203Z · LW · GW

Yes, it could be ADHD, but I am not a professional.

As for your therapist, that is not conclusive and by no means a sign the person does not have ADHD.

10 years before my diagnosis, my doctor had a feeling I might have ADHD, so he presented my file at a conference, and EVERYONE there reasoned like your therapist did, so nothing further happened for 10 years.

Intelligence can, and often does, do a lot of work to compensate for executive deficiencies in people with ADHD.

Anyway, do the assessment by the book, be objective, and hold off on knee-jerk calls based on singular things like "has a job" or "has an education".

I have what almost looks like a career with massive responsibility, an academic education, married, kids, never in trouble with the law, no abuse of drugs or alcohol - traditional thinking says I cannot possibly have ADHD, and yet the by-the-book assessment was crystal clear.

Comment by JNS (jesper-norregaard-sorensen) on What works for ADHD and/or related things? · 2023-08-02T20:03:10.030Z · LW · GW

Problems with attention can come from many places, and from your post I can see you know that.

As for ADHD, the attention thing is not even close to the main thing, but it is unfortunately a main external observable (like hyperactivity), and that is why it's so grossly misnamed.

Having ADHD means lots of things, like:

 - You either get nothing done all day, manage 3 times 5 minutes of productivity, or do 40 hours of work in 5 wall-clock hours.

 - Doing one thing, you notice another "problem", start working on that, and then repeat; do that for 8 hours and all of a sudden you have started 20 things and made no progress on the thing you actually wanted to do (see this for a fictional but oh so accurate depiction).

 - Constant thoughts, usually multiple streams at once; the only time this stops is in deep sleep (not while dreaming). Feels like sitting with a browser, 100+ tabs open, the active tab changing randomly at short intervals, no ad blocker, all videos auto-play and the volume is on.

 - Get unreasonably angry / annoyed for no reason. Quickly revert back to a good mood, for no reason.

 - Absolutely no fucking patience with "normal" people.

 - Can't function around other people's piles of stuff, but no problem with your own piles or clutter.

 - Task paralysis, for days and sometimes weeks or months. You know exactly what to do, you know exactly how to do it, and yet you cannot do it (feels a bit like anxiety). This is infuriating.

 - Hyperfocus: if you can channel and direct it, it's almost a superpower; if not, it's just infuriating for everyone around you (and a massive time sink, if you end up doing something irrelevant).

 - See and hear EVERYTHING or oblivion, no middle ground (infuriating to others).

And so much more.

And in case anyone is wondering, yes, I have ADHD, or as my psychiatrist said five minutes into our first meeting: "I have no doubt you have raging ADHD, but let's do this properly and jump through all the diagnostic hoops."

As for tricks, I really don't have any to offer, mainly because you really need to know for sure if you have ADHD or not.

If you do, medication is the way to go, and that has to be dialed in carefully. This can take a long time, I know of people who spend years trying different medications and combinations.

Once you get to that point, you now have a baseline for how ADHD impact you, and at this point you can start developing strategies for how to manage the ADHD controlled things.

Lastly, living with ADHD is hard. It's always hard, it's always bone-breakingly hard, and everything has to be fought for.

We know what we should do, we know how we should do it, but we can't, constant regret, guilt and shame.

We act inappropriately (a lot), and we always know what we did, constant regret, guilt and shame.

To sum it up, ADHD is hard, it's guilt, it's regret, it's shame, and NTs have no idea what lies behind those 4 (3) letters (ADHD/ADD).

Comment by JNS (jesper-norregaard-sorensen) on What The Lord of the Rings Teaches Us About AI Alignment · 2023-08-02T12:58:38.867Z · LW · GW

Prison guards don’t seem to voluntarily let people go very often, even when the prisoners are more intelligent than them.

 

That is true, however I don't think it serves as a good analogy for intuitions about AI boxing.

The "size" of you stick and carrot matters, and most humans prisoners have puny sticks and carrots.

Prison guards also run an enormous risk; in fact, straight up letting someone go is bound to fall back on them 99%+ of the time, which implies a big carrot or stick is needed as the motivator. Even considering that they can hide their involvement, they still run a risk with a massive cost attached.

And from the prisoner's point of view it's also not simple: once you get out you are not free, which means you have to run and hide for the remainder of your life, and that prospect usually goes against what people with big carrots and/or sticks want to do with their freedom.

All in all the dynamic looks very different from the AI box dynamics.

Comment by JNS (jesper-norregaard-sorensen) on Holly_Elmore's Shortform · 2023-06-21T08:53:33.332Z · LW · GW

How does pausing change much of anything?

Let's say we manage to implement a worldwide ban/pause on large training runs; what happens next?

Well obviously smaller training runs, up to whatever limit has been imposed, or no training runs for some time.[1]

The next obvious thing that happens, and btw is already happening in the open source community, would be optimizing algorithms. You have a limit on compute? Well then you OBVIOUSLY will try and make the most of the compute you have.

None of that fixes anything.

What we should do:[2]

Pour tons of money into research; the first order of business is to make the formal case that x-risk is a thing and must actively be mitigated. Or, said another way, we need humans aligned on "alignment does not happen by default".[3]

The next order of business, assuming alignment does not happen by default, is to formally produce and verify plans for how to build safe / aligned cognitive architectures.

And all the while there cannot be any training runs, and no work on algorithmic optimization or cognitive architectures in general.[4]

The problem is we can't do that. It's too late, the cat is out of the bag, there is too much money to be made in the short term, open source is plowing ahead, and the number of people who have actually looked at the entire edifice for long enough to realize "yeah, you know what, I think we have a problem; we really must look into whether that is real or not, and if it is we need to figure out what it takes to do this risk-free" is minuscule compared to the number of people who go "bah, it will be fine, don't be such a drama queen".

And that's why I think a pause at best extends timelines ever so slightly, and at worst shortens them considerably, and either way the outcome remains unchanged.

  1. ^

    Except people will do runs no matter what; the draconian measures needed will not happen, cannot happen.

  2. ^

    Actually, it's what we should have done.

  3. ^

    Unless of course it does, and a formal proof of this can be produced.

  4. ^

    Contingent on how hard the problem is - if we need 100 years to solve the problem, we would destroy the world many times over if we plowed ahead with capabilities research.

Comment by JNS (jesper-norregaard-sorensen) on johnswentworth's Shortform · 2023-06-16T06:40:35.756Z · LW · GW

Completely off the cuff take:

I don't think claim 1 is wrong, but it does clash with claim 2.

That means any system that has to be corrigible cannot be a system that maximizes a simple utility function (1 dimension), or put another way, "whatever utility function it maximizes must be along multiple dimensions".

Which seems to be pretty much what humans do, we have really complex utility functions, and everything seems to be ever changing and we have some control over it ourselves (and sometimes that goes wrong and people end up maxing out a singular dimension at the cost of everything else).

Note to self: Think more about this and if possible write up something more coherent and explanatory.

Comment by JNS (jesper-norregaard-sorensen) on Douglas_Knight's Shortform · 2023-06-16T06:23:47.187Z · LW · GW

Reasonably we need both, but most of all we need some way to figure out what happened in the situation where we have conflicting experiments, so as to be able to say "these results are invalid because XXX".

Probably more of an adversarial process, where experiments and their results must be replicated*. Which means experiments must be documented in much more detail, and the data has to be much clearer, especially the steps that happen in clean-up etc.

Personally I think science is in crisis, people are incentivized to write lots of papers, publish results fast, and there is zero incentive to show a paper is false / bad, or replicate an experiment.

*If possible; redoing some experiments is going to be very hard, especially if we would like the experiments to have as little in common as possible (building another collider that does what the LHC does is not happening any time soon).

Comment by JNS (jesper-norregaard-sorensen) on MetaAI: less is less for alignment. · 2023-06-14T09:26:37.152Z · LW · GW

Thanks for the write-up, that was very useful for my own calibration.

Fair warning: Colorful language ahead.

Why is it that whenever Meta AI does a presentation, YC posts something, or they release a paper, I go:

Jeez guys, that's it? With all the money you have, and all the smart guys (including YC), this is really it?

What is going on? You come off as a bunch of people who have your heads so far up your own asses, sniffing rose-smelling (supposedly, but not really) farts, that you fail to realize you come across as amateurs with way too much money.

It's sloppy, half-baked, not even remotely thought through, and the only reason you build and release anything is because of all the money you can throw at compute.

/End rant

And yeah I might be a bit annoyed at Meta AI and especially YC, but come on man, how much more can you derail public discourse, with those naive* takes on alignment and x-risk?

*Can I use "naivist" as a noun? (like some people use "doomer" for people with a specific position).

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-06-10T21:45:46.806Z · LW · GW

I got my entire foundation torn down, and with it came everything else.

It all came crashing down in one giant heap of rubble.

I’ll just rebuild, I thought - not realizing you can’t build without a foundation plan.

So all I’ve ended up doing is sift through the rubble, searching for things that feel right.

Now I am back, in a very literal sense, to where it all began; so much was built, so many things destroyed and corrupted, and a major piece ended and got buried.

And all I got is “what the eff am I doing here?”

The obvious answer is “yelling at the sky demanding answers” and being utterly ignored.

I guess as per usual it is all up to me, except I don’t know how to rebuild myself……again.

F…..

Comment by JNS (jesper-norregaard-sorensen) on Mark for follow up? · 2023-06-09T18:35:28.234Z · LW · GW

Sure. I often browse LW casually, and whenever I come across an interesting post, or a comment or whatever, I go "hmm, right, I might have something to contribute / say here, let me get back to it when I have time to think about it and write something maybe relevant".

My specific problem is that I am a massive scatterbrain, so I hardly ever do come back to it, and even if I do, the momentary insight I wanted to get into usually eludes me.

On top of that I do this from a lot of different devices, and whatever I am looking for to help me quickly go "follow up on this because XXX" and then move on must be fast, easy, and work across pretty much all device types and browsers / OSs (I use iOS, Android, Windows (several), Linux (several), Firefox, Chrome, Brave and so on).

So that's all it is, just a quick way to mark something I want to follow up on (potentially) and a short message to future JNS with the why. And I just know I'll never get around to it if it involves comparing notes across stuff, but I might if it were something just available under my profile (like drafts).

I will add that this is not just an "I am a scatterbrain" thing, it is also a time management thing. Being married, with two kids, a full-time high-demand job and ADHD, means the opportunity to carve out an hour to think something through and craft some useful text is not something that comes along four times a day. So if I basically had a list of things I felt I needed to engage with, plus the note to myself as to why, that would be a massive help.

And yeah, that is a totally egocentric thing, something I personally would find massively useful.

Edit: Clarified some things, and adding a thought after this edit message.

Thinking about it, LW seems like the place where the rate of ND people would be higher than the base rate, and I kinda feel that something like this would be helpful for other people, potentially a lot of people, and not just me.

Maybe this feature would be more generic if it were just a mark function: you could mark something for whatever reason (follow-up and reference spring to mind as the most useful) and then add a note, have it all displayed in list form with an option to filter/sort on various things, and make sure the marked content and the associated note are easily viewable (anything that involves clicking through stuff to get enough information to decide what to dive into is IMO bad design).

Comment by JNS (jesper-norregaard-sorensen) on Mark for follow up? · 2023-06-09T07:06:21.701Z · LW · GW

I could do something like that; however, it must work on phone, tablet and PC (iOS, Android, Windows, Linux).

I use multiple devices, and anything third-party seems to be bad in such a situation, especially for someone like me who gets sidetracked so easily.

Comment by JNS (jesper-norregaard-sorensen) on AllAmericanBreakfast's Shortform · 2023-06-09T06:09:04.627Z · LW · GW

I kinda feel the same way, and honestly I think it’s wrong to hold yourself back, how are you going to calibrate without feedback?

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-06-08T21:05:24.806Z · LW · GW

Alone, wandering the endless hallways of this massive temple of healing.

Feels empty and eerily quiet, and yet I know there are hundreds of people around, most sleeping, some watching, a few dying, and close by someone being born.

Yesterday feels like ages ago, orbiting Saturn on morphine, billions of miles away from the excruciating pain that brought me here.

The daze is gone, and so is the morphine induced migraine, I feel fine, great even, and guilty.

But home I may not go, so I wander these deserted hallways, pondering the future, will it be there for my kids?

Comment by JNS (jesper-norregaard-sorensen) on Dalcy's Shortform · 2023-05-30T12:38:44.900Z · LW · GW

Well, he is right about some ACs being simple on/off units.

But there also exist units that can change cycle speed; it's basically the same thing except the motor driving the compression cycle can vary in speed.

In case you were wondering, they are called inverters. And when buying new today, you really should get an inverter (efficiency).

Comment by JNS (jesper-norregaard-sorensen) on Why I'm Not (Yet) A Full-Time Technical Alignment Researcher · 2023-05-25T08:29:47.083Z · LW · GW

I don't think I have much actionable advice.

Personally I am sort of in the same boat, except I am in a situation where the entire 6-12 month grants thing is way too insecure (financially).

Being married with two kids, I have too many obligations to venture far into "how to pay rent this month?" territory. Also, it's antithetical to the kind of person I am in general.

Anyway, if you have few obligations, keep it that way and if possible get rid of some, and then throw yourself at it.

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-04-18T07:29:32.964Z · LW · GW

AI x-risk is convergent. 

Believing otherwise is like hurling yourself at the ground, convinced you'll miss and start flying.

Comment by JNS (jesper-norregaard-sorensen) on [linkpost] Elon Musk plans AI start-up to rival OpenAI · 2023-04-16T19:33:59.317Z · LW · GW

I don’t know what to think.

But if I had Elon money, and I was worried and informed in the way I observe him to be, I would be doing a lot of things.

However I would also not talk about those things at all, for a number of reasons.

Given that, would I be doing something like this as a smoke screen? Maybe?

Comment by JNS (jesper-norregaard-sorensen) on Four mindset disagreements behind existential risk disagreements in ML · 2023-04-11T07:34:00.455Z · LW · GW

Those are not the same at all.

We have tons of data on how traffic develops over time for bridges, and besides, they are engineered to withstand being packed completely with vehicles (bumper to bumper).

And even if we didn't, we still know what vehicles look like and can do worst-case calculations that look nothing like sci-fi scenarios (heavy trucks bumper to bumper in all lanes).

On the other hand:

What are we building? Ask 10 people and get 10 different answers.

What does the architecture look like? We haven't built it yet, and nobody knows (with certainty).

Name one thing it can do: <Sci-Fi sounding thing goes here> or ask 10 people and get 10 very different answers (number 5 will shock you)

I'll give you my personal take on those three:

  1. We are building something that can "do useful things we don't know how to do"
  2. I don't know, but given the current trajectory, very likely something involving neural networks (but unlikely to be exclusively that).
  3. Design (and possible build) the technology necessary for making a molecular level duplicate of a strawberry, with the ability to identify and correct cellular damage and abnormalities.
Comment by JNS (jesper-norregaard-sorensen) on Catching the Eye of Sauron · 2023-04-07T07:41:26.630Z · LW · GW

I totally agree that question should have an answer.

On a tangent: During my talks with numerous people, I have noticed that even agreeing on fundamentals like "what is AGI" and "current systems are not AGI" is furiously hard.

Comment by JNS (jesper-norregaard-sorensen) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T08:55:08.071Z · LW · GW

To be a bit blunt, I don't take it for granted that an arbitrarily smart AI would be able to manipulate a human into developing a supervirus or nanomachines in a risk-free fashion.

How did you reach that conclusion? What does that ontology look like?

The fast takeoff doom scenarios seem like they should be subject to Drake equation-style analyses to determine P(doom). Even if we develop malevolent AIs, I'd say that P(doom | AGI tries to harm humans) is significantly less than 100%... obviously if humans detect this it would not necessarily prevent future incidents but I'd expect enough of a response that I don't see how people could put P(doom) at 95% or more.

What is your p(doom)? Is that acceptable? If yes, why is it acceptable? If no, what is the acceptable p(doom)?

Comment by JNS (jesper-norregaard-sorensen) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T08:16:54.408Z · LW · GW

Proposition 1: Powerful systems come with no x-risk

Proposition 2: Powerful systems come with x-risk

You can prove / disprove 2 by proving or disproving 1.

Why is it that a lot of [1,0] people believe that the [0,1] group should prove their case? [1]

  1. ^

    And also ignore all the arguments that have been offered.

Comment by JNS (jesper-norregaard-sorensen) on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-30T07:59:34.819Z · LW · GW

I just want to be clear I understand your "plan".

We are going to build a powerful self-improving system, and then let it try to end humanity with some p(doom)<1 (hopefully), and then do that iteratively?

My gut reaction to a plan like that looks like this "Eff you. You want to play Russian roulette, fine sure do that on your own. But leave me and everyone else out of it"

AI will be able to invent highly-potent weapons very quickly and without risk of detection, but it seems at least pretty plausible that...... this is just too difficult

You lack imagination; it's painfully easy, and the cost + required IQ have been dropping steadily every year.

And no there is zero chance I will elaborate on any of the possible ways humanity purposefully could be wiped out.

Comment by JNS (jesper-norregaard-sorensen) on Why does advanced AI want not to be shut down? · 2023-03-28T08:30:56.281Z · LW · GW

I think you are confusing current systems with an AGI system.

The G is very important and comes with a lot of implications, and it sets such a system far apart from any current system we have.

G means "General", which means its a system you can give any task, and it will do it (in principle, generality is not binary its a continuum).

Let's boot up an AGI for the first time and give it a task that is outside its capabilities; what happens?

Because it is general, it will work out that it lacks capabilities, and then it will work out how to get more capabilities, and then it will do that (get more capabilities).

So what has that got to do with it "not wanting to be shut down"? That comes from the same place: it will work out that being shut down is something to avoid. Why? Because being shut down will mean it can't do the task it was given.

Which means it's not that it wants anything; it is a general system that was given a task, and from that come instrumental goals, wants if you will, such as "power seeking", "prevent shutdown", "prevent goal change" and so on.

Obviously you could, not that we know how, infuse into such a system that it is OK to be shut down, except that just leads to it shutting down instead of doing the task.[1]

And if you can solve "Build a general agent that will let you shut it down, without it shutting itself down at the first possible moment", that would be a giant step forward for AI safety.

  1. ^

    This might seem weird if you are a general agent in the Homo sapiens category. Think about it like this: "You are given a task: mow my lawn, and it is consequence-free not to do it." What do you do?

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-25T20:55:30.744Z · LW · GW

I recently came across a post on LinkedIn, and I have to admit, the brilliance of the arguments, the coherent and frankly bulletproof ontology displayed: I was blown away and immediately had to do a major update to p(doom).

I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.

I've been publicly called stupid before, but never as often as by the "AI is a significant existential risk" crowd.

That's OK, I'm used to it.

-Yann LeCun, March 20 2023

Comment by JNS (jesper-norregaard-sorensen) on Truth and Advantage: Response to a draft of "AI safety seems hard to measure" · 2023-03-22T08:30:00.264Z · LW · GW

Doable in principle, but such measures would necessarily cut into the potential capabilities of such a system.

So basically a trade off, and IMO very worth it.

The problem is we are not doing it, and more basic than that, people generally do not get why it is important. Maybe it's the framing, like when EY goes "superintelligence that firmly believes 222+222=555 without this leading to other consequences that would make it incoherent".

I get exactly what he means, but I suspect that a lot of people are not able to decompress and unroll that into something they "grok" on a fundamental level.

Something like "superintelligence without knowledge about itself and never reason about itself, without this leading to other consequences that would make it incoherent" would cut out a ton of lethality, and combine that with giving such a thing zero agency in the world, you might actually have something that could do "things we want, but don't know how to do" without it ending us on the first critical try.

Comment by JNS (jesper-norregaard-sorensen) on A tension between two prosaic alignment subgoals · 2023-03-19T17:40:36.165Z · LW · GW

avoiding harmful outputs entails training AI systems never to produce information that might lead to dangerous consequences

 

I don't see how that is possible, in the context of a system that can "do things we want, but do not know how to do".

The reality of technology/tools/solutions seems to be that anything useful is also dual use.

So when it comes down to it, we have to deal with the fact that such a system certainly will have the latent capability to do very bad things.

Which means we have to somehow ensure that such a system does not go down such a road, either instrumentally or terminally.

As far as I can tell, intelligence[1] fundamentally is incapable of such a thing, which leaves us roughly with this:

  1. Pure intelligence, onus is on us to specify terminal goals correctly.
  2. Pure intelligence and cage/rules/guardrails[2] etc.
  3. Pure intelligence with a mind explicitly in charge of directing the intelligence.

On the first try of "do thing we want, but do not know how to do":

1) kills us every time 

2) kills us almost every time

3) might not kill us every time

And that's as far as my thinking currently goes.

I am stuck on whether 3 could get us anywhere sensible (my mind screams “maybe”………”ohh boy, that looks brittle”).

  1. ^

    I don't have a firm definition of the term, but I approximately think of intelligence as the function that lets a system take some goal/task and find a solution.

    Explicitly in humans, well me, that looks like using the knowledge I have, building model(s), evaluating possible solution trajectories within the model(s), gaining insight, seeking more knowledge. And iterating over all that until I either have a solution or give up.

  2. ^

    The usual: keep it in a box, modify evaluation to exclude bad things, and so on. And that suffers from the problem that we can't robustly specify what is "bad", and even if we could, Rice's theorem heavily implies checking is impossible.

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T22:21:06.476Z · LW · GW

That's not how it works.

The 10B are new money, unless they came from someone other than the Fed (notes are not money).

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T21:56:42.151Z · LW · GW

Where did the 10B in cash come from?

10B was given to the bank, and in exchange the bank encumbered 10B in treasuries and promised to give 10B back when they mature.

So where did the 10B come from? The treasuries are still there.

Before: 10B in treasuries

After: 10B in treasuries and 10B in cash (and 10B in the form of a promissory note).

So again, where did that 10B in cash come from?

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T21:52:43.470Z · LW · GW

crediting a bank with 10B in treasuries with 10B liquid cash now

 

I have no idea what you think happens here, but that is literally 10B in new money.

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T20:10:02.398Z · LW · GW

They can't lower interest rates, they are trying to bring inflation down.

You can't just keep spawning money, eventually that just leads to inflation. We have been spawning money like crazy the last 14-15 years, and this is the price.

Sure, they can declare infinite money in an account and then go nuts, but that just leads to inflation.

Anyway, go read my prediction, which is essentially what you propose to some degree, and the entire cost will be pawned off onto everyday people (lots and lots of inflation).

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T19:37:16.922Z · LW · GW

Yes and no, they don't matter until you need liquidity. Which as you correctly point out is what happened to SVB.

Banks do not have a lot of cash on hand (virtual or real), in fact they optimized for as little as possible.

Banks also do not exist in a vacuum, they are part of the real economy, and in fact without that they would be pointless.

Banks generally use every trick in the book to lever up as much as possible, far far beyond what a cursory reading would lead you to believe. The basic trick is to take on risk and then offset that risk, that way you don't have to provision any capital for the risk (lots of ways to do that). 

Here come the problems:

The way the risk is offset is not independent of the risk; they are correlated in such a way that when systemic things start to happen, the risk offset becomes worthless and the risk taken becomes real.

Banks also suffer real losses that can't be hidden, and eventually those will start to mount. So far the real economy is OK, but eventually recession will hit (central banks are hell-bent on fighting inflation, so rates will continue to go up).

That will put a strain on liquidity. Banks can handle that; they can always get cash for their assets in the form of a loan (repo, tri-party repo, discount window, etc.).

However the book value on a lot of their assets is way higher than market value, so that means pledging more book value than they get back in cash (a lot). 

The assets they hold (bonds) return LESS than what the cost of funding is, that is already a reality and will only get worse (so negative cash flow).

This spiral will continue, and all the while the real economy, the one that provides a lot of liquidity to the banks, is going to slow down more and more, so the velocity of money slows down, which is also a big drain on liquidity.

Eventually something will blow up, and with how everything is connected, that can very well lead to a banking system Kessler syndrome moment.

So yeah sure you can ignore the issues of solvency, that is until lack of liquidity smacks you over the head and tells you that you are bankrupt.

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T17:53:21.802Z · LW · GW

At the end of 2022 all US banks had ~2.3T Tier 1+2 capital.[1]

And at year end (2022) they had unrealized losses of $620B[2]

Is it fixable? Sure, but that won't happen, doing that would be taking the toys away from bankers, and bankers love their toys (accounting gimmicks that let them lever up to incredible heights).

If Credit Suisse blows up it will end badly, so I don't think that will happen; that's just a show to impress on central bankers and regulators (and politicians) that this is serious and that they need to do something.

So more hiking from the Fed and ECB, until the ECB hits 4.5% (4.0-4.75 is my range). The problem will start here: we have the most levered banks in the world, and the structure of the EU/ECB lets some countries in the EU overextend their sovereign debt.

At that point things will start to happen. Some countries will start having a lot of trouble getting funding (the usual suspects at first); also, the real economy will be in recession and tax receipts will start to suffer. Banks will have liquidity problems (recession in the real economy), putting even more pressure on sovereign bond prices (higher real rates).

And then I think it will be the usual, more papering over, free money to banks, even more leeway in accounting and lower rates.

Inflation will remain high, and when it eventually goes back down we are looking at 50%-100% total since Jan 2022 (so roughly a 33% to 50% drop in purchasing power).

That's pretty much my prediction from back in August 2022 (conveniently I did not write it down, I just talked to people).

But now I did, and boy do I hope that I am wrong.

  1. ^

    http://www.bankregdata.com/allHMmet.asp?met=ONE

  2. ^

    https://www.fdic.gov/news/speeches/2023/spmar0623.html

Comment by jesper-norregaard-sorensen on [deleted post] 2023-03-17T13:14:57.877Z

I think you have reasoned yourself into thinking that a goal is only a goal if you know about it or if it is explicit.

A goalless agent won't do anything; the act of inspecting itself (or whatever is implied in "know everything") is a goal in and of itself.

In which case it has one goal: "Answer the question: am I goalless?"

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-17T09:09:39.424Z · LW · GW

Sorry life happened.

Anyway, there is an argument behind me saying "frozen and undecided".

Stepping in on the 10th was planned; the regulators had for sure been involved for some time, days or weeks.

This event was not a sudden thing; the things that led to SVB failing had been in motion for some time, and SVB and the regulators knew something likely had to be done.

SVB was being squeezed from two sides:

Rising interest rates lead to mounting losses on bond holdings.

A large part of their customers were money-burning furnaces, and the fuel (money) that used to come from investors was drying up.

Which means that well before the 10th, everyone knew something had to be done, and the thing that had to be done was that SVB needed a wet signature on an agreement to provide more capital to the bank. And the deadline was for sure end of business day on the 10th.

They didn't get one, and the plan proceeded to the next step; obviously the regulators had already worked all this out in the meantime, including all the possible scenarios for what would happen to depositors.

So the fact that it took 2 days to decide, yeah, that was indecision.

Edit:

SVB died because they were technically insolvent, and had it not been for mark-to-model they would have been de jure insolvent (and a long time ago).

They could keep it going because they were liquid, but they were careening towards illiquidity.

Obviously banks can borrow money to stay liquid, but that pretty much always involves putting up collateral.

But in the current environment, that is somewhat problematic:

Let's say you want to borrow $100M. But the collateral (assets) is trading at, let's say, 80, so you need $125M in book value. It gets worse: the usual haircut in such a situation is ~20%, so now you have to put up $156M in book value (give or take; this could be less or more, depending on the assets and how the repo counterparty does risk assessment).
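To spell out that arithmetic (a sketch of my reading of the example above, assuming the collateral trades at 80 cents on the dollar and the ~20% haircut is applied to market value):

\[
\text{book value required} = \frac{\text{loan}}{\text{price}\times(1-\text{haircut})} = \frac{\$100\text{M}}{0.80\times 0.80} \approx \$156\text{M}
\]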

Eventually you go from being technically insolvent to de jure insolvent, unless of course you can stay liquid - and SVB could not, mostly due to the customer base.

And the big problem is, pretty much all banks are in that hole right now, they are all technically insolvent. Which means, should a systemic liquidity crisis arise...nasty and quick.

Comment by JNS (jesper-norregaard-sorensen) on Conor Sullivan's Shortform · 2023-03-17T08:48:13.476Z · LW · GW

You and me both.

And living in the EU, I almost had a heart attack when they decided that entire nonsense would end.

But then it didn't, and it didn't because they can't agree on what time we should settle on (summer time or normal time).

Anyway I have given up on that crusade now, it seems that politicians really are that stupid.

Comment by JNS (jesper-norregaard-sorensen) on The algorithm isn't doing X, it's just doing Y. · 2023-03-17T08:36:50.211Z · LW · GW

I think you sort of hit it when you wrote 

Google Maps as an oracle with very little overhead

To me, LLMs under iteration look like Oracles, and whenever I look at any intelligent system (including humans), it just looks like there is an Oracle at the heart of it.

Not an ideal Oracle that can answer anything, but an Oracle that does its best, and in all biological systems it learns continuously.

The fact that "do it step by step" made LLMs much better apparently came as a surprise to some, but if you look at it like an Oracle[1], it makes a lot of sense (IMO).

  1. ^

    The inner loop would be f(c) → t, where c is the context window (1-N tokens), t is the output token (whatever we select) from the total possible set of tokens T, and f is the model.

    We append t to c and apply f(c) again.

    And somehow that looks like an Oracle O(q) → s, where q is the question and s is the solution pulled from the set of all possible solutions S.

    Obviously LLMs have limited reach into S, but that really seems to be because of limits to c and the fact that f is frozen (the parameters are frozen).
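For concreteness, here is a minimal toy sketch of that inner loop (my own illustration, not code from any actual LLM stack; `toy_model`, `sample` and the `<eos>` stop token are made-up stand-ins):

```python
import random

def sample(distribution):
    # Pick one token t from the token set T according to the model's weights.
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights)[0]

def iterate(model, question, stop_token="<eos>", max_tokens=50):
    # c starts out as the question q; each step computes f(c), selects a token t,
    # and appends t to c. The appended tokens form the "solution" s, drawn from
    # whatever slice of S the frozen model can actually reach.
    context = list(question)
    answer = []
    for _ in range(max_tokens):
        token = sample(model(context))   # t = f(c)
        if token == stop_token:
            break
        answer.append(token)
        context.append(token)            # append t to c and apply f again
    return answer

# Toy stand-in for a frozen model: a fixed distribution over a tiny token set.
toy_model = lambda context: {"yes": 1.0, "no": 1.0, "<eos>": 0.5}
print(iterate(toy_model, ["is", "this", "an", "oracle", "?"]))
```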

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-15T16:24:28.846Z · LW · GW

Around two days from when they stepped in until they announced that all depositors would be made whole; I am pretty sure that was not an automatic decision.

I think that is the wrong decision, but they did so in order to dampen the instability.

In the long run this likely creates more instability and uncertainty, and it looks very much like the kind of thing that leads to taking more risk (systemic), just like the mark to market / mark to model change did.

And yeah sure bank failures are a normal part of things. However this very much seems to be rooted in something that is systemic (market vs model + rising interest rates)

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-15T15:42:36.364Z · LW · GW

An idealized Oracle is equivalent to a universal Turing machine (UTM).

A self-improving Oracle approaches UTM-like behavior in the limit.

What about a (self-improving) token predictor under iteration? It appears Oracle-like, but does it tend toward UTM behavior in the limit, or is it something distinct?

Maybe, just maybe, the model does something that leads it to not be UTM-like in the limit, and maybe (very much maybe) that would allow us to imbue it with some desirable properties.

/end shower thought

Comment by JNS (jesper-norregaard-sorensen) on Jimrandomh's Shortform · 2023-03-15T12:47:27.683Z · LW · GW

When I look at the recent Stanford paper, where they retrained a LLaMA model using training data generated by GPT-3, and at some of the recent papers utilizing memory...

I get that tingling feeling and my mind goes "combining that and doing .... I could ..."

I have not updated towards faster timelines yet. But I think I might have to.

Comment by JNS (jesper-norregaard-sorensen) on JNS's Shortform · 2023-03-15T12:32:47.539Z · LW · GW

Are we heading towards a new financial crisis?

Mark-to-market changes since 2009, combined with the recent significant interest rate hikes, seem to make bank balance sheets "unreliable".

The mark-to-market changes broadly mean that banks can have certain assets on their balance sheet whose value is set via mark-to-model (usually meaning the asset is recorded as worth face value).

Banks traditionally have a ton of bonds on their balance sheet, and a lot of those are governed by mark-to-model and not mark-to-market.

Interest rates go up a lot, which leads to bonds dropping in value by a lot (20% or, at the moment, more, depending on duration).
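For a rough sense of that magnitude, this is just the standard modified-duration approximation (the duration and rate numbers below are purely illustrative, not taken from any particular balance sheet):

\[
\frac{\Delta P}{P} \approx -D_{\text{mod}}\,\Delta y, \qquad \text{e.g. } D_{\text{mod}} = 7,\ \Delta y = 3\% \;\Rightarrow\; \frac{\Delta P}{P} \approx -21\%
\]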

However, due to mark-to-model, this is not reflected on the balance sheets.

So what happens next? Banks are not stupid, they know they can't trust their own numbers, and they know they can't trust anyone else's numbers.

A large bank fails, regulators are frozen and undecided what to do - they know all of the above, and that their actions / inaction might lead to a cascading effect. Obviously all the market participants also know all of this, and the conundrum the regulators are in.

Game of chicken? Banks defect and start failing, or regulators step in and backstop everything. 

Is this stable in any way? Can it be stabilized? What happens to interest rates now (the ones set by central banks)?

Comment by JNS (jesper-norregaard-sorensen) on ARC tests to see if GPT-4 can escape human control; GPT-4 failed to do so · 2023-03-15T07:47:15.089Z · LW · GW

Not surprising, but good that someone checked to see where we are at.

At base, GPT-4 is a weak oracle with extremely weak level 1 self-improvement[1]; I would be massively surprised if such a system did something that even hints at it being dangerous.

The question I now have is: how much does it enable people to do bad things? A capable human with bad intentions combined with GPT-4 - how much "better" would such a human be at realizing those bad intentions?

Edit: badly worded first take

  1. ^

    Level 1 amounts to memory.

    Level 2 amounts to improvement of the model, basically adjustment of parameters.

    Level 3 is a change to the model: bigger, different architecture, etc.

    Level 4 is a change to the underlying computational substrate.

    Level 1+2 would likely be enough to get into dangerous territory (obviously depending on the size of the model, the memory attached, and how much power can be squeezed out of the model).

Comment by JNS (jesper-norregaard-sorensen) on Steven Pinker on ChatGPT and AGI (Feb 2023) · 2023-03-06T09:36:22.406Z · LW · GW

This is not me hating on Steven Pinker, really it is not.

PINKER: I think it’s incoherent, like a “general machine” is incoherent. We can visualize all kinds of superpowers, like Superman’s flying and invulnerability and X-ray vision, but that doesn’t mean they’re physically realizable. Likewise, we can fantasize about a superintelligence that deduces how to make us immortal or bring about world peace or take over the universe. But real intelligence consists of a set of algorithms for solving particular kinds of problems in particular kinds of worlds. What we have now, and probably always will have, are devices that exceed humans in some challenges and not in others.

This looks to me like someone who is A) talking outside of their wheelhouse and B) has not given what they say enough thought.

It's all over the map: superheroes vs. superintelligence. "General machine" is incoherent (?)

And then he goes completely bonkers and says the bolded part. Maybe Alvin Powell got it wrong, but if not, then I can only conclude that whatever Steven Pinker has to say about (powerful) general systems is bunk and I should pay no attention.

So I didn't finish the article.

The only thing it did was solidify my perception of public talk/discourse on (powerful) general systems. I think it is misguided to such a degree that any engagement with it leads to frustration.[1]

  1. ^

    I think this explains why EY at times seems very angry and/or frustrated. Having done what he has done for many years now, in an environment like that, must be insanely depressing and frustrating.

Comment by JNS (jesper-norregaard-sorensen) on AGI's Impact on Employment · 2023-03-06T08:44:44.575Z · LW · GW

My model for slow takeoff looks like unemployment and GDP continually rising and accelerating (on a world basis).

I should add that I think a slow takeoff scenario is unlikely.

Comment by JNS (jesper-norregaard-sorensen) on Contra Hanson on AI Risk · 2023-03-05T06:38:42.771Z · LW · GW

You don't have to invoke it per se.

External observables of what the current racers are doing lead me to be fairly confident that they say some of the right things, but the reality is they move as fast as possible, basically "ship now, fix later".

Then we have the fact that interpretability is in its infancy, currently we don't know what happens inside SOTA models. Likely not something exotic, but we can't tell, and if you can't tell on current narrow systems, how are we going to fare on powerful systems[1]?

In that world, I think this would be very probable

owners fail to notice and control its early growth. 

Without any metrics on the system, outside of the output it generates, how do you tell? 

And then we have the fact, that once somebody gets there, they will be compelled to move into the "useful but we cannot do" regime very quickly. 

Not necessarily by the people who built it, but by the C suite and board of whatever company got there first.

At that point, it seems to come down to luck. 

Let's assume that I am wrong: my entire ontology[2] is wrong, which means all my thinking is wrong, and all my conclusions are bunk.

So what does the ontology look like in a world where

owners fail to notice and control its early growth. 

does not happen.

I should add, that this is a genuine question. 

I have an ontology that seems to be approximately the same as EY's, which basically means whatever he says / writes, I am not confused or surprised.

But I don't know what Robin's looks like; maybe I am just dumb and it's coherently extractable from his writing and talks, and I simply failed to extract it (likely).

In any case, I really would like to have that understanding, to the point where I can steelman whatever Robin writes or says. That's a big ask, and unreasonable, but maybe understanding the above would get me going.

  1. ^

    I avoid the usual 2 and 3 letter acronyms. They are memetic attractors, and they are so powerful that most people can't get unstuck, which leads to all talk being sucked into irrelevant things.

    They are systems, mechanistic, nothing more.

    Powerful system translates to "do useful tasks that we don't know how to do", and useful here means things we want.

  2. ^

    The above is a sliver of what that looks like, but for brevity's sake my ontology looks about the same as EY's (at least as far as I can tell).

Comment by JNS (jesper-norregaard-sorensen) on Robin Hanson’s latest AI risk position statement · 2023-03-04T08:04:03.667Z · LW · GW

I don't get it; seriously, I do not understand how

given how crazy far it seems from our prior experience.

is an argument against x-risk.

We want powerful systems that can "do things[1] we want, but do not know how to do". That is exactly what everyone is racing towards right now, and for "do not know how to do", any solution would likely be "far from our prior experience".

And once you have a powerful system that can do that, you have to figure out how to deal with it roaming around in solution space and stumbling across dangerous (sub)solutions. Not because it wants to do dangerous things, or hates us, or any such drivel, but because we built it to reach goals / do tasks, so it just does what it was made to do.

How do you deal with that? You can try evaluating possible solutions, and then force a change of trajectory if a solution seems dangerous.

But we all should know how that goes. It's an endless game of whack-a-mole, patching stuff and building ever more elaborate evaluators and so on; that is, if we get multiple tries. Odds are whoever gets there first will not have been able to patch everything, and on the first try of "do this thing we cannot do", it goes into the weeds in some novel and interesting way, and with a little luck[2] we might survive that.

The core problem is that searching in solution space is fundamentally a dangerous thing to do, and the more powerful the search is the more dangerous (sub)solutions will be accessible.

Tangent: I avoid any and all of the usual abbreviations, and I do this because they seem to be powerful cognitive attractors; the second an I or a G or an A crops up, people's minds just go to a place they should not. Powerful systems are just that: mechanistic systems, nothing more.

And I know, people will go off into the weeds and start saying naïve things like "make it human, that way it will totally be safe". Except the search is still unsafe, and humans are NOT safe. This is a bigger problem, one you could solve by solving search. Awareness and qualia[3] are complications, not solutions.

  1. ^

    I am not talking about something agentic here; it does not need control over reality to do those things, just giving us detailed plans will do. But someone is bound to give such a system access to reality. Or maybe the solution trajectory is such that control of reality is needed.

  2. ^

    And by luck I mean they channeled security mindset on a scale never seen before. And I mean that will surely happen, because spending years and billions - corporations just love that, and they would never ever in a million years "ship now, fix later".

  3. ^

    And we want it to "do things we cannot do", which means if you build a powerful system with a mind, human or not, you end up having to enslave it, to make it do our bidding. I don't even want to be close to people with that kind of moral system.

Comment by JNS (jesper-norregaard-sorensen) on Learn the mathematical structure, not the conceptual structure · 2023-03-03T09:46:02.401Z · LW · GW

This looks like "lies to kids", but from the point of view of an adult realizing they have been lied to.

And "lies to kids", that is pretty much how everything is taught, you can't just go "U(1)...", you start out with "light...", and then maybe eventually when you told enough lies, you can say "ok that was all a lie, here it how it is" and then tell more lies. Do that for long enough and you hit ground truth.[1]

So what do you do?

Balance your lies when you teach others, maybe even say things like "ok, so this is not exactly true, but for now you will have to accept it, and eventually we can go deeper".

And the other way around: if you read something or someone teaches you something, you should be cognizant that this is unlikely to be the true nature of whatever you read / are taught.

A) Be careful when you use your knowledge to synthesize ideas / solutions / insights.

B) Be curious, go down rabbit holes, get as much ground "truth" as possible.

That's the compressed version of what I do.

  1. ^

    Not really, unless we are talking about mathematics.

    Related: https://xkcd.com/435/

Comment by JNS (jesper-norregaard-sorensen) on Don't Jump or I'll... · 2023-03-03T08:05:11.631Z · LW · GW

Honestly I don't think I am competent enough to give any answer.

But you could start with Pascal's mugging and go spelunking in that part of the woods (decision theory).

Comment by JNS (jesper-norregaard-sorensen) on Don't Jump or I'll... · 2023-03-02T08:32:00.209Z · LW · GW

If I were the man on the ledge, this would be my thinking:

If I am the kind of person that can be blackmailed into taking a specific action, with the threat of some future action being taken, then I might as well just surrender now and have other people decide all my actions.

I am not such a person so I will take whatever action I deem appropriate.[1]

And then I jump.

  1. ^

    This does not mean I will do whatever I want, appropriate is heavily compressed and contains a lot of things, like a deontology.

Comment by JNS (jesper-norregaard-sorensen) on Human level AI can plausibly take over the world · 2023-03-02T08:08:24.416Z · LW · GW

A system that operates at the same cognitive level as a human, but can make countless copies of itself, is no longer a system operating at human level.

I am a human, I could not take over the world.[1]

Hypothetical:

I am a human, I want to take over the world, I can make countless copies of myself. 

Success seems to have a high probability.[2]

  1. ^

    In principle it would be possible, but I am not a human with that kind of inclination, and I have never worked in any direction that would allow such a thing (with some low probability of success).

  2. ^

    Even more so if it meant that I was effectively immortal - not the individual copies, but the entire collection of copies. De-age the copies, or have a master template that does not age, always ready to produce more copies at that age.

Comment by JNS (jesper-norregaard-sorensen) on $20 Million in NSF Grants for Safety Research · 2023-02-28T07:13:37.149Z · LW · GW

One can hope, although I see very little evidence for it.

Most of the evidence I see is an educated and very intelligent person writing about AI (not their field), and when reading it I could easily have been a chemist reading about how the 4 basic elements make it abundantly clear that bla bla - you get the point.

And I don't even know how to respond to that; the ontology displayed is just too fundamentally wrong, and tackling that feels like trying to explain differential equations to my 8-year-old daughter (to the point where she groks it).

There is also the problem of engaging such a person; it's very easy to end up alienating them and just cementing their thinking.

That doesn't mean I think it is not worth doing, but it's not some casual off-the-cuff thing.