Posts

How I Wrought a Lesser Scribing Artifact (You Can, Too!) 2024-08-02T03:35:00.972Z
Lorxus's Shortform 2024-05-18T17:57:19.721Z
(Geometrically) Maximal Lottery-Lotteries Are Probably Not Unique 2024-05-10T16:00:08.217Z
(Geometrically) Maximal Lottery-Lotteries Exist 2024-05-03T19:29:01.775Z
My submission to the ALTER Prize 2023-09-30T16:07:35.190Z
Untangling Infrabayesianism: A redistillation [PDF link; ~12k words + lots of math] 2023-08-01T12:42:35.744Z

Comments

Comment by Lorxus on Internal music player: phenomenology of earworms · 2024-11-16T18:52:18.905Z · LW · GW

Do you know when you started experiencing having an internal music player? I recall that that started for me when I was about 6. Also, do you know whether you can deliberately pick a piece of music, or other nonmusical sonic experiences, to playback internally? Can you make them start up from internal silence? Under what conditions can you make them stop? Do you ever experience long stretches where you have no internal music at all?

Comment by Lorxus on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-16T17:09:04.536Z · LW · GW

Sure - I can believe that that's one way a person's internal quorum can be set up. In other cases, or for other reasons, they might be instead set up to demand results, and evaluate primarily based on results. And that's not great or necessarily psychologically healthy, but then the question becomes "why do some people end up one way and other people the other way?" Also, there's the question of just how big/significant the effort was, and thus how big of an effective risk the one predictor took. Be it internal to one person or relevant to a group of humans, a sufficiently grand-scale noble failure will not generally be seen as all that noble (IME).

Comment by Lorxus on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-16T16:49:08.978Z · LW · GW

This makes some interesting predictions re: some types of trauma: namely, that they can happen when someone was (probably even correctly!) pushing very hard towards some important goal, and then either they ran out of fuel just before finishing and collapsed, or they achieved that goal and then - because of circumstances, just plain bad luck, or something else - that goal failed to pay off in the way that it usually does, societally speaking. In either case, the predictor/pusher that burned down lots of savings in investment doesn't get paid off. This is maybe part of why "if trauma, and help, you get stronger; if trauma, and no help, you get weaker".

Comment by Lorxus on D&D.Sci Coliseum: Arena of Data Evaluation and Ruleset · 2024-11-11T21:10:45.050Z · LW · GW

I didn't enjoy this one as much, but that's likely down to not having had the time/energy to spend on thinking it through deeply. That said, I mostly feel like garbage for having done literally worse than chance, and I feel like it probably would have been better if I hadn't participated at all.

Comment by Lorxus on Some Rules for an Algebra of Bayes Nets · 2024-11-07T04:10:58.904Z · LW · GW

Let me see if I've understood point 3 correctly here. (I am not convinced I have actually found a flaw, I'm just trying to reconcile two things in my head here that look to conflict, so I can write down a clean definition elsewhere of something that matters to me.)

The joint distribution factors over the original diagram. In that diagram, the two variables at the ends of the to-be-added arrow were conditionally independent of each other, given the relevant conditioning set. Because the distribution factors over the original diagram, and because those two variables were conditionally independent there given that conditioning set, we can very straightforwardly show that the distribution factors over the arrow-added diagram, too. This is the stuff you said above, right?

But if we go the other direction, assuming that some arbitrary distribution factors over the arrow-added diagram, I don't think that we can then still derive that it factors over the original diagram in full generality, which was what worried me. But that break of symmetry (and thus lack of equivalence) is... genuinely probably fine, actually - there's no rule for arbitrarily deleting arrows, after all.

That's cleared up my confusion/worries, thanks!

Comment by Lorxus on Some Rules for an Algebra of Bayes Nets · 2024-11-06T21:55:48.353Z · LW · GW

> We’ll refer to these as “Bookkeeping Rules”, since they feel pretty minor if you’re already comfortable working with Bayes nets. Some examples:
>
>   • We can always add an arrow to a diagram (assuming it doesn’t introduce a loop), and the approximation will get no worse.

Here's something that's kept bothering me on and off for the last few months: This graphical rule immediately breaks Markov equivalence. Specifically, two DAGs are Markov-equivalent only if they share an (undirected) skeleton. (Lemma 6.1 at the link.)

If the major/only thing we care about here regarding latential Bayes nets is that our Grand Joint Distribution factorize over (that is, satisfy) our DAG (and all of the DAGs we can get from it by applying the rules here), then by Thm 6.2 in the link above, that distribution is also globally/locally Markov wrt that DAG. This holds even when positive probability is not guaranteed for some of the possible joint states, unlike Hammersley-Clifford would require.

That in turn means that (Def 6.5) there are some distributions which factor over the arrow-added DAG but not over the original one (where the arrow-added DAG trivially has the same vertices as the original does); specifically, because the two DAGs don't (quite) share a skeleton, they can't be Markov-equivalent, and because they aren't Markov-equivalent, a distribution that is Markov wrt the arrow-added DAG no longer needs to be (locally/globally) Markov wrt the original (and in fact there must exist some distributions which explicitly break this), and because of that, such distributions need not factor over the original. Which I claim we should not want here, because (as always) we care primarily about preserving which joint probability distributions factorize over/satisfy which DAGs, and of course we probably don't get to pick whether our Grand Joint Distribution is one of the ones where that break in the chain of logic matters.
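
To make the asymmetry concrete, here's a minimal two-node sketch (toy numbers, nothing from the post): the joint distribution below factors over the diagram with the arrow X → Y, but not over the arrowless diagram, exactly because the two diagrams aren't Markov-equivalent.

```python
import numpy as np

# G: nodes X, Y with no arrow; G': the same nodes plus the arrow X -> Y.
# Factoring over G demands P(x, y) = P(x) * P(y); factoring over G' only demands
# P(x, y) = P(x) * P(y | x), which every joint distribution satisfies.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])              # joint P(X, Y) with X and Y dependent

P_x = P.sum(axis=1, keepdims=True)      # marginal P(X), shape (2, 1)
P_y = P.sum(axis=0, keepdims=True)      # marginal P(Y), shape (1, 2)

print(np.allclose(P, P_x * P_y))        # False: P does not factor over G
print(np.allclose(P, P_x * (P / P_x)))  # True: P factors over G' (trivially)
```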

Comment by Lorxus on D&D Sci Coliseum: Arena of Data · 2024-10-22T18:49:51.989Z · LW · GW

I'm going to start by attacking this a little on my own before I even look much at what other people have done.

Some initial observations from the SQL+Python practice this gave me a good excuse to do:

  • Adelon looks to have rough matchups against Elf Monks. Which we don't have. They are however soft to even level 3-4 challengers sometimes. Maybe Monks and/or Fencers have an edge on Warriors?
  • Bauchard seems to have particularly strong matchups against other Knights, so we don't send Velaya there. They seem a little soft to Monks and to Dwarf Ninjas and especially to Knights, so maybe Zelaya? Boots should help here.
  • Cadagal has precious few defeats, but one of them might be to a level 2(!) Human Warrior with fancy +3 Gauntlets. Though it seems like there's a lot of combats where some Cadagal-like fighter has +4 Boots instead? Not sure if that's the same guy.
    • And on that note, the max level is 7, and the max bonus for Boots and Gauntlets both is +4.
    • Max Boots (+4) is always on a level 7 Elf Ninja with +3 Gauntlets (but disappears altogether most of the way through the dataset).
    • Max Gauntlets (+4) is on either a level 7 Dwarf Monk who upgraded from +1 Boots to +3 Boots halfway through, or else there's two of them. Thankfully we're not facing them.
  • Deepwrack poses problems. They have just as few defeats, and one of them even contradicts the ordering I derived below! Ninjas are meant to lose to Monks. Maybe the speed matters a lot in that case?
  • It looks like a strict advantage in level or gear - holding all else constant - means you win every time. If everything is totally identical, you win about half the time. (Which seems obvious but worth checking.)
  • Looking through upsets - bouts where the classes are different, the losing fighter had at least 2 levels on the winner, and the loser's gear was no better than the winner's (rough filter sketched in code after this list) - we generally see that:
    • Fencers beat Monks and Rangers and lose to Knights, Ninjas, and Warriors
    • Knights beat Fencers and Ninjas, tie(???) with Monks and Warriors, and lose (weakly) to Rangers
    • Monks beat Ninjas, Rangers, and maybe Warriors, tie (?) with Knights, and lose to Fencers
    • Ninjas beat Fencers and (weakly) Rangers, and lose to Knights, Monks, and Warriors
    • Rangers beat Knights (weakly), Ninjas, and Warriors, tie with Fencers, and lose to Monks
    • Warriors beat Fencers, Ninjas, tie(?) with Knights, and lose to Rangers and maybe Monks
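
Roughly the upset filter I used for that pass, as a pandas sketch (the column names here are made up, not the real schema):

```python
import pandas as pd

# Hypothetical schema: one row per bout, with winner_* / loser_* columns.
df = pd.read_csv("arena_matches.csv")

upsets = df[
    (df.winner_class != df.loser_class)
    & (df.loser_level >= df.winner_level + 2)
    & (df.loser_boots <= df.winner_boots)
    & (df.loser_gauntlets <= df.winner_gauntlets)
]

# Which classes keep beating stronger-on-paper opponents of each other class?
print(upsets.groupby(["winner_class", "loser_class"]).size().unstack(fill_value=0))
```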

So my current best guess (pending understanding which gear is best for which class/race) is: 

Willow v Adelon, Varina v Bauchard, Xerxes v Cadagal, Yalathinel v Deepwrack.

If I had to guess what gear to give to who: Warrior v Knight is a rough matchup, so Varina's going to need the help; the rest of my assignments are based thus far on ~vibes~ for whether speed or power will matter more for the class. Thus:

Willow gets +2 Boots and +1 Gauntlets, Varina gets +4 Boots and +3 Gauntlets, Xerxes gets +1 Boots and +2 Gauntlets, and Yalathinel gets +3 Boots.

Some theories I need to test:

  • Race affects how good you are at a class. Elves might be best at rangering, say.
  • Race and/or class affect how much benefit you get out of boots and/or gauntlets. Being a warrior might mean you get full benefit from gauntlets but none from boots.
  • Color might affect how well classes do. Ninjas wearing red might win way less often.
    • The color does not actually seem to affect ninjas all that much if at all - 6963 vs 6762 wins. Could still be a tiebreaker? (See the quick check after this list.)
    • Color doesn't affect things much overall either: 40136 vs 39961 wins.
  • There's some rank-ordering of class+race+level matchups, maybe an additive one.
    • Alternatively there could be some nontransitive thing going on with tiebreaks sometimes from levels, races, and gear?
    • On further reflection that totally seems to be what's going on here.
    • Maybe there's something about the matchup ordering being sorted over (race, class)? D's loss (as a L6 Dwarf Monk) to a L4 Dwarf Ninja is... unexpected to say the least!
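
A quick sanity check on the ninja-color counts above (a minimal sketch; normal approximation to a fair-coin null):

```python
from math import sqrt

def z_score(wins_a, wins_b):
    """How many sigma is this split from 50/50 under a Binomial(n, 0.5) null?"""
    n = wins_a + wins_b
    return (wins_a - n / 2) / sqrt(n * 0.25)

print(z_score(6963, 6762))    # ~1.7 sigma for ninjas by color: weak evidence at best
print(z_score(40136, 39961))  # ~0.6 sigma overall: basically nothing
```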

Wild speculation:

  • If you [use the +4 Boots in combat and beat Cadagal then they'll know you were] responsible [for] ????? ?????? [Boots from his/her/the] House.  [You will gain its] lasting enmity, [and] [people? will?] ???????? ???? ??? ???? ?? ??? ???? ??????? ?? ?? ????? ?? ????????? ?? ??? [upon] your honor [if] ????????? ???? ?? ???? ??? ???? ??? ??? friendship ???? ?? ??? ???? ?? ??? ?????? ??????? ?? ?? ?????.
    • So maybe we're OK to use the +4 Boots as long as it's not against Cadagal?
    • No idea how to even guess at what's going on in that second sentence apart from "bad things will happen and everyone will hate you, you dirty thief".

Comment by Lorxus on Why Large Bureaucratic Organizations? · 2024-08-29T15:14:24.256Z · LW · GW

I'm gonna leave my thoughts on the ramifications for academia, where a major career step is to repeatedly join and leave different large bureaucratic organizations for a decade, as an exercise to the reader.

> Like, in a world where the median person is John Wentworth (“Wentworld”), I’m pretty sure there just aren’t large organizations of the sort our world has.

I have numerous thoughts on how Lorxusverse Polity handles this problem but none of it is well-worked out enough to share. In sum though: Probably cybernetics (in the Beer sense) got discovered way earlier and actually ever used as stated and that was that, no particular need for dominance-status as glue or desire for it as social-good. (We'd be way less social overall, though, too, and less likely to make complex enduring social arrangements. There would be careful Polity-wide projects for improving social-contact and social-nutrition. They would be costly and weird. Whether that's good or bad on net, I can't say.)

Comment by Lorxus on Shifting Headspaces - Transitional Beast-Mode · 2024-08-28T15:22:01.563Z · LW · GW

Sure, but you obviously don't (and can't even in principle) turn that up all the way! The key is to make sure that that mode still exists and that you don't simply amputate and cauterize it.

Comment by Lorxus on Wei Dai's Shortform · 2024-08-28T15:16:34.561Z · LW · GW

> [2.] maybe one could go faster by trying to more directly cleave to the core philosophical problems.
>
> ...
>
> An underemphasized point that I should maybe elaborate more on: a main claim is that there's untapped guidance to be gotten from our partial understanding--at the philosophical level and for the philosophical level. In other words, our preliminary concepts and intuitions and propositions are, I think, already enough that there's a lot of progress to be made by having them talk to each other, so to speak.

OK but what would this even look like?\gen

Toss away anything amenable to testing and direct empirical analysis; it's all too concrete and model-dependent.

Toss away mathsy proofsy approaches; they're all too formalized and over-rigid and can only prove things from starting assumptions we haven't got yet and maybe won't think of in time.

Toss away basically all settled philosophy, too; if there were answers to be had there rather than a few passages which ask correct questions, the Vienna Circle would have solved alignment for us.

What's left? And what causes it to hang together? And what causes it not to vanish up its own ungrounded self-reference?

Comment by Lorxus on Wei Dai's Shortform · 2024-08-28T15:09:08.545Z · LW · GW

> Clearly academia has some blind spots, but how big? Do I just have a knack for finding ideas that academia hates, or are the blind spots actually enormous?

From someone who left a corner of it: the blindspots could be arbitrarily large as far as I know, because there seemed to me to be no real explicit culture of Hamming questions/metalooking for anything neglected. You worked on something vaguely similar/related to your advisor's work, because otherwise you can't get connections to people who know how to attack the problem.

Comment by Lorxus on Wei Dai's Shortform · 2024-08-28T14:57:58.970Z · LW · GW

As my reacts hopefully implied, this is exactly the kind of clarification I needed - thanks!

> Like, bro, I'm saying it can't think. That's the tweet. What thinking is, isn't clear, but That thinking is should be presumed, pending a forceful philosophical conceptual replacement!

Sure, but you're not preaching to the choir at that point. So surely the next step in that particular dance is to stick a knife in the crack and twist?

That is - 

"OK, buddy:

Here's property P (and if you're good, Q and R and...) that [would have to]/[is/are obviously natural and desirable to]/[is/are pretty clearly a critical part if you want to] characterize 'thought' or 'reasoning' as distinct from whatever it is LLMs do when they read their own notes as part of a new prompt and keep chewing them up and spitting the result back as part of the new prompt for itself to read.

Here's thing T (and if you're good, U and V and...) that an LLM cannot actually do, even in principle, but which would be trivially easy for (say) an uploaded (and sane, functional, reasonably intelligent) human H, even if H is denied (almost?) all of their previously consolidated memories and just working from some basic procedural memory and whatever Magical thing this 'thinking'/'reasoning' thing is."

And if neither you nor anyone else can do either of those things... maybe it's time to give up and say that this 'thinking'/'reasoning' thing is just philosophically confused? I don't think that that's where we're headed, but I find it important to explicitly acknowledge the possibility; I don't deal in more than one epiphenomenon at a time and I'm partial to Platonism already. So if this 'reasoning' thing isn't meaningfully distinguishable in some observable way from what LLMs do, why shouldn't I simply give in?

Comment by Lorxus on Announcing ILIAD — Theoretical AI Alignment Conference · 2024-08-25T06:02:04.883Z · LW · GW

> https://www.lesswrong.com/posts/r7nBaKy5Ry3JWhnJT/announcing-iliad-theoretical-ai-alignment-conference#whqf4oJoYbz5szxWc

you didn't invite me so you don't get to have all the nice things, but I did leave several good artifacts and books I recommend lying around. I invite you to make good use of them!

Comment by Lorxus on Dialogue on What It Means For Something to Have A Function/Purpose · 2024-08-15T18:01:36.270Z · LW · GW

> (Minor quibble: I’d be careful about using “should” here, as in “the heart should pump blood”, because “should” is often used in a moral sense. For instance, the COVID-19 spike protein presumably has some function involving sneaking into cells, it “should” do that in the teleological sense, but in the moral sense COVID-19 “should” just die out. I think that ambiguity makes a sentence like “but it might be another thing to say, that the heart should pump blood” sound deeper/more substantive than it is, in this context.)

This puts me in mind of what I've been calling "the engineer's 'should'" vs "the strategist's 'should'" vs "the preacher's 'should'". Teleological/mechanistic, systems-predictive, is-ought. Really, these ought to all be different words, but I don't really have a good way to cleanly/concisely express the difference between the first two.

Comment by Lorxus on Shifting Headspaces - Transitional Beast-Mode · 2024-08-15T07:15:03.039Z · LW · GW

To paraphrase:

Want and have. See and take. Run and chase. Thirst and slake. And if you're thwarted in pursuit of your desire… so what? That's just the way of things, not always getting what you hunger for. The desire itself is still yours, still pure, still real, so long as you don't deny it or seek to snuff it out.

Comment by Lorxus on tlevin's Shortform · 2024-08-14T17:55:58.349Z · LW · GW

@habryka Forgot to comment on the changes you implemented for the soundscape at LH during the mixer - you may want to put a speaker in the Bayes window overlooking the courtyard firepit. People started congregating/pooling there (and notably not at the other firepit next to it!) because it was the locally-quietest location, and then the usual failure modes of an attempted 12-person conversation ensued.

Comment by Lorxus on A rough and incomplete review of some of John Wentworth's research · 2024-08-08T19:22:35.644Z · LW · GW

> any finite-entropy function

Uh...

  1. The probabilities over the sample space must sum to 1.
  2. By "oh, no, the sample points have to be non-repeating", we have countably infinitely many distinct terms summing to 1. Thus, by the nth term test, the probabilities must tend to 0.
  3. By properties of logarithms, the negative log-probability then has no upper bound over the sample space. In particular, it has no upper bound even restricted to the support of the distribution.
  4. I'm not quite clear on how @johnswentworth defines a "finite-entropy function", but whichever reasonable way he does that, I'm pretty sure that the above means that the set of all such functions of finite entropy, over our sample space as equipped with its distribution, is in fact the empty set. Which seems problematic. I do actually want to know how John defines that. Literature searches are mostly turning up nothing for me. Notably, many kinds of reasonable-looking always-positive distributions over merely countably-large sample spaces have infinite Shannon entropy (quick numeric illustration below).

(h/t to @WhatsTrueKittycat for spotlighting this for me!)
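
To make that last point concrete, a quick numeric sketch (my own toy example, not tied to however John defines things): the weights 1/(n·ln²n) are summable, so they normalize to a perfectly good distribution over a countable space, but the entropy partial sums grow without bound.

```python
import numpy as np

# p_n proportional to 1/(n * ln(n)^2) for n >= 2: a valid distribution over a
# countable sample space whose Shannon entropy diverges. The truncated entropies
# below keep climbing (roughly like log(log(N))) and never settle.
for N in (10**3, 10**5, 10**7):
    n = np.arange(2, N, dtype=float)
    w = 1.0 / (n * np.log(n) ** 2)
    p = w / w.sum()                          # normalize the truncation
    print(N, float(np.sum(-p * np.log(p))))  # grows without bound as N grows
```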

Comment by Lorxus on We’re not as 3-Dimensional as We Think · 2024-08-06T17:36:41.083Z · LW · GW

> most of them are small and probably don’t have the mental complexity required to really grasp three dimensions

Foxes and ferrets strike me as two obvious exceptions here, and indeed, we see both being incredibly good at getting into, out of, and around spaces, sometimes in ways that humans might find unexpected.

Comment by Lorxus on Davidad's Bold Plan for Alignment: An In-Depth Explanation · 2024-08-02T16:46:33.048Z · LW · GW

, and here]

This overleaf link appears to be restricted-access-only?

Comment by Lorxus on tlevin's Shortform · 2024-07-31T19:03:44.450Z · LW · GW

As someone who's spent meaningful amounts of time at LH during parties, absolutely yes. You successfully made it architecturally awkward to have large conversations, but that's often cashed out as "there's a giant conversation group in and totally blocking [the Entry Hallway Room of Aumann]/[the lawn between A&B]/[one or another firepit and its surrounding walkways]; that conversation group is suffering from the obvious described failure modes, but no one in it is sufficiently confident or agentic or charismatic to successfully break out into a subgroup/subconversation."

I'd recommend quiet music during parties? Or maybe even just a soundtrack of natural noises - birdsong and wind? rain and thunder? - to serve the purpose instead.

Comment by Lorxus on Whiteboard Pen Magazines are Useful · 2024-07-31T05:24:59.662Z · LW · GW

I liked this post so much that I made my own better Lesser Scribing Artifact and I'm preparing a post meant to highlight the differences between my standard and yours. Cheers!

Comment by Lorxus on Koan: divining alien datastructures from RAM activations · 2024-07-21T22:10:58.819Z · LW · GW

> Why do you need to be certain? Say there's a screen showing a nice "high-level" interface that provides substantial functionality (without directly revealing the inner workings, e.g. there's no shell). Something like that should be practically convincing.

Then whatever that's doing is a constraint in itself, and I can start off by going looking for patterns of activation that correspond to e.g. simple-but-specific mathematical operations that I can actuate in the computer.

> I'm unsure about that, but the more pertinent questions are along the lines of "is doing so the first (in understanding-time) available, or fastest, way to make the first few steps along the way that leads to these mathematically precise definitions?" The conjecture here is "yes".

Maybe? But I'm definitely not convinced. Maybe for idealized humanesque minds, yes, but for actual humans, if your hypothesis were correct, Euler would not have had to invent topology in the 1700s, for instance.

Comment by Lorxus on A simple model of math skill · 2024-07-21T19:50:04.250Z · LW · GW

I don't have much to say except that this seems broadly correct and very important in my professional opinion. Generating definitions is hard, and often depends subtly/finely on the kinds of theorems you want to be able to prove (while still having definitions that describe the kind of object you set out to describe, and not have them be totally determined by the theorem you want - that would make the objects meaningless!). Generating frameworks out of whole cloth is harder yet; understanding them is sometimes easiest of all.

Comment by Lorxus on Koan: divining alien datastructures from RAM activations · 2024-07-21T04:57:27.425Z · LW · GW

Thinking about it more, I want to poke at the foundations of the koan. Why are we so sure that this is a computer at all? What permits us this certainty, that this is a computer, and that it is also running actual computation rather than glitching out?

> B: Are you basically saying that it's a really hard science problem?

From a different and more conceit-cooperative angle: it's not just that this is a really hard science problem, it might be a maximally hard science problem. Maybe too hard for existing science to science at! After all, hash functions are meant to be maximally difficult, computationally speaking, to invert (and in fact impossible to invert in the general case; it's merely very hard to generate hash collisions).

> Another prompt: Suppose you leave Green alone for six months, and when you come back, it turns out ze's figured out what hash tables are. What do you suppose might have happened that led to zer figuring out hash tables?

That Green has figured out how to probe the RAM properly, and how to assign meaning to the computations, and that zer Alien Computer is doing the same-ish thing that mine is?

> Although you never do figure out what algorithm is running on the alien computer, it happens to be the case that in the year 3000, the algorithm will be called "J-trees".

It would follow, to me, that I should be looking for treelike patterns of activation, and in particular that maybe this is some application of the principles inherent to hash sort or radix sort to binary self-balancing trees, likely in memory address assignment, as might be necessary/worthwhile in a computer of a colossal scale such as we won't even get until Y3k?

> B: It sounds nice, but it kind of just sounds like you're recommending mindfulness or something.

I'd disagree with Blue here! This is more like cleaning and oiling a machine and then running a quick test of function before setting it running to carefully watch it do its thing!

> ...However, we can put the metaphysicist's ramblings in special quotes:

Doing so still never gets you to the idea of a homology sphere, and it isn't enough to point towards the mathematically precise definition of an infinite 3-manifold without boundary.

Comment by Lorxus on Lorxus's Shortform · 2024-07-20T21:20:52.646Z · LW · GW

EDIT: I and the person who first tried to render this SHAPE for me misunderstood its nature.

Comment by Lorxus on Lorxus's Shortform · 2024-07-20T01:39:59.178Z · LW · GW

You maybe got stuck in some of the many local optima that Nurmela 1995 runs into. Genuinely, the best sphere code for 9 points in 4 dimensions is known to have a minimum angular separation of ~1.408 radians, for a worst-case cosine similarity of about 0.162.

You got a lot further than I did with my own initial attempts at random search, but you didn't quite find it, either.
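
For anyone who wants to poke at this themselves, here's a rough repulsion-style search sketch (illustrative only, and it happily falls into the same kind of local optima, which is rather the point):

```python
import numpy as np

def sphere_code_search(n=9, d=4, steps=5000, lr=0.02, restarts=10, seed=0):
    """Crude search for n unit vectors in R^d maximizing the minimum pairwise angle."""
    rng = np.random.default_rng(seed)
    best_angle = -np.inf
    for _ in range(restarts):
        X = rng.normal(size=(n, d))
        X /= np.linalg.norm(X, axis=1, keepdims=True)
        for _ in range(steps):
            G = X @ X.T
            np.fill_diagonal(G, -np.inf)
            i, j = np.unravel_index(np.argmax(G), G.shape)  # current closest pair
            X[i] -= lr * X[j]                               # push them apart
            X[j] -= lr * X[i]
            X /= np.linalg.norm(X, axis=1, keepdims=True)
        G = X @ X.T
        np.fill_diagonal(G, -1.0)
        best_angle = max(best_angle, np.degrees(np.arccos(np.clip(G.max(), -1, 1))))
    return best_angle

print(sphere_code_search())  # best known for (9, 4) is ~80.7 degrees (cosine ~0.162)
```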

Comment by Lorxus on Lorxus's Shortform · 2024-07-19T21:24:58.210Z · LW · GW

On @TsviBT's recommendation, I'm writing this up quickly here.

re: the famous graph from https://transformer-circuits.pub/2022/toy_model/index.html#geometry with all the colored bands, plotting "dimensions per feature in a model with superposition", there look to be 3 obvious clusters outside of any colored band and between 2/5 and 1/2, the third of which is directly below the third inset image from the right. All three of these clusters are at 1/(1-S) ~ 4.

A picture of the plot, plus a summary of my thought processes for about the first 30 seconds of looking at it from the right perspective:

In particular, the clusters appear to correspond to dimensions-per-feature of about 0.44~0.45, that is, 4/9. Given the Thomson problem-ish nature of all the other geometric structures displayed, and being professionally dubious that there should be only such structures of subspace dimension 3 or lower, my immediate suspicion since last week when I first thought about this is that the uncolored clusters should be packing 9 vectors as far apart from each other as possible on the surface of a 3-sphere in some 4D subspace.

In particular, mathematicians have already found a 23-celled 4-tope with 9 vertices (which I have made some sketches of) where the angular separation between vertices is ~80.7°: http://neilsloane.com/packings/index.html#I . Roughly, the vertices are: the north pole of S^3; on a slice just (~9°) north of the equator, the vertices of a tetrahedron "pointing" in some direction; on a slice somewhat (~19°) north of the south pole, the vertices of a tetrahedron "pointing" dually to the previous tetrahedron. The edges are given by connecting vertices in each layer to the vertices in the adjacent layer or layers. Cross sections along the axis I described look like growing tetrahedra, briefly become various octahedra as we cross the first tetrahedron, and then resolve to the final tetrahedron before vanishing.

I therefore predict that we should see these clusters of 9 embedding vectors lying roughly in 4D subspaces taking on pretty much exactly the 23-cell shape mathematicians know about, to the same general precision as we'd find (say) pentagons or square antiprisms, within the model's embedding vectors, when S ~ 3/4.

Potentially also there's other 3/f, 4/f, and maybe 5/f; given professional experience I would not expect to see 6+/f sorts of features, because 6+ dimensions is high-dimensional and the clusters would (approximately) factor as products of lower-dimensional clusters already listed. There's a few more clusters that I suspect might correspond to 3/7 (a pentagonal bipyramid?) or 5/12 (some terrifying 5-tope with 12 vertices, I guess), but I'm way less confident in those.
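
A sketch of how I'd actually check the prediction, assuming access to the toy model's feature-embedding matrix (call it W, one row per feature; the names here are hypothetical): take a candidate cluster of 9 features, fit its best 4D linear subspace, and see whether the minimum pairwise angle comes out near the 23-cell's ~80.7°.

```python
import numpy as np

def check_cluster(W, idxs):
    """W: (n_features, n_hidden) feature embeddings; idxs: the 9 features in one cluster."""
    V = W[np.asarray(idxs)]                           # (9, n_hidden)
    _, S, Vt = np.linalg.svd(V, full_matrices=False)
    frac_in_4d = (S[:4] ** 2).sum() / (S ** 2).sum()  # ~1.0 if the cluster really is 4D
    P = V @ Vt[:4].T                                  # coordinates in the top-4 subspace
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    G = P @ P.T
    np.fill_diagonal(G, -1.0)
    min_angle = np.degrees(np.arccos(np.clip(G.max(), -1.0, 1.0)))
    return frac_in_4d, min_angle                      # hoping for ~1.0 and ~80.7
```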

A hand-drawn rendition of the 23-cell in whiteboard marker:

Comment by Lorxus on Natural Latents: The Math · 2024-07-18T06:09:44.787Z · LW · GW

As I also said in person, very much so!

Comment by Lorxus on Natural Latents: The Math · 2024-07-17T18:04:54.002Z · LW · GW

> Probabilities of zero are extremely load-bearing for natural latents in the exact case...

Dumb question: Can you sketch out an argument for why this is the case and/or why this has to be the case? I agree that ideally/morally this should be true, but if we're already accepting a bounded degree of error elsewhere, what explodes if we accept it here?

Comment by Lorxus on [deleted post] 2024-07-13T01:06:14.834Z

Yeah. I agree that it's a huge problem that I can't immediately point to what the output might be, or why it might cause something helpful downstream.

Comment by Lorxus on [deleted post] 2024-07-13T00:10:19.459Z

I'm in a weird situation here: I'm not entirely sure whether the community considers the Learning Theory Agenda to be the same alignment plan as The Plan (which is arguably not a plan at all but he sure thinks about value learning!), and whether I can count things like the class of scalable oversight plans which take as read that "human values" are a specific natural object. Would you at least agree that those first two (or one???) rely on that?

Comment by Lorxus on [deleted post] 2024-07-13T00:06:34.220Z

No; removed.

Comment by Lorxus on [deleted post] 2024-07-13T00:05:57.871Z

I guess in that case I'd worry that you go and look at the features and come away with some impression of what those features represent and it turns out you're totally wrong? I keep coming back to the example of a text-classifier where you find """the French activation directions""" except it turns out that only one of them is for French (if any at all) and the others are things like "words ending in x and z" or "words spoken by fancy people in these novels and quotes pages".

Comment by Lorxus on [deleted post] 2024-07-12T23:56:05.033Z

> Like, you might think the more things you know about smart AIs, the easier it would be to build them - where does this argument break?

I mean... it doesn't? I guess I mostly think that either what I'm working on is totally off the capabilities pathway, or, if it's somehow on one, then I don't think whatever minor framework improvement or suggestion for a mental frame that I come up with is going to push things all that far. Which I agree is kind of a depressing thing to expect of your work, but I'd argue those are the two most likely outcomes here. Does that address that?

Comment by Lorxus on [deleted post] 2024-07-12T23:45:48.176Z

Almost certainly this is way too ambitious for me to do, but I don't know what "starting a framework" would look like. I guess I don't have as full an understanding as I'd like of what MATS expects me to come up with/what's in-bounds? I'd want to come up with a paper or something out of this but I'm also not confident in my ability to (for instance) fully specify the missing pieces of John's model. Or even one of his missing pieces.

Comment by Lorxus on [deleted post] 2024-07-12T23:43:24.683Z

I had thought that that would be implicit in why I'm picking up those skills/that knowledge? I agree that it's not great that I'm finding that some of my initial ideas for things to do are infeasible or unhelpful such that I don't feel like I have concrete theorems to want to try to prove here, or specific experiments I expect to want to run. I think a lot of next week is going to be reading up on natural latents/abstractions even more deeply than before when I was learning about them previously and trying to find somewhere a proof needs to go.

Comment by Lorxus on [deleted post] 2024-07-12T23:37:50.023Z

My problem here is that the sketched-out toy model in the post is badly badly underspecified. AFAIK John hasn't, for instance, thought about whether a different clustering model might be a better pick, and the entire post is a subproblem of trying to figure out how interoperable world-models would have to work. "Stress-test" is definitely not the right word here. "Specify"? "Fill in"? "Sketch out"? "Guess at"? Kind of all of it needs fleshing out.

Comment by Lorxus on [deleted post] 2024-07-12T23:34:49.193Z

This is helpful. I'm going to make a list of things I think I could get done in somewhere between a few days and like 2 weeks that I think would advance my desire to put together a more complete+rigorous theory of semantics.

Comment by Lorxus on [deleted post] 2024-07-12T23:32:49.077Z

Fixed but I'm likely removing that part anyway.

Comment by Lorxus on [deleted post] 2024-07-12T23:31:53.579Z

I kept trying to rewrite this part and it kept coming out too long. Basically - I would want the alife agents to be able to definitely agree on spacetime nearness and the valuableness of some objects (like food) and for them to be able to communicate (?in some way?) and to have clusterer-powered ontologies that maybe even do something like have their initializations inherited by subsequent generations of the agents.

That said, like I'm about to say on another comment, that project is way too ambitious.

Comment by Lorxus on [deleted post] 2024-07-12T23:27:13.390Z

Makes sense. That's also not ideal because, for personal reasons you already know of, I have no idea what my pace of work on this generally will be.

Comment by Lorxus on [deleted post] 2024-07-12T23:25:41.271Z

I agree that those three paragraphs are bloated. My issue is this - I don't yet know which of those three branches is true (natural abstractions exist all the time vs. NAs can exist but only if you put them there vs. NAs do not, in general, exist, and they break immediately) but whichever it is, I think a better theory of semantics would help tell us which one it is, and then also be a necessary prerequisite to the obvious resulting plan.

Comment by Lorxus on [deleted post] 2024-07-12T23:23:42.330Z

I realized I wasn't super clear about which part was which. I agree that "is scaling enough" is a major crux for me and I'd be way way more afraid if it looked like scaling were sufficient on its own; that part, however, is about "do we actually need to get alignment basically exactly right". Does that change your understanding?

Comment by Lorxus on [deleted post] 2024-07-12T23:17:54.336Z

writing a bit about this now.

Comment by Lorxus on [deleted post] 2024-07-12T23:11:36.852Z

added

Comment by Lorxus on [deleted post] 2024-07-12T23:08:43.384Z

I was trying to address the justification for why I'm here doing this instead of someone else doing something else? I might have been reading something about neglectedness from the old rubric. I could totally just cut it.

Comment by Lorxus on [deleted post] 2024-07-12T23:04:06.804Z

should be more clear, yeah, something like "not only human values but also how we'd check that..."

Comment by Lorxus on [deleted post] 2024-07-12T23:03:17.709Z

For 1., we could totally find out that our AGI just plain cannot pick up on what a car or a dog is, and only classify/recognize their parts (or by halves, or just always misclassify them) but then not have any sense of what's going on to cause it or how to fix it.

For 2. ... I have no idea? I feel like that might be out of scope for what I want to think about. I don't even know how I'd start attacking that problem in full generality or even in part.

Comment by Lorxus on [deleted post] 2024-07-12T22:52:43.257Z

I think I'm missing something. What does the story look like, where we have some feature we're totally unsure of what it signifies, but we're very sure that the model is using it?

Or from the other direction, I keep coming back to Jacob's transformer with like 200 orthogonal activation directions that all look to make the model write good code. They all seemed to be producing about the exact same activation pattern 8 layers on. It didn't seem like his model was particularly spoiled for activation space - so what is it all those extra directions were actually picking up on?

Comment by Lorxus on [deleted post] 2024-07-12T22:48:00.437Z

It seems to me like asking too much to think that there won't be shared natural ontologies between humans (construed broadly) and ML models, but that we can still make sure that, with the right pretraining regimen/dataset choice/etc., the model will end up with a human ontology, and also that this process is something that admits any amount of error, and also that this can be done in a way that's not trivially jailbreakable.