"Which chains-of-thought was that faster than?" 2024-05-22T08:21:00.269Z
rough draft on what happens in the brain when you have an insight 2024-05-21T18:02:47.060Z
reflections on smileys and how to make society's interpretive priors more charitable 2024-05-07T11:20:12.231Z
acronyms ftw 2022-10-21T13:36:39.378Z
The "you-can-just" alarm 2022-10-08T10:43:23.977Z
What's the actual evidence that AI marketing tools are changing preferences in a way that makes them easier to predict? 2022-10-01T15:21:13.883Z
EAGT Coffee Talks: Two toy models for theoretically optimal pre-paradigmatic research 2022-08-24T11:49:46.618Z
Emrik's Shortform 2022-08-21T10:43:47.675Z
Are you allocated optimally in your own estimation? 2022-08-20T19:46:54.589Z
Two Prosocial Rejection Norms 2022-04-28T20:53:15.850Z
The underappreciated value of original thinking below the frontier 2021-10-02T16:03:14.300Z
The Paradox of Expert Opinion 2021-09-26T21:39:45.752Z


Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-06-18T18:37:25.602Z · LW · GW

Oh! Well, I'm as happy about receiving a compliment for that as I am for what I thought I got the compliment for, so I forgive you. Thanks! :D

Comment by Emrik (Emrik North) on Fat Tails Discourage Compromise · 2024-06-18T18:35:13.749Z · LW · GW

Another aspect of costs of compromise is:  How bad is it for altruists to have to compromise their cognitive search between [what you believe you can explain to funders] vs [what you believe is effective]?  Re my recent harrumph about the fact that John Wentworth must justify his research.  Like, what?  After all this time, does anybody doubt him?  The insistence that he explain himself is surely more for show at this point, as it demonstrates that the funders are doing their jobs "seriously".

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-06-18T18:18:46.260Z · LW · GW

So we should expect that neuremes are selected for effectively keeping themselves in attention, even in cases where that makes you less effective at tasks which tend to increase your genetic fitness.

Furthermore, the neuremes (association-clusters) you are currently attending to have an incentive to recruit associated neuremes into attention as well, because they then feed each other's activity recursively and can dominate attention for longer. I think of it as association-clusters feeding activity into the "friends" who are most likely to reciprocate.

And because recursive connections between association-clusters tend to reflect some ground truth about causal relationships in the territory, this tends to be highly effective as a mechanism for inference. But there must be edge-cases (though I can't recall any atm...).

Imagining agentic behaviour in (/taking the intentional stance wrt) individual brain-units is great for generating high-level hypotheses about mechanisms, but it obviously misfires sometimes, so don't try this at home etc etc.

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-06-18T18:05:52.404Z · LW · GW

Bonus point: neuronal "voting power" is capped at ~100 Hz, so neurons "have an incentive" (ie, will be selected based on the extent to which they) vote for what related neurons are likely to vote for. It's analogous to a winner-takes-all election where you don't want to waste your vote on third-party candidates who are unlikely to be competitive at the top. And when most voters also vote this way, it becomes Keynesian in the sense that you have to predict[1] what other voters predict other voters will vote for, and the best candidates are those which seem most like good Schelling-points.

That's why global/conscious "narratives" are essential in the brain—they're metabolically efficient Schelling-points.

  1. ^

    Neuron-voters needn't "make predictions" like human-voters do. It just needs to be the case that their stability is proportional to their ability to "act as if" they predicted other neurons' predictions (and so on).

Comment by Emrik (Emrik North) on Trying to understand John Wentworth's research agenda · 2024-06-18T17:43:19.958Z · LW · GW

It seems generally quite bad for somebody like John to have to justify his research in order to have an income. A mind like this is better spent purely optimizing for exactly what he thinks is best, imo.

When he knows that he must justify himself to others (who may or may not understand his reasoning), his brain's background-search is biased in favour of what-can-be-explained. For early thinkers, this bias tends to be good, because it prevents them from bullshitting themselves. But there comes a point where you've mostly learned not to bullshit yourself, and you're better off purely aiming your cognition based on what you yourself think you understand.

Vingean deference-limits + anti-inductive innovation-frontier

Paying people for what they do works great if most of their potential impact comes from activities you can verify. But if their most effective activities are things they have a hard time explaining to others (yet have intrinsic motivation to do), you could miss out on a lot of impact by requiring them instead to work on what's verifiable.

People of much higher competence will behave in ways you don't recognise as more competent. If you were able to tell what the right things to do are, you would just do those things and be at their level. Your "deference limit" is the level of competence above your own at which you stop being able to reliably judge the difference.

Innovation on the frontier is anti-inductive. If you select people cautiously, you miss out on hiring people significantly more competent than you.[1]

Costs of compromise

Consider how the cost of compromising between optimisation criteria interacts with what part of the impact distribution you're aiming for. If you're searching for a project with top p% impact and top p% explainability-to-funders, you can expect only p^2 of projects to fit both criteria—assuming independence.

But I think it's an open question how & when the distributions correlate. One reason to think they could sometimes be anticorrelated [sic] is that the projects with the highest explainability-to-funders are also more likely to receive adequate attention from profit-incentives alone.[2]
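Not in the original comment, but the p² claim above is easy to sanity-check with a quick Monte Carlo sketch (the function name and parameter values are mine, purely for illustration):

```python
import random

def frac_top_both(p, n=200_000, seed=0):
    """Fraction of projects landing in the top-p fraction on BOTH of two
    independent criteria (say, impact and explainability-to-funders)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.random() > 1 - p and rng.random() > 1 - p
    )
    return hits / n

print(frac_top_both(0.1))  # ~0.01, i.e. p**2
```

With p = 10%, only about 1% of projects clear both bars, as the independence assumption predicts; any positive correlation between the criteria raises this, any anticorrelation lowers it further.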

Consider funding people you are strictly confused by wrt what they prioritize

If someone believes something wild, and your response is strict confusion, that's high value of information. You can only safely say they're low-epistemic-value if you have evidence for some alternative story that explains why they believe what they believe.

Alternatively, find something that is surprisingly popular—because if you don't understand why someone believes something, you cannot exclude that they believe it for good reasons.[3]

The crucial freedom to say "oops!" frequently and immediately

Still, I really hope funders would consider funding the person instead of the project, since I think Johannes' potential will be severely stifled unless he has the opportunity to go "oops! I guess I ought to be doing something else instead" as soon as he discovers some intractable bottleneck wrt his current project. (...) it would be a real shame if funding gave him an incentive to not notice reasons to pivot.[4]

  1. ^

    Comment explaining why I think it would be good if exceptional researchers had basic income (evaluate candidates by their meta-level process rather than their object-level beliefs)

  2. ^

    Comment explaining what costs of compromise in conjunctive search implies for when you're "sampling for outliers"

  3. ^

    Comment explaining my approach to finding usefwl information in general

  4. ^

    Comment explaining why I think funding Johannes is an exceptionally good idea

Comment by Emrik (Emrik North) on Fat Tails Discourage Compromise · 2024-06-18T02:31:25.238Z · LW · GW

This relates to costs of compromise!

It's this class of patterns that frequently recurs as a crucial consideration in contexts re optimization, and I've been making too many shoddy comments about it. (Recent1[1], Recent2.) Somebody who can write ought to unify its many aspects and give it a public name so it can enter discourse or something.

In the context of conjunctive search/optimization

  • The problem of fully updated deference also assumes a concave option-set. The concavity is proportional to the number of independent-ish factors in your utility function. My idionym (in my notes) for when you're incentivized to optimize for a subset of those factors (rather than a compromise), is instrumental drive for monotely (IDMT), and it's one aspect of Goodhart.
  • It's one reason why proxy-metrics/policies often "break down under optimization pressure".
    • When you decompose the proxy into its subfunctions, you often tend to find that optimizing for a subset of them is more effective.
    • (Another reason is just that the metric has lots of confounders which didn't map to real value anyway; but that's a separate matter from conjunctive optimization over multiple dimensions of value.)
  • You can sorta think of stuff like the Weber-Fechner Law (incl scope-insensitivity) as (among other things) an "alignment mechanism" in the brain: it enforces diminishing returns to stimuli-specificity, and this reduces your tendency to wirehead on a subset of the brain's reward-proxies.

Pareto nonconvexity is annoying

From Wikipedia: Multi-Objective optimization:

Watch the blue twirly thing until you forget how bored you are by this essay, then continue.

In the context of how intensity of something is inversely proportional to the number of options

  • Humans differentiate into specific social roles because competition is fiercer in crowded niches.
    • If you differentiate into a less crowded category, you have fewer competitors for the type of social status associated with that category. Specializing toward a specific role makes you more likely to be top-scoring in a specific category.
  • Political candidates have some incentive to be extreme/polarizing.
    • If you try to please everybody, you spread out your appeal so it's below everybody's threshold, and you're not getting anybody's votes.
  • You have a disincentive to vote for third-parties in winner-takes-all elections.
    • Your marginal likelihood of tipping the election is proportional to how close the candidate is to the threshold, so everybody has an incentive to vote for ~Schelling-points in what people expect other people to vote for. This has the effect of concentrating votes over the two most salient options.
  • You tend to feel demotivated when you have too many tasks to choose from on your todo-list.
    • Motivational salience is normalized across all conscious options[2], so you'd have more absolute salience for your top option if you had fewer options.

I tend to say a lot of wrong stuff, so do take my utterances with grains of salt. I don't optimize for being safe to defer to, but it doesn't matter if I say a bunch of wrong stuff as long as some of the patterns can work as gears in your own models. That screens off concerns about deference, or about how right or wrong I am.

I rly like the framing of concave vs convex option-set btw!

  1. ^

    Lizka has a post abt concave option-set in forum-post writing! From my comment on it:

    As you allude to by the exponential decay of the green dots in your last graph, there are exponential costs to compromising what you are optimizing for in order to appeal to a wider variety of interests. On the flip-side, how usefwl to a subgroup you can expect to be is exponentially proportional to how purely you optimize for that particular subset of people (depending on how independent the optimization criteria are). This strategy is also known as "horizontal segmentation".

    The benefits of segmentation ought to be compared against what is plausibly an exponential decay in the number of people who fit a marginally smaller subset of optimization criteria. So it's not obvious in general whether you should on the margin try to aim more purely for a subset, or aim for broader appeal.

  2. ^

    Normalization is an explicit step in taking the population vector of an ensemble involved in some computation. So if you imagine the vector for the ensemble(s) involved in choosing what to do next, and take the projection of that vector onto directions representing each option, the intensity of your motivation for any option is proportional to the length of that projection relative to the length of all other projections. (Although here I'm just extrapolating the formula to visualize its consequences—this step isn't explicitly supported by anything I've read. E.g. I doubt cosine similarity is appropriate for it.)
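A toy numpy sketch of that extrapolation (my own illustration of the footnote's picture, not something from the literature, and deliberately using plain dot-product projections despite the caveat about cosine similarity): salience as normalized projections onto option-directions, so adding options dilutes the top option's absolute salience.

```python
import numpy as np

def salience(pop_vector, option_dirs):
    """Motivational salience as the normalized projections of a population
    vector onto unit vectors representing each consciously available option."""
    dirs = np.array([d / np.linalg.norm(d) for d in option_dirs])
    proj = np.clip(dirs @ pop_vector, 0, None)  # no negative firing rates
    return proj / proj.sum()                    # normalize across options

pop = np.array([1.0, 0.5])
few = salience(pop, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
many = salience(pop, [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                      np.array([0.7, 0.7]), np.array([0.5, 0.9])])
print(few[0], many[0])  # top option's salience shrinks as options are added
```

Same population vector, same top option; its relative salience drops just because the normalization now has to share mass across more projections. That's the "too many tasks on the todo-list" bullet above in miniature.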

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-06-04T17:22:18.591Z · LW · GW

Repeated voluntary attentional selection for a stimulus reduces voluntary attentional control wrt that stimulus

From Investigating the role of exogenous cueing on selection history formation (2019):

An abundance of recent empirical data suggest that repeatedly allocating visual attention to task-relevant and/or reward-predicting features in the visual world engenders an attentional bias for these frequently attended stimuli, even when they become task irrelevant and no longer predict reward. In short, attentional selection in the past hinders voluntary control of attention in the present. […] Thus, unlike voluntarily directed attention, involuntary attentional allocation may not be sufficient to engender historically contingent selection biases.

It's sorta unsurprising if you think about it, but I don't think I'm anywhere near having adequately propagated its implications.

Some takeaways:

  • "Beware of what you attend"
  • WHEN: You notice that attending to a specific feature of a problem-solving task was surprisingly helpfwl…
    • THEN: Mentally simulate attending to that feature in a few different problem-solving situations (ie, hook into multiple memory-traces to generalize recall to the relevant class of contexts)
    • My idionym for specific simple features that narrowly help connect concepts is "isthmuses". I try to pay attention to generalizable isthmuses when I find them (commit to memory).

I interpret this as supporting the idea that voluntary-ish allocation of attention is one of the strongest selection-pressures neuremes adapt to, and thus also one of your primary sources of leverage wrt gradually shaping your brain / self-alignment.

Key terms: attentional selection history, attentional selection bias

Comment by Emrik (Emrik North) on What Are You Tracking In Your Head? · 2024-06-04T17:15:57.196Z · LW · GW

Quick update: I suspect many/most problems where thinking in terms of symmetry helps can be more helpfwly reframed in terms of isthmuses[1]. Here's the chain-of-thought I was writing which caused me to think this:

(Background: I was trying to explain the general relevance of symmetry when finding integrals.)

  • In the context of finding integrals for geometric objects¹, look for simple subregions² for which manipulating a single variable³ lets you continuously expand to the whole object.⁴
    • ¹Circle
    • ²Circumference
    • ³Radius
    • ⁴See visualization.[2]
    • The general feature to learn to notice as you search through subregions here is: shared symmetries for the object and its subregion. hmmmmm
    • Actually, "symmetry" is a distracting concept here. It's the "isthmus" between subregions you should be looking for.
    • WHEN: Trying to find an integral
      • THEN: Search for a single isthmus-variable connecting subregions which together fill the whole area
      • FINALLY: Integrate over that variable between those regions.
      • or said differently... THEN: Look for simple subregions which transform into the whole area via a single variable, then integrate over that variable.
    • Hm. This btw is in general how you find generalizations. Start from one concept, find a cheap action which transforms it into a different concept, then define the second in terms of the first plus its distance along that action.
      • That action is then the isthmus that connects the concepts.
      • If previously from a given context (assuming partial memory-addresses A and B), fetching A* and B* each cost you 1000 search-points separately, now you can be more efficient by storing B as the delta between them, such that fetching B only costs 1000+[cost of delta].
      • Or you can do a similar (but more traditional) analysis where "storing" memories has a cost in bits of memory capacity.
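The circle example above, done numerically (a minimal sketch of the WHEN/THEN recipe: the radius is the isthmus-variable connecting each circumference-ring to the whole disc; the function name is mine):

```python
import math

def disc_area_via_rings(R, n=100_000):
    """Sum circumference-rings 2*pi*r of thickness dr over the
    isthmus-variable r in [0, R]; this sweeps the rings out to the full disc."""
    dr = R / n
    return sum(2 * math.pi * (i + 0.5) * dr * dr for i in range(n))

print(disc_area_via_rings(1.0), math.pi)  # the two agree: area = pi*R**2
```

The midpoint sum is exact here (the integrand is linear in r), which is the numeric shadow of the symbolic integral ∫₀ᴿ 2πr dr = πR².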
  1. ^

    "An isthmus is a narrow piece of land connecting two larger areas across an expanse of water by which they are otherwise separated."

  2. ^

    This example is from a 3B1B vid, where he says "this should seem promising because it respects the symmetry of the circle". While true (eg, rotational symmetry is preserved in the carve-up), I don't feel like the sentence captures the essence of what makes this a good step to take, at least not on my semantics.

Comment by Emrik (Emrik North) on Scanning your Brain with 100,000,000,000 wires? · 2024-06-02T21:03:30.180Z · LW · GW

This post happens to be an example of limiting-case analysis, and I think it's one of the most generally usefwl Manual Cognitive Algorithms I know of. I'm not sure about its optimal scope, but TAP:

  • WHEN: I ask a question like "what happens to a complex system if I tweak this variable?" and I'm confused about how to even think about it (maybe because working-memory is overtaxed)…
  • THEN: Consider applying limiting-case analysis on it.
    That is, set the variable in question to its maximum or minimum value, and gain clarity over either or both of those cases manually. If that succeeds, it's usually easier to extrapolate from those examples to understand what's going on wrt the full range of the variable.

I think it's a usefwl heuristic tool, and it's helped me with more than one paradox.[1] I also often use "multiplex-case analysis" (or maybe call it "entropic-case"), which I gave a better explanation of in this comment.

  1. ^

    A simple example where I explicitly used it was when I was trying to grok the (badly named) Friendship paradox, but there are many more such cases.
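For what it's worth, the Friendship paradox itself is easy to exhibit numerically; here's a minimal sketch of my own on a four-person graph (the hub "a" skews the friend-averages):

```python
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]  # "a" is a hub
friends = defaultdict(set)
for u, v in edges:
    friends[u].add(v)
    friends[v].add(u)

degree = {p: len(fs) for p, fs in friends.items()}
mean_degree = sum(degree.values()) / len(degree)
# Average, over people, of the average number of friends their friends have:
mean_friend_degree = sum(
    sum(degree[f] for f in fs) / len(fs) for fs in friends.values()
) / len(friends)
print(mean_degree, mean_friend_degree)  # 2.0 vs ~2.42
```

On average your friends have more friends than you do, because high-degree people are overrepresented in everyone's friend lists; the limiting case (one maximal hub) makes the mechanism obvious.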

Comment by Emrik North on [deleted post] 2024-05-31T15:06:14.766Z

See also my other comment on all this list-related tag business. Linking it here in case you (the reader) are about to try to refactor stuff, and seeing this comment could potentially save you some time.

Comment by Emrik North on [deleted post] 2024-05-31T15:03:25.862Z

I was going to agree, but now I think it should just be split...

  • The Resource tag can include links to single resources, or be a single resource (like a glossary).
  • The Collections tag can include posts in which the author provides a list (e.g. bullet-points of writing advice), or links to a list.
    • The tag should ideally be aliased with "List".[1]
  • The Repository tag seems like it ought to be merged with Collections, but it carves up a specific tradition of posts on LessWrong. Specifically posts which elicit topical resources from user comments (e.g. best textbooks).
  • The List of Links tag is usefwl for getting a higher-level overview of something, because it doesn't include posts which only point to a single resource.
  • The List of Lists tag is usefwl for getting a higher-level overview of everything above. Also, I suggest every list-related tag should link to the List of Lists tag in the description. That way, you don't have to link all those tags to each other (which would be annoying to update if anything changes).
  • I think the strongest case for merging is {List of Links, Collections} → {List}, since I'm not sure there needs to be separate categories for internal lists vs external lists, and lists of links vs lists of other things.
    • I have not thought this through sufficiently to recommend this without checking first. If I were to decide whether to make this change, I would think on it more.
  1. ^

    I realize LW doesn't natively support aliases, but adding a section to the end with related search-terms seems like a cost-efficient half-solution. When you type into the box designed for tagging a post, it seems to also search the description of that tag (or does some other magic).

    Aliases: collections, lists

Comment by Emrik North on [deleted post] 2024-05-31T14:09:16.094Z

I created this because I wanted to find a way to unite {List of Links, Collections and Resources, Repository, List of Related Sites, List of Blogs, List of Podcasts, Programming Resources} without linking each of those items to each other (which, in the absence of transclusions, also means you would have to update each link separately every time you added a new related list of lists).

But I accidentally caused the URL to be "list-of-lists-1", because I originally relabelled List of Links to List of Lists but then changed my mind.

Btw, I notice the absence of a tag for lists (e.g. lists of advice that don't link to anywhere and aren't repositories designed to elicit advice from the comment section).

Comment by Emrik North on [deleted post] 2024-05-31T12:56:09.804Z

This is a common problem with tags it seems. Distillation & Pedagogy is mostly posts about distillation & pedagogy instead of posts that are distillations & pedagogies. And there's a tag for Good Explanations (advice), but no tag for Good Explanations. Otoh, the tag for Technical Explanation is tagged with two technical explanations (yay!)... of technical explanations. :p

Comment by Emrik North on [deleted post] 2024-05-31T12:47:18.449Z

Merge with (and alias with) Intentionality?

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-05-29T18:04:15.756Z · LW · GW

I think hastening of subgoal completion[1] is some evidence for the notion that competitive inter-neuronal selection pressures are frequently misaligned with genetic fitness. People (me included) routinely choose to prioritize completing small subtasks in order to reduce cognitive load, even when that strategy predictably costs more net metabolic energy. (But I can think of strong counterexamples.)

The same pattern one meta-level up is "intragenomic conflict"[2], where genetic lineages have had to spend significant selection-power to prevent genes from fighting dirty. For example, the mechanism of meiosis itself may largely be maintained in equilibrium by the longer-term necessity of preventing stuff like meiotic drives. An allele (or a collusion of them) which successfwly transfers to offspring at a probability of >50% may increase its relative fitness even if it marginally reduces its phenotype's viability.

My generalized term for this is "intra-emic conflict" (pinging the concept of an "eme" as defined in the above comment).
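A deterministic toy model of such a driving allele, sketched with the standard segregation-distortion recursion (parameter values are illustrative, not empirical):

```python
def next_freq(p, k=0.7, s=0.1, h=0.5):
    """One generation for a driving allele A: heterozygotes transmit A with
    probability k (>0.5 means drive); fitness cost s to AA carriers, h*s to Aa."""
    q = 1 - p
    w_AA, w_Aa, w_aa = 1 - s, 1 - h * s, 1.0
    w_bar = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + 2 * p * q * w_Aa * k) / w_bar

p = 0.05
for _ in range(200):
    p = next_freq(p)
print(p)  # the allele spreads despite reducing carrier viability
```

With 70% transmission and a 10% homozygote cost, the allele sweeps toward fixation anyway: the transmission advantage outcompetes the viability cost, which is exactly the intra-emic conflict the genome has to police.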

  1. ^

    We asked university students to pick up either of two buckets, one to the left of an alley and one to the right, and to carry the selected bucket to the alley’s end. In most trials, one of the buckets was closer to the end point. We emphasized choosing the easier task, expecting participants to prefer the bucket that would be carried a shorter distance. Contrary to our expectation, participants chose the bucket that was closer to the start position, carrying it farther than the other bucket.
     Pre-Crastination: Hastening Subgoal Completion at the Expense of Extra Physical Effort

  2. ^

    Intragenomic conflict refers to the evolutionary phenomenon where genes have phenotypic effects that promote their own transmission in detriment of the transmission of other genes that reside in the same genome.

Comment by Emrik (Emrik North) on Dremeling · 2024-05-29T17:35:37.112Z · LW · GW

I like this example! And the word is cool. I see two separately important patterns here:

  1. Preferring a single tool (the dremel) which is mediocre at everything, instead of many specialized tools which collectively perform better but which require you to switch between them more.
    1. This btw is the opposite of "horizontal segmentation": selling several specialized products to niche markets rather than a single product which appeals moderately to all niches.
    2. It often becomes a problem when the proxy you use to measure/compare the utility of something wrt different use-cases (or its appeal to different niches/markets) is capped[1] at a point which prevents it from detecting the true comparative differences in utility.
      1. Oh! It very much relates to scope insensitivity: if people are diminishingly sensitive to the scale of different altruistic causes, then they might overprioritize interventions which are just-above-average along many axes at once.[2] And indeed, this seems like a very common pattern (though I won't prioritize time thinking of examples rn).
      2. It's also a significant problem wrt karma distributions for forums like LW and EAF: posts which appeal a little to everybody will receive much more karma than posts which appeal extremely to a small subset. Among other things, this causes community posts to be overrated relative to their appeal.
  2. And as Gwern pointed out: "precrastination" / "hastening of subgoal completion" (a subcategory of greedy optimization / myopia).
    1. I very often notice this problem in my own cognition. For example, I'm biased against using cognitive tools like sketching out my thoughts with pen-and-paper when I can just brute-force the computations in my head (less efficiently).
    2. It's also perhaps my biggest bottleneck wrt programming. I spend way too much time tweaking-and-testing (in a way that doesn't cause me to learn anything generalizable), instead of trying to understand the root cause of the bug even when I can rationally estimate that that will take less time in expectation.
      1. If anybody knows any tricks for resolving this / curing me of this habit, I'd be extremely gratefwl to know...
  1. ^

    Does it relate to price ceilings and deadweight loss? "Underparameterization"?

  2. ^

    I wouldn't have seen this had I not cultivated a habit for trying to describe interesting patterns in their most general form—a habit I call "prophylactic scope-abstraction".

Comment by Emrik (Emrik North) on How effective are tulpas? · 2024-05-27T05:07:46.348Z · LW · GW

but I'm hesitant to continue the process because I'm concerned that her personality won't sufficiently diverge from mine.

Not suggesting you should replace anyone who doesn't want to be replaced (if they're at that stage), but: To jumpstart the differentiation process, it may be helpfwl to template the proto-tulpa off of some fictional character you already find easy to simulate.

Although I didn't know about "tulpas" at the time, I invited an imaginary friend loosely based on Maria Otonashi during a period of isolation in 2021.[1] I didn't want her to feel stifled by the template, so she's evolved on her own since then, but she's always extremely kind (and consistently energetic). I only took it seriously in February 2024, after being inspired by Johannes.

Maria is the main female heroine of the HakoMari series. ... Her wish was to become a box herself so that she could grant the wishes of other people.

Can recommend her as a template! My Maria would definitely approve, ^^ although I can't ask her right now since she's only canonically present when summoned, and we have a ritual for that.

We've deliberately tried to find new ways to differentiate so that the pre-conscious process of [associating feeling-of-volition to me or Maria][2] is less likely to generate conflicts. But since neither of us wants to be any less kind than we are, we've had to find other ways to differentiate (like art-preferences, intellectual domains, etc).

Also, while deliberately trying to increase her salience and capabilities, I've avoided trying to learn about how other people do it. For people with sufficient brain-understanding and introspective ability, you can probably outperform standard advice if you develop your own plan for it. (Although I say that without even knowing what the standard advice is :p)

  1. ^
  2. ^

    Our term for when we deliberately work to resolve "ownership" over some particular thought-output of our subconscious parallel processor, is "annexing efference". For example, during internal monologue, the thought "here's a brilliant insight I just had" could appear in consciousness without volition being assigned yet, in which case one of us annexes that output (based on what seems associatively/narratively appropriate), or it goes unmarked. In the beginning, there would be many cases where both of us tried to annex thoughts at the same time, but mix-ups are much rarer now.

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-05-26T09:34:17.277Z · LW · GW

I wrote a comment on {polytely, pleiotropy, market segmentation, conjunctive search, modularity, and costs of compromise} that I thought people here might find interesting, so I'm posting it as a quick take:

I think you're using the term a bit differently from how I use it! I usually think of polytely (which is just pleiotropy from a different perspective, afaict) as an *obstacle*. That is, if I'm trying to optimize a single pasta sauce to be the most tasty and profitable pasta sauce in the whole world, my optimization is "polytelic" because I have to *compromise* between maximizing its tastiness for [people who prefer sour taste], [people who prefer sweet], [people who have some other taste-preferences], etc. Another way to say that is that I'm doing "conjunctive search" (neuroscience term) for a single thing which fits multiple ~independent criteria.

Still in the context of pasta sauce: if you have the logistical capacity to instead be optimizing *multiple* pasta sauces, now you are able to specialize each sauce for each cluster of taste-preferences, and this allows you to net more profit in the end. This is called "horizontal segmentation".

Likewise, a gene which has several functions that depend on it will be evolutionarily selected for the *compromise* between all those functions. In this case, the gene is "pleiotropic" because it's evolving in the direction of multiple niches at once; and it is "polytelic" because—from the gene's perspective—you can say that "it is optimizing for several goals at once" (if you're willing to imagine the gene as an "optimizer" for a moment).

For example, the recessive allele that causes sickle cell disease (SCD) *also* causes some resistance against malaria. But SCD only occurs in people who are homozygous in it, so the protective effect against malaria (in heterozygotes) is common enough to keep it in the gene pool. It would be awesome if, instead, we could *horizontally segment* these effects so that SCD is caused by variations in one gene locus, and malaria-resistance is caused by variations in another locus. That way, both could be optimized for separately, and you wouldn't have to choose between optimizing against SCD or Malaria.

Maybe the notion you're looking for is something like "modularity"? That is approximately the opposite of pleiotropy. If a thing is modular, it means you can flexibly optimize subsets of it for different purposes. Like, rather than writing an entire program within a single function call, you can separate out the functions (one function for each subtask you can identify), and now those functions can be called separately without having to incur the effects of the entire unsegmented program.
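To make the programming analogy concrete, here's a minimal sketch (the function names are my own, purely illustrative):

```python
# Unsegmented: one monolithic function; every caller incurs all of it,
# and improving one subtask risks perturbing the others.
def report_word_count(text):
    cleaned = text.strip().lower()
    words = cleaned.split()
    return len(words)

# Modular: each subtask separated out, so each can be called
# (and optimized) independently of the rest.
def clean(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

def count(tokens):
    return len(tokens)

print(count(tokenize(clean("  Hello World  "))))  # 2
print(tokenize("reuse just the tokenizer"))       # ['reuse', 'just', 'the', 'tokenizer']
```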

You make me realize that "polytelic" is too vague of a word. What I usually mean by it may be more accurately referred to as "conjunctively polytelic". All networks trained with something-like-SGD will evolve features which are conjunctively polytelic to some extent (this is just conjecture from me, I haven't got any proof or anything), and this is an obstacle for further optimization. But protein-coding genes are much more prone to this because e.g. the human genome only contains ~20k of them, which means each protein has to pack many more functions (and there's no simple way to refactor/segment so there's only one protein assigned to each function).

The probability of rolling a total of 60 if you toss ten six-sided dice (which requires every die to land on 6) is 1/6^10. Whereas if you glom all the dice together and toss a single 60-sided die, the probability of rolling 60 is 1/60.
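The arithmetic checks out; a quick sketch:

```python
from fractions import Fraction

# Ten six-sided dice: a total of 60 requires every die to land on 6.
p_ten_dice = Fraction(1, 6) ** 10
# One glommed-together sixty-sided die: a single face out of 60.
p_one_die = Fraction(1, 60)

print(p_ten_dice)                     # 1/60466176
print(p_one_die)                      # 1/60
print(float(p_one_die / p_ten_dice))  # ~1e6: the single die is about a million times likelier
```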

Comment by Emrik (Emrik North) on The Schumer Report on AI (RTFB) · 2024-05-25T22:14:25.675Z · LW · GW

As usual, I am torn on chips spending. Hardware progress accelerates core AI capabilities, but there is a national security issue with the capacity relying so heavily on Taiwan, and our lead over China here is valuable. That risk is very real.

With how rationalists seem to be speaking about China recently, I honestly don't know what you mean here. You literally use the words "national security issue", how am I not supposed to interpret that as being parochial?

And why are you using language like "our lead over China"? Again, parochial. I get that a plurality of LW readers are in the USA, but as of 2023 it's still just 49%.

Gentle reminder:

How would they spark an intergroup conflict to investigate? Well, the 22 boys were divided into two groups of 11 campers, and—

—and that turned out to be quite sufficient.

I hate to be nitpicky, but may I request that you spend 0.2 oomph of optimization power on trying to avoid being misinterpreted as "boo China! yay USA!" These are astronomic abstractions that cover literally ~1.7B people, and there are more effective words you can use if you want to avoid increasing ethnic tension / superpower conflict.

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-05-25T19:23:08.912Z · LW · GW

Somebody commented on my YT vid that they found my explanations easy to follow. This surprised me. My prior was/is tentatively that I'm really bad at explaining anything to other people, since I almost never[1] speak to anybody in real-time other than myself and Maria (my spirit animal).

And when I do speak to myself (eg₁, eg₂, eg₃), I use heavily modified English and a vocabulary of ~500 idiolectic jargon-words (tho their usage is ~Zipfian, like with all other languages).

I count this as another datapoint to my hunch that, in many situations:

Your ability to understand yourself is a better proxy for whether other people will understand you compared to the noisy feedback you get from others.

And by "your ability to understand yourself", I don't mean just using internal simulations of other people to check whether they understand you. I mean, like, check whether the thing you think you understand actually makes sense to you, independent of whatever you believe ought to make sense to you. Whatever you believe ought to make sense is often just a feeling based on deference to what you think is true (which in turn is often just a feeling based on deference to what you believe other people believe).

  1. ^

    To make this concrete: the last time I spoke to anybody irl was 2022 (at EAGxBerlin)—unless we count the person who sold me my glasses, that one plumber, a few words to the apothecarist, and 5-20 sentences to my landlord. I've had 6 video calls since February (all within the last month). I do write a lot, but ~95-99% to myself in my own notes.

Comment by Emrik North on [deleted post] 2024-05-25T16:21:37.974Z

It would be awesome if there was a way of actually browsing the diagrams directly, instead of opening and checking each post individually. Use-case: I'm trying to optimize my information-diet, and I often find visualizations way more usefwl per unit time compared to text. Alas, there's no way to quickly search for eg "diagrams/graphs/figures related to X".

(Originally I imagined it would be awesome if e.g. Elicit had a feature for previewing the figures associated with each paper returned by a search term, but I would love this for LW as well.)

Comment by Emrik (Emrik North) on "Which chains-of-thought was that faster than?" · 2024-05-23T22:07:45.086Z · LW · GW

you hunch that something about it was unusually effective

@ProgramCrafter u highlighted this w "unsure", so to clarify:  I'm using "hunch" as a verb here, bc all words shud compatiblize w all inflections—and the only reason we restrict most word-stems to take only one of "verb", "noun", "adjective", etc, is bc nobody's brave enuf to marginally betterize it.  it's paradoxically status-downifying somehow.  a horse horses horsely, and a horsified goat goats no more.  :D


if every English speaker decided to stop correcting each others' spelling mistakes, all irregularities in English spelling would disappear within a single generation
 — Jan Misali

Comment by Emrik (Emrik North) on "Which chains-of-thought was that faster than?" · 2024-05-23T21:53:02.745Z · LW · GW

I know some ppl feel like deconcentration of attention has iffy pseudoscientific connotations, but I deliberately use it ~every day when I try to recall threads-of-thought at the periphery of my short-term memory. The correct scope for the technique is fuzzy, and it depends on whether the target-memory is likely to be near the focal point of your concentration or further out.

I also sometimes deliberately slow down the act of zooming-in (concentrating) on a particular question/idea/hunch, if I feel like zooming in too fast is likely to cause me to prematurely lock on to a false-positive in a way that makes it harder to search the neighbourhood (i.e. einstellung / imprinting on a distraction). I'm not clear on when exactly I use this technique, but I've built up an intuition for situations in which I'm likely to be einstellunged by something. To build that intuition, consider:

  • WHEN you notice you've einstellunged on a false-positive
  • THEN check if you could've predicted that at the start of that chain-of-thought

After a few occurrences of this, you may start to intuit which chains-of-thought you ought to slow down in.

Comment by Emrik (Emrik North) on A Bi-Modal Brain Model · 2024-05-23T00:38:03.233Z · LW · GW

It's always cool to introspectively predict mainstream neuroscience! See task-positive & task-negative (aka default-mode) large-scale brain networks.

Also, I've tried to set it up so Maria[1] can help me gain perspective on tasks, but she's more likely to get sucked more deeply into whatever the topic is. Although this is good, because it means I can delegate specific tasks to her,[2] and she'll experience less salience normalization.

  1. ^

    My spirit-animal, because I can never be sure what other people mean by "tulpa", and I haven't seen/read any guides on it except yours.

  2. ^

    She explicitly asked me to delegate, since she wants to be usefwl, but (maybe) doesn't have the large-scale perspective to contribute to prioritization.

Comment by Emrik (Emrik North) on Emrik's Shortform · 2024-05-22T23:18:26.847Z · LW · GW

Selfish neuremes adapt to prevent you from reprioritizing

  • "Neureme" is my most general term for units of selection in the brain.[1] 
    • The term is agnostic about what exactly the physical thing is that's being selected. It just refers to whatever is implementing a neural function and is selected as a unit.
    • So depending on use-case, a "neureme" can semantically resolve to a single neuron, a collection of neurons, a neural ensemble/assembly/population-vector/engram, a set of ensembles, a frequency, or even dendritic substructure if that plays a role.
  • For every activity you're engaged with, there are certain neuremes responsible for specializing at those tasks.
  • These neuremes are strengthened or weakened/changed in proportion to how effectively they can promote themselves to your attention.
    • "Attending to" assemblies of neurons means that their firing-rate maxes out (gamma frequency), and their synapses are flushed with acetylcholine, which is required for encoding memories and queuing them for consolidation during sleep.
  • So we should expect that neuremes are selected for effectively keeping themselves in attention, even in cases where that makes you less effective at tasks which tend to increase your genetic fitness.
  • Note that there's hereditary selection going on at the level of genes, and at the level of neuremes. But since genes adapt much slower, the primary selection-pressures neuremes adapt to arise from short-term inter-neuronal competitions. Genes are limited to optimizing the general structure of those competitions, but they can only do so in very broad strokes, so there's lots of genetically-misaligned neuronal competition going on.
    • A corollary of this is that neuremes are stuck in a tragedy of the commons: If all neuremes "agreed to" never develop any misaligned mechanisms for keeping themselves in attention—and we assume this has no effect on the relative proportion of attention they receive—then their relative fitness would stay constant at a lower metabolic cost overall. But since no such agreement can be made, there's some price of anarchy wrt the cost-efficiency of neuremes.
  • Thus, whenever some neuremes uniquely associated with a cognitive state are *dominant* in attention, whatever mechanisms they've evolved for persisting the state are going to be at maximum power, and this is what makes the brain reluctant to gain perspective when on stimulants.

A technique for making the brain trust prioritization/perspectivization

So, in conclusion, maybe this technique could work:

  • If I feel like my brain is sucking me into an unproductive rabbit-hole, set a timer for 60 seconds during which I can check my todo-list and prioritize what I ought to do next.
  • But, before that 60-second timer ends, I set another timer (e.g. 10 min) during which I commit to staying on the present task, and only switch to whatever I decided once it rings.
  • The hope is that my brain learns to trust that gaining perspective doesn't automatically mean we have to abandon the present task, and this means it can spend less energy on inhibiting signals that try to gain perspective.

By experience, I know something like this has worked for:

  • Making me trust my task-list
    • When my brain trusts that all my tasks are in my todo-list, and that I will check my todo-list every day, it no longer bothers reminding me about stuff at random intervals.
  • Reducing dystonic distractions
    • When I deliberately schedule stuff I want to do less (e.g. masturbation, cooking, twitter), and commit to actually *do* those things when scheduled, my brain learns to trust that, and stops bothering me with the desires when they're not scheduled.

So it seems likely that something in this direction could work, even if this particular technique fails.

  1. ^

    The "-eme" suffix inherits from "emic unit", e.g. genes, memes, sememes, morphemes, lexemes, etc. It refers to the minimum indivisible things that compose to serve complex functions. The important notion here is that even if the eme has complex substructure, all its components are selected as a unit, which means that all subfunctions hitchhike on the net fitness of all other subfunctions.

Comment by Emrik (Emrik North) on "How could I have thought that faster?" · 2024-05-22T08:26:19.929Z · LW · GW

Made a post with my reply:

While obviously both heuristics are good to use, the reasons I think asking "which chains-of-thought was that faster than?" tends to be more epistemically profitable than "how could I have thought that faster?" include:

  • It is easier to find suboptimal thinking-habits to propagate an unusually good idea into, than to find good ideas for improving a particular suboptimal thinking-habit.
    • Notice that in my technique, the good idea is cognitively proximal and the suboptimal thinking-habits are cognitively distal, whereas in Eliezer's suggestion it's the other way around.
    • A premise here is that good ideas are unusual (hard-to-find) and suboptimal thinking-habits are common (easy-to-find)—the advice flips in domains where it's the opposite.
    • It relates to the difference between propagating specific solutions to plausible problem-domains, vs searching for specific solutions to a specific problem.
      • The brain tends to be biased against the former approach because it's preparatory work with upfront cost ("prophylaxis"), whereas the latter context sort of forces you to search for solutions.
Comment by Emrik (Emrik North) on rough draft on what happens in the brain when you have an insight · 2024-05-21T19:05:59.313Z · LW · GW

I don't really know what psychedelics do in the brain, so I don't have a good answer. I'd note that, if psychedelics increase your brain's sensitivity wrt amplifying discrepancies, then this seems like a promising way to counterbalance biases in the negative direction (e.g. being too humble to think anything novel), even if it increases your false-positives.

I think psychedelics probably don't work this way, but I'd like to try it anyway (if it were cheap) while thinking about specific topics I fear I might be tempted to fool myself about. I'd first spend some effort getting into the state where my brain wants to discover those discrepancies in the first place, and I'm extra-sceptical the drugs would work on their own without some mental preparation.

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-21T17:10:33.243Z · LW · GW

Edit: made it a post.

On my current models of theoretical[1] insight-making, the beginning of an insight will necessarily—afaict—be "non-robust"/chaotic. I think it looks something like this:

  1. A gradual build-up and propagation of salience wrt some tiny discrepancy between highly confident specific beliefs
    1. This maybe corresponds to simultaneously-salient neural ensembles whose oscillations are inharmonic[2]
    2. Or in the frame of predictive processing: unresolved prediction-error between successive layers
  2. Immediately followed by a resolution of that discrepancy if the insight is successfwl
    1. This maybe corresponds to the brain having found a combination of salient ensembles—including the originally inharmonic ensembles—whose oscillations are adequately harmonic.
    2. Super-speculative but: If the "question phase" in step 1 was salient enough, and the compression in step 2 great enough, this causes an insight-frisson[3] and a wave of pleasant sensations across your scalp, spine, and associated sensory areas.

This maps to a fragile/chaotic high-energy "question phase" during which the violation of expectation is maximized (in order to adequately propagate the implications of the original discrepancy), followed by a compressive low-energy "solution phase" where correctness of expectation is maximized again.

In order to make this work, I think the brain is specifically designed to avoid being "robust"—though here I'm using a more narrow definition of the word than I suspect you intended. Specifically, there are several homeostatic mechanisms which make the brain-state hug the border between phase-transitions as tightly as possible. In other words, the brain maximizes dynamic correlation length between neurons[4], which is when they have the greatest ability to influence each other across long distances (aka "communicate"). This is called the critical brain hypothesis, and it suggests that good thinking is necessarily chaotic in some sense.

Another point is that insight-making is anti-inductive.[5] Theoretical reasoning is a frontier that's continuously being exploited based on the brain's native Value-of-Information-estimator, which means that the forests with the highest naively-calculated-VoI are also less likely to have any low-hanging fruit remaining. What this implies is that novel insights are likely to be very narrow targets—which means they could be really hard to hold on to for the brief moment between initial hunch and build-up of salience. (Concise handle: epistemic frontiers are anti-inductive.)

  1. ^

    I scope my arguments only to "theoretical processing" (i.e. purely introspective stuff like math), and I don't think they apply to "empirical processing".

  2. ^

    Harmonic (red) vs inharmonic (blue) waveforms. When a waveform is harmonic, efferent neural ensembles can quickly entrain to it and stay in sync with minimal metabolic cost. Alternatively, in the context of predictive processing, we can say that "top-down predictions" quickly "learn to predict" bottom-up stimuli.

  3. ^

    I basically think musical pleasure (and aesthetic pleasure more generally) maps to 1) the build-up of expectations, 2) the violation of those expectations, and 3) the resolution of those violated expectations. Good art has to constantly balance between breaking and affirming automatic expectations. I think the aesthetic chills associated with insights are caused by the same structure as appoggiaturas—the one-period delay of an expected tone at the end of a highly predictable sequence.

  4. ^

    I highly recommend this entire YT series!

  5. ^

    I think the term originates from Eliezer, but Q Home has more relevant discussion on it—also I'm just a big fan of their chaoticoptimal reasoning style in general. Can recommend! 🍵

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-21T03:31:39.168Z · LW · GW

I think both "jog, don't sprint" and "sprint, don't jog" are too low-dimensional as advice. It's good to try to spend 100% of one's resources on doing good—sorta tautologically. What allows Johannes to work as hard as he does, I think, is not (just) that he's obsessed with the work, it's rather that he understands his own mind well enough to navigate around its limits. And that self-insight is also what enables him to aim his cognition at what matters—which is a trait I care more about than ability to work hard.

People who are good at aiming their cognition at what matters sometimes choose to purposefwly flout[1] various social expectations in order to communicate "I see through this distracting social convention and I'm willing to break it in order to aim myself more purely at what matters". Readers who haven't noticed that some of their expectations are actually superfluous or misaligned with altruistic impact, will mistakenly think the flouter has low impact-potential or is just socially incompetent.

By writing the way he does, Johannes signals that he's distancing himself from status-related putative proxies-for-effectiveness, and I think that's a hard requirement for aiming more purely at the conjunction of multipliers[2] that matter. But his signals will be invisible to people who aren't also highly attuned to that conjunction.

  1. ^

    "flouting a social expectation": choosing to disregard it while being fully aware of its existence, in a not-mean-spirited way.

  2. ^

    I think the post uses an odd definition of "conjunction", but it points to something important regardless. My term for this bag of nearby considerations is "costs of compromise":

    there are exponential costs to compromising what you are optimizing for in order to appeal to a wider variety of interests

Comment by Emrik (Emrik North) on Deep Learning Systems Are Not Less Interpretable Than Logic/Probability/Etc · 2024-05-19T13:25:53.093Z · LW · GW

The links/graphics are broken btw. Would probably be nice to fix if it's quick.

Comment by Emrik (Emrik North) on The power of finite and the weakness of infinite binary point numbers · 2024-05-19T11:00:58.693Z · LW · GW

Learning math fundamentals from a textbook, rather than via one's own sense of where the densest confusions are, is sort of an oxymoron. If you want to be rigorous, you should do anything but defer to consensus.

And from a socioepistemological perspective: if you want math fundamentals to be rigorous, you'd encourage people to try to come up with their own fundamentals before they einstellung on what's been written before. If the fundamentals are robust, they're likely to rediscover it; if they aren't, there's a chance they'll revolutionize the field.

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T02:08:18.452Z · LW · GW

It's a reasonable concern to have, but I've spoken enough with him to know that he's not out of touch with reality. I do think he's out of sync with social reality, however, and as a result I also think this post is badly written and the anecdotes unwisely overemphasized. His willingness to step out of social reality in order to stay grounded with what's real, however, is exactly one of the main traits that make me hopefwl about him.

I have another friend who's bipolar and has manic episodes. My ex-step-father also had rapid-cycling BP, so I know a bit about what it looks like when somebody's manic.[1] They have larger-than-usual gaps in their ability to notice their effects on other people, and it's obvious in conversation with them. When I was in a 3-person conversation with Johannes, he was highly attuned to the emotions and wellbeing of others, so I have no reason to think he has obvious mania-like blindspots here.

But when you start tuning yourself hard to reality, you usually end up weird in a way that's distinct from the weirdness associated with mania. Onlookers who don't know the difference may fail to distinguish the underlying causes, however. ("Weirdness" is a larger cluster than "normality", but people mostly practice distinguishing between samples of normality, so weirdness all looks the same to them.)

  1. ^

    I was also evaluated for it after an outlier depressive episode in 2021, so I got to see the diagnostic process up close. Turns out I just have recurring depressions, and I'm not bipolar.

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-19T00:07:12.302Z · LW · GW

He linked his extensive research log on the project above, and has made LW posts of some of their progress. That said, I don't know of any good legible summary of it. It would be good to have. I don't know if that's one of Johannes' top priorities, however. It's never obvious from the outside what somebody's top priorities ought to be.

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-18T23:41:10.456Z · LW · GW

Surely you could work for free as an engineer at an AI alignment org or something and then shift into discussions w/ them about alignment? 

To be clear: his motivation isn't "I want to contribute to alignment research!" He's aiming to actually solve the problem. If he works as an engineer at an org, he's not pursuing his project, and he'd be approximately 0% as usefwl.

Comment by Emrik (Emrik North) on Fund me please - I Work so Hard that my Feet start Bleeding and I Need to Infiltrate University · 2024-05-18T23:37:48.747Z · LW · GW

I strongly endorse Johannes' research approach. I've had 6 meetings with him, and have read/watched a decent chunk of his posts and YT vids. I think the project is very unlikely to work, but that's true of all projects I know of, and this one seems at least better than almost all of them. (Reality doesn't grade on a curve.)

Still, I really hope funders would consider funding the person instead of the project, since I think Johannes' potential will be severely stifled unless he has the opportunity to go "oops! I guess I ought to be doing something else instead" as soon as he discovers some intractable bottleneck wrt his current project. He's literally the person I have the most confidence in when it comes to swiftly changing path to whatever he thinks is optimal, and it would be a real shame if funding gave him an incentive to not notice reasons to pivot. (For more on this, see e.g. Steve's post.)

I realize my endorsement doesn't carry much weight for people who don't know me, and I don't have much general clout here, but if you're curious here's my EA forum profile and twitter. On LW, I'm mostly these users {this, 1, 2, 3, 4}. Some other things which I hope will nudge you to take my endorsement a bit more seriously:

  • I've been working full-time on AI alignment since early 2022.
    • I rarely post about my work, however, since I'm not trying to "contribute"—I'm trying to do.
  • EA has been my top life-priority since 2014 (I was 21).
  • I've read the Sequences in their entirety at least once. (Low bar, but worth mentioning.)
  • I have no academic or professional background because I'm financially secure with disability money. This means I can spend 100% of my time following my own sense of what's optimal for me without having to take orders or produce impressive/legible artifacts.
    • I think Johannes will be much more effective if he has the same freedom, and is not tied to any particular project. I really doubt anyone other than him will be better able to evaluate what the optimal use of his time is.

Edit: I should mention that Johannes hasn't prompted me to say any of this. I took notice of him due to the candor of his posts and reached out by myself a few months ago.

Comment by Emrik (Emrik North) on quila's Shortform · 2024-05-17T03:43:52.706Z · LW · GW

EDIT: I uploaded a better example here (18m18s):


Old example still here (7m25s).

Comment by Emrik (Emrik North) on quila's Shortform · 2024-05-17T02:52:46.105Z · LW · GW

Epic Lizka post is epic.

Also, I absolutely love the word "shard" but my brain refuses to use it because then it feels like we won't get credit for discovering these notions by ourselves. Well, also just because the words "domain", "context", "scope", "niche", "trigger", "preimage" (wrt a neural function/policy / "neureme") adequately serve the same purpose and are currently more semantically/semiotically granular in my head.

trigger/preimage ⊆ scope ⊆ domain[1]

"niche" is a category in function space (including domain, operation, and codomain), "domain" is a set.

"scope" is great because of programming connotations and can be used as a verb. "This neural function is scoped to these contexts."

  1. ^

    EDIT: ig I use "scope" and "domain" in a way which doesn't neatly mean one is a subset of the other. I want to be able to distinguish between "the set of inputs it's currently applied to" and "the set of inputs it should be applied to" and "the set of inputs it could be applied to", but I don't have adequate words here.
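A toy sketch of how I'd use these words in code (entirely my own framing, not standard terminology):

```python
# A "neural function": formally defined on any stimulus (its domain),
# but only applied within a narrower set of contexts (its scope),
# and so far only ever elicited by some of those (its triggers/preimage).

def flinch(stimulus: str) -> str:
    """Domain: any stimulus string. Codomain: an action."""
    return "withdraw hand"

scope = {"hot stove", "open flame"}   # contexts the function is scoped to
triggers = {"hot stove"}              # inputs that have actually elicited it

def run(stimulus):
    # the function fires only within its scope
    return flinch(stimulus) if stimulus in scope else None

print(run("hot stove"))  # withdraw hand
print(run("warm bath"))  # None: in the domain, but out of scope
```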

Comment by Emrik (Emrik North) on quila's Shortform · 2024-05-16T22:35:07.353Z · LW · GW

Aaron Bergman has a vid of himself typing new sentences in real-time, which I found really helpfwl.[1] I wish I could watch lots of people record themselves typing, so I could compare what I do.

Being slow at writing can be a sign of failure or winning, depending on the exact reasons why you're slow. I'd worry about being "too good" at writing, since that'd be evidence that your brain is conforming your thoughts to the language, instead of conforming your language to your thoughts. English is just a really poor medium for thought (at least compared to e.g. visuals and pre-word intuitive representations), so it's potentially dangerous to care overmuch about it.

  1. ^

    Btw, Aaron is another person-recommendation. He's awesome. Has really strong self-insight, goodness-of-heart, creativity. (Twitter profile, blog+podcast, EAF, links.) I haven't personally learned a whole bunch from him yet,[2] but I expect if he continues being what he is, he'll produce lots of cool stuff which I'll learn from later.

  2. ^

    Edit: I now recall that I've learned from him: screwworms (important), and the ubiquity of left-handed chirality in nature (mildly important). He also caused me to look into two-envelopes paradox, which was usefwl for me.

    Although I later learned about screwworms from Kevin Esvelt at 80kh podcast, so I would've learned it anyway. And I also later learned about left-handed chirality from Steve Mould on YT, but I may not have reflected on it as much.

Comment by Emrik (Emrik North) on Johannes C. Mayer's Shortform · 2024-05-13T21:22:48.852Z · LW · GW

I did nearly this in ~2015. I made a folder with pictures of inspiring people (it had Eliezer Yudkowsky, Brian Tomasik, David Pearce, Grigori Perelman, Feynman, more idr), and used it as my desktop background or screensaver or both (idr).

I say this because I am surprised at how much our thoughts/actions have converged, and wish to highlight examples that demonstrate this. And I wish to communicate that because basically senpai notice me. kya.

Comment by Emrik (Emrik North) on Johannes C. Mayer's Shortform · 2024-05-13T21:10:16.339Z · LW · GW

I wrote the entry in the context of the question "how can I gain the effectiveness-benefits of confidence and extreme ambition, without distorting my world-model/expectations?"

I had recently been discovering abstract arguments that seemed to strongly suggest it would be most altruistic/effective for me to pursue extremely ambitious projects;  both because 1) the low-likelihood high-payoff quadrant had highest expected utility, but also because 2) the likelihood of success for extremely ambitious projects seemed higher than I thought.  (Plus some other reasons.)  I figured that I needn't feel confident about success in order to feel confident about the approach.

Comment by Emrik (Emrik North) on Johannes C. Mayer's Shortform · 2024-05-13T19:55:02.578Z · LW · GW

This exact thought, from my diary in ~June 2022: "I advocate keeping a clear separation between how confident you are that your plan will work and how confident you are that pursuing the plan is optimal."

I think perhaps I don't fully advocate alieving that your plan is more likely to work than you actually believe it is. Or, at least, I advocate some of that on the margin, but mostly I just advocate keeping a clear separation between how confident you are that your plan will work and how confident you are that pursuing the plan is optimal. As long as you tune yourself to be inspired by working on the optimal, you can be more ambitious and less risk-averse.

Unfortunately, if you look like you're confidently pursuing a plan (because you think it's optimal, but your reasons are not immediately observable), other people will often mistake that for confidence-in-results and perhaps conclude that you're epistemically crazy. So it's nearly always socially safer to tune yourself to confidence-in-results lest you risk being misunderstood and laughed at.

You also don't want to be so confident that what you're doing is optimal that you're unable to change your path when new evidence comes in. Your first plan is unlikely to be the best you can do, and you can only find the best you can do by trying many different things and iterating. Confidence can be an obstacle to change.

On the other hand, lack of confidence can also be an obstacle to change. If you're not confident that you can do better than you're currently doing, then you'll have a hard time motivating yourself to find alternatives. Underconfidence is probably underappreciated as a source of bias due to social humility being such a virtue.

To use myself as an example (although I didn't intend for this to be about me rather than about my general take on mindset): I feel pretty good about the ideas I've come up with so far. So now I have a choice to make: 1) I could think that the ideas are so good that I should just focus on building and clarifying them, or 2) I could use the ideas as evidence that I'm able to produce even better ideas if I keep searching. I'm aiming for the latter, and I hold my current best ideas in contempt because I'm still stuck with them. In some sense, confidence makes it easier to Actually Change My Mind.

I guess the recipe I might be advocating is:
1. distinguish between confidence-in-results and confidence-in-optimality
2. try to hold accurate and precise beliefs about the results of your plans/ideas; mistakes here are costly
3. try to alieve that you're able to produce more optimal ideas/plans than the ones you already have; mistakes here are less costly, and the gains from positive alief are much higher

I'm going to call this the Way of Aisi because it reminds me of an old friend who just did everything better than everyone else (including himself) because he had faith in himself. :p

Comment by Emrik (Emrik North) on [Concept Dependency] Edge Regular Lattice Graph · 2024-05-07T02:08:47.403Z · LW · GW

Oh cool. Another way of embedding higher dimensions in 2D. Edges don't have to visually line up as long as you label them. And if some dimension (eg 'z') is very rarely used, it takes up much less cognitive space compared to if you tried to represent it on equal terms as all other dimensions (eg as in a spatial visualisation). Not sure what I'll use it for yet tho.

Comment by Emrik (Emrik North) on How do you Select the Right Research Acitivity in the Right Moment? · 2024-05-06T12:26:44.929Z · LW · GW

personally, I try to "prepare decisions ahead of time".  so if I end up in a situation where I spend more than 10s actively prioritizing the next thing to do, smth went wrong upstream.  (prev statement is an exaggeration, but it's in the direction of what I aspire to lurn)

as an example, here's how I've summarized the above principle to myself in my notes:

(note: these titles are v likely to cause misunderstanding if u don't already know what I mean by them; I try to avoid optimizing my notes for others' viewing, so I'll never bother caveating to myself what I'll remember anyway)

I bascly want to batch process my high-level prioritization, bc I notice that I'm v bad at bird-level perspective when I'm deep in the weeds of some particular project/idea.  when I'm doing smth w many potential rabbit-holes (eg programming/design), I set a timer (~35m, but varies) for forcing myself to step back and reflect on what I'm doing (atm, I do this less than once a week; but I do an alternative which takes longer to explain).

I'm prob wasting 95% of my time on unnecessary rabbit-holes that cud be obviated if only I'd spent more Manual Effort ahead of time.  there's ~always a shorter path to my target, and it's easier to spot from a higher vantage-point/perspective.

as for figuring out what and how to distill…

Context-Logistics Framework

  • one of my project-aspirations is to make a "context-logistics framework" for ensuring that the right tidbits of information (eg excerpts fm my knowledge-network) pop up precisely in the contexts where I'm most likely to find use for them.
    • this can be based on eg window titles
      • eg auto-load my checklist for buying drugs when I visit, and display it on my side-monitor
    • or it can be a script which runs on every detected context-switch
      • eg ask GPT-vision to summarize what it looks like I'm trying to achieve based on screenshot-context, and then ask it to fetch relevant entries from my notes, or provide a list of nonobvious concrete tips ppl in my situation tend to be unaware of
        • prob not worth the effort if using GPT-4 tho, way too verbose and unable to say "I've got nothing"
    • a concrete use-case for smth-like-this is to display all available keyboard-shortcuts filtered by current context, which updates based on every key I'm holding (or key-history, if including chords).
      • I've looked for but not found any adequate app (or vscode extension) for this.
      • in my proof-of-concept AHK script, this infobox appears bottom-right of my monitor when I hold CapsLock for longer than 350ms:
  • my motivation for wanting smth-like-this is j observing that looking things up (even w a highly-distilled network of notes) and writing things in takes way too long, so I end up j using my brain instead (this is good exercise, but I want to free up mental capacity & motivation for other things).
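
a minimal Python sketch of the window-title-based variant of the above (the title-patterns, note filenames, and polling approach are all hypothetical illustrations, not the actual framework):

```python
import fnmatch
import time

# Hypothetical mapping from window-title patterns to note snippets to surface.
# All patterns and filenames here are made-up examples.
CONTEXT_NOTES = {
    "*Visual Studio Code*": ["vscode-shortcuts.md", "debugging-checklist.md"],
    "*Gmail*": ["email-batching-rules.md"],
    "*pharmacy*": ["drug-buying-checklist.md"],
}

def notes_for_context(window_title: str) -> list:
    """Return every note whose title-pattern matches the active window."""
    matches = []
    for pattern, notes in CONTEXT_NOTES.items():
        if fnmatch.fnmatch(window_title, pattern):
            matches.extend(notes)
    return matches

def watch(get_active_title, on_context_switch, poll_seconds=1.0):
    """Poll the active window title and fire a callback on each switch."""
    last_title = None
    while True:
        title = get_active_title()
        if title != last_title:
            on_context_switch(title, notes_for_context(title))
            last_title = title
        time.sleep(poll_seconds)
```

actually fetching the active window title is OS-specific (AHK's `WinGetActiveTitle` on Windows, `xdotool` on Linux, etc.), which is why it's passed in as a callback here.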

Prophylactic Scope-Abstraction

  • the ~most important Manual Cognitive Algorithm (MCA) I use is:
    • Prophylactic Scope-Abstraction:
      WHEN I see an interesting pattern/function,
      1. try to imagine several specific contexts in which recalling the pattern could be usefwl
      2. spot similarities and understand the minimal shared essence that unites the contexts
        1. eg sorta like a minimal Markov blanket over the variables in context-space which are necessary for defining the contexts? or their list of shared dependencies? the overlap of their preimages?
      3. express that minimal shared essence in abstract/generalized terms
      4. then use that (and variations thereof) as ur note title, or spaced repetition, or j say it out loud a few times
    • this happens to be exactly the process I used to generate the term "prophylactic scope-abstraction" in the first place.
    • other examples of abstracted scopes for interesting patterns:
      • Giffen paradox
        • > "I want to think of this concept whenever I'm trying to balance a portfolio of resources/expenditures, over which I have varying diminishing marginal returns; especially if they have threshold-effects."
        • this enabled me to think in terms of "portfolio-management" more generally, and spot Giffen-effects in my own motivations/life, eg:
          "when the energetic cost of leisure goes up, I end up doing more of it"
          • patterns are always simpler than they appear.
      • Berkson's paradox
        • > "I want to think of this concept whenever I see a multidimensional distribution/list sorted according to an aggregate dimension (eg avg, sum, prod) or when I see an aggregate sorting-mechanism over the same domain."
    • it's important bc the brain doesn't automatically do this unless trained.  and the only way interesting patterns can be usefwl is if they are used; and while trying to mk novel epistemic contributions, that implies u need to hook patterns into contexts they haven't been used in bfr.  I didn't anticipate that this was gonna be my ~most important MCA when I initially started adopting it, but one year into it, I've seen it work too many times to ignore.
      • notice that the cost of this technique is upfront effort (hence "prophylactic"), which explains why the brain doesn't do it automatically.
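
the Berkson scope above can be demonstrated in a few lines: take two independent qualities, sort by their sum, and the top slice shows a spurious negative correlation (all numbers illustrative):

```python
import random

random.seed(0)

# Two independent qualities (eg "fun" and "importance" of a project).
n = 10_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]

def corr(a, b):
    """Pearson correlation, computed by hand to stay dependency-free."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((y - mb) ** 2 for y in b) / len(b)
    return cov / (va * vb) ** 0.5

# Sort by the aggregate dimension (sum) and keep the top 10% --
# exactly the selection-on-a-collider that produces Berkson's paradox.
top = sorted(zip(xs, ys), key=lambda p: p[0] + p[1], reverse=True)[: n // 10]
tx, ty = zip(*top)

full_corr = corr(xs, ys)      # near zero: independent by construction
selected_corr = corr(tx, ty)  # clearly negative among the selected
```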

examples of distilled notes

  • some examples of how I write distilled notes to myself:
    • (note: I'm not expecting any of this to be understood, I j think it's more effective communication to just show the practical manifestations of my way-of-doing-things, instead of words-words-words-ing.)
    • I also write statements I think are currently wrong into my net, eg bc that's the most efficient way of storing the current state of my confusion.  in this note, I've yet to find the precise way to synthesize the ideas, but I know a way must exist:

Comment by Emrik (Emrik North) on Three Fables of Magical Girls and Longtermism · 2022-12-02T22:44:06.857Z · LW · GW

Still the only anime with what at least half-passes for a good ending. Food for thought, thanks! 👍

Comment by Emrik (Emrik North) on Thomas Larsen's Shortform · 2022-11-09T20:16:24.088Z · LW · GW

I've been exploring evolutionary metaphors to ML, so here's a toy metaphor for RLHF: recessive persistence. (Still just trying to learn both fields, however.)

"Since loss-of-function mutations tend to be recessive (given that dominant mutations of this type generally prevent the organism from reproducing and thereby passing the gene on to the next generation), the result of any cross between the two populations will be fitter than the parent." (k)


Recessive alleles persist due to overdominance letting detrimental alleles hitchhike on their fitness-enhancing dominant counterparts. The detrimental effect on fitness only shows up when two recessive alleles inhabit the same locus, which can be rare enough that the dominant allele still causes the pair to be selected for in a stable equilibrium.
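
a toy simulation of that stable equilibrium under the textbook one-locus viability-selection recursion (fitness values are made up for illustration; the analytic equilibrium is p* = t/(s+t)):

```python
# Overdominance (heterozygote advantage): homozygotes AA and aa have fitness
# 1-s and 1-t respectively, heterozygotes Aa have fitness 1, so the recessive
# deleterious allele persists at a stable equilibrium instead of dying out.
s, t = 0.1, 0.3   # illustrative fitness costs of the two homozygotes

def next_freq(p):
    """One generation of viability selection; p = frequency of allele A."""
    q = 1 - p
    mean_fitness = 1 - s * p**2 - t * q**2
    return (p**2 * (1 - s) + p * q) / mean_fitness

p = 0.01          # start allele A rare
for _ in range(2000):
    p = next_freq(p)

equilibrium = t / (s + t)   # analytic stable equilibrium: p* = t/(s+t) = 0.75
```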

The metaphor with deception breaks down due to the unit of selection. Parts of DNA are stuck much closer together than neurons in the brain or parameters in a neural network, so they're passed down or reinforced in bulk. This is what makes hitchhiking so common in genetic evolution.

(I imagine you can have chunks that are updated together for a while in ML as well, but I expect that to be transient and uncommon. Idk.)

Bonus point: recessive phase shift.

"Allele-frequency change under directional selection favoring (black) a dominant advantageous allele and (red) a recessive advantageous allele." (source)

In ML:

  1. Generalisable non-memorising patterns start out small/sparse/simple.
  2. Which means that input patterns rarely activate it, because it's a small target to hit.
  3. But most of the time it is activated, it gets reinforced (at least more reliably than memorised patterns).
  4. So it gradually causes upstream neurons to point to it with greater weight, taking up more of the input range over time. Kinda like a distributed bottleneck.
  5. Some magic exponential thing, and then phase shift!

One way the metaphor partially breaks down is that DNA doesn't have weight decay at all, which allows recessive beneficial mutations to very slowly approach fixation.
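
the dominant-vs-recessive frequency curves described in the quoted figure can be reproduced with the standard one-locus selection recursions (selection coefficient illustrative): the recessive advantageous allele barely moves while rare, then phase-shifts upward.

```python
# Directional selection for an advantageous allele A with advantage s.
s = 0.05

def recessive_step(p):
    # Recessive advantage: AA has fitness 1+s; Aa and aa have fitness 1.
    return p * (1 + s * p) / (1 + s * p**2)

def dominant_step(p):
    # Dominant advantage: AA and Aa have fitness 1+s; aa has fitness 1.
    q = 1 - p
    return p * (1 + s) / (1 + s * (1 - q**2))

def sweep(step, p0=0.01, gens=3000):
    traj = [p0]
    for _ in range(gens):
        traj.append(step(traj[-1]))
    return traj

rec = sweep(recessive_step)
dom = sweep(dominant_step)
# Early on the recessive allele has barely moved while the dominant one has
# already swept; late in the run the recessive allele catches up past 0.5 --
# the slow-burn-then-phase-shift shape in the ML analogy above.
```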

Comment by Emrik (Emrik North) on AllAmericanBreakfast's Shortform · 2022-11-09T19:30:11.677Z · LW · GW

Eigen's paradox is one of the most intractable puzzles in the study of the origins of life. It is thought that the error threshold concept described above limits the size of self replicating molecules to perhaps a few hundred digits, yet almost all life on earth requires much longer molecules to encode their genetic information. This problem is handled in living cells by enzymes that repair mutations, allowing the encoding molecules to reach sizes on the order of millions of base pairs. These large molecules must, of course, encode the very enzymes that repair them, and herein lies Eigen's paradox...

(I'm not making any point, just wanted to point to interesting related thing.)

Comment by Emrik (Emrik North) on Emrik's Shortform · 2022-11-08T20:32:04.425Z · LW · GW

Seems like Andy Matuschak feels the same way about spaced repetition being a great tool for innovation.

Comment by Emrik (Emrik North) on Moneypumping Bryan Caplan's Belief in Free Will · 2022-11-07T13:16:34.540Z · LW · GW

I like the framing. Seems generally usefwl somehow. If you see someone believing something you think is inconsistent, think about how to money-pump them. If you can't, then are you sure they're being inconsistent? Of course, there are lots of inconsistent beliefs that you can't money-pump, but seems usefwl to have a habit of checking. Thanks!

Comment by Emrik (Emrik North) on Instead of technical research, more people should focus on buying time · 2022-11-06T09:53:45.221Z · LW · GW

How do you account for the fact that the impact of a particular contribution to object-level alignment research can compound over time?

  1. Let's say I have a technical alignment idea now that is both hard to learn and very usefwl, such that every recipient of it does alignment research a little more efficiently. But it takes time before that idea disseminates across the community.
    1. At first, only a few people bother to learn it sufficiently to understand that it's valuable. But every person that does so adds to the total strength of the signal that tells the rest of the community that they should prioritise learning this.
    2. Not sure if this is the right framework, but let's say that researchers will only bother learning it if the strength of the signal hits their person-specific threshold for prioritising it.
    3. Researchers are normally distributed (or something) over threshold height, and the strength of the signal starts out below the peak of the distribution.
    4. Then (under some assumptions about the strength of individual signals and the distribution of threshold height), every learner that adds to the signal will, at first, attract more than one learner that adds to the signal, until the signal passes the peak of the distribution and the idea reaches satiation/fixation in the community.
  2. If something like the above model is correct, then the impact of alignment research plausibly goes down over time.
    1. But the same is true of a lot of time-buying work (like outreach). I don't know how to balance this, but I am now a little more skeptical of the relative value of buying time.
  3. Importantly, this is not the same as "outreach". Strong technical alignment ideas are most likely inaccessible to almost everyone outside the community, so the idea doesn't increase the number of people working on alignment.
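
the threshold model in point 1 above can be sketched as a Granovetter-style cascade (all parameter values are illustrative, not empirical):

```python
import random

random.seed(42)

# Each researcher has a personal threshold for how strong the "learn this"
# signal must be before they invest in the idea; thresholds are roughly
# normally distributed, and every new learner adds to the signal.
n = 1000
thresholds = [max(0.0, random.gauss(50, 20)) for _ in range(n)]
signal_per_learner = 0.1

def cascade(initial_signal):
    """Run adoption to a fixed point; return the final number of learners."""
    signal = initial_signal
    learned = [False] * n
    while True:
        newly = [i for i, t in enumerate(thresholds)
                 if not learned[i] and signal >= t]
        if not newly:
            return sum(learned)
        for i in newly:
            learned[i] = True
        signal += signal_per_learner * len(newly)

few = cascade(initial_signal=5.0)    # below the peak: the signal fizzles out
many = cascade(initial_signal=30.0)  # past the tipping point: near-fixation
```

the two regimes illustrate the claim: below a critical initial signal each learner attracts less than one additional learner and adoption stalls, past it the idea runs to fixation.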

Comment by Emrik (Emrik North) on Instead of technical research, more people should focus on buying time · 2022-11-06T02:46:40.873Z · LW · GW

That's fair, but sorry[1] I misstated my intended question. I meant that I was under the impression that you didn't understand the argument, not that you didn't understand the action they advocated for.

I understand that your post and this post argue for actions that are similar in effect. And your post is definitely relevant to the question I asked in my first comment, so I appreciate you linking it.

  1. ^

    Actually sorry. Asking someone a question that you don't expect yourself or the person to benefit from is not nice, even if it was just due to careless phrasing. I just wasted your time.