Posts

Non-Confusion 2024-03-12T02:46:27.853Z
Mushin 2024-03-06T03:27:14.491Z
flowing like water; hard like stone 2024-02-20T03:20:46.531Z
Option Space Nomenclature 2023-12-08T23:14:54.204Z
Looking for a post I read if anyone recognizes it 2023-05-10T01:24:58.005Z
Programming an IFS for alternate uses 2023-04-29T00:25:38.980Z
Parametrize Priority Evaluations 2023-04-08T00:39:15.259Z

Comments

Comment by SilverFlame on What subjects are unexpectedly high-utility? · 2024-01-26T18:31:41.244Z · LW · GW
  • Learning about the trigger conditions for serotonin, oxytocin, dopamine, and cortisol, which allowed for more direct optimization away from cortisol activations

This idea started when a coworker pointed me at this article in 2020: The DOCS Happiness Model. I then did some naturalist studies with that framing in mind, and managed to reduce cortisol activations that I considered "unhelpful" by a significant degree. I consider this of high value to people who have enough control over their environment to meaningfully optimize against cortisol triggers.

  • Using method acting and other mimicry skills to more quickly learn from experts I was already trying to learn from

This was mostly learned via self-experimentation.

This is a large part of what I call my "skill stealing" skill tree, which nowadays mainly focuses on training an IFS "voice" that possesses knowledge of the skill or skill set in question. The stronger forms of these techniques tend to eat a lot of processing cycles and make it hard to maintain other parts of a "self image" while you use them, so be wary of that pitfall.

If you do want to pursue it, remember to focus on aligning as much of your thought process in that field with the expert's thought process as seems appropriate, instead of just becoming able to sound like them. There are a lot of layers and details to master in this process, but even lesser forms can start showing value quickly.

  • Applying operating system architecture knowledge to my internal thinking patterns to allow more efficient multithreading and context switching

This was mostly learned via self-experimentation.

This is performed by analyzing where there seem to be bottlenecks in my personal processing speed, and then doing some tests to see if I can nudge things towards a slightly different architecture to reduce the constraint. Which changes are needed and when seems to be pretty individual-specific, but here are some things I did (a toy sketch of the "circles" idea follows the list):

  • Practice switching between commonly-used headspaces to make such transitions more reflexive (and thus cheaper in both energy and time)
  • Train a "scheduler" and figure out how to let it cut off trains of thought that aren't a priority at the moment (there are many pitfalls to doing this poorly; approach carefully)
  • Start grouping my IFS "skillset voices" into semi-specialized "circles" I can switch between to partition which ones are "active" at different times, which saved processing resources during active calculations and reduced signal noise during queries; picture having different predefined parties to choose from in an RPG
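
For concreteness, here is a toy Python sketch of the "circles" idea from that last bullet. The circle names and skill labels are invented, and the real thing is a mental practice rather than literal code; this only models the bookkeeping involved.

```python
# Toy model of grouping "skillset voices" into switchable circles.
# All names are hypothetical; this only illustrates the partitioning idea.

class SkillVoice:
    def __init__(self, name):
        self.name = name

    def consult(self, query):
        # Stand-in for "asking" the voice; the real step is introspective.
        return f"{self.name}'s take on {query!r}"

class CircleManager:
    """Keeps one predefined party of voices active at a time."""

    def __init__(self, circles):
        self.circles = circles  # circle name -> list of SkillVoice
        self.active = []

    def switch_to(self, circle_name):
        # Practiced switches make this transition cheap; everything
        # outside the active circle stays dormant.
        self.active = self.circles[circle_name]

    def query(self, question):
        # Only active voices respond, reducing "signal noise".
        return [voice.consult(question) for voice in self.active]

circles = {
    "engineering": [SkillVoice("debugger"), SkillVoice("architect")],
    "social": [SkillVoice("diplomat"), SkillVoice("listener")],
}
manager = CircleManager(circles)
manager.switch_to("engineering")
print(manager.query("why is the build failing?"))
```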

In general, your guiding creed should be "know your constraints and know your capabilities".

The mimicry and OS architecture applications have borne a lot of fruit over the years, but both can take time to yield their first results, even when they are a good fit for your setup. The mimicry tactics are probably useful to anyone who wants to benefit from cooperative experts, but the OS tactic doesn't seem as widely useful.

Comment by SilverFlame on What subjects are unexpectedly high-utility? · 2024-01-26T16:27:54.682Z · LW · GW
  • Learning about the trigger conditions for serotonin, oxytocin, dopamine, and cortisol, which allowed for more direct optimization away from cortisol activations
  • Using method acting and other mimicry skills to more quickly learn from experts I was already trying to learn from
  • Applying operating system architecture knowledge to my internal thinking patterns to allow more efficient multithreading and context switching

Comment by SilverFlame on What rationality failure modes are there? · 2024-01-20T17:02:29.861Z · LW · GW

The two failure modes I observe most often are not exclusive to rationality, but might still be helpful to consider.

  1. Over-reliance on System 2 processing for improvement and a failure to make useful and/or intentional adjustments to System 1 processing
  2. Failing to base value estimations upon changes you can actually cause in reality, often focusing upon "virtual" value categories instead of the ones you might systemically prefer (this is best presented in LoganStrohl's How to Hug the Query)

Comment by SilverFlame on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-18T00:15:02.909Z · LW · GW

The decision was generated by my intuition, since I've done the math on this question before; it did not draw from a specific "gut feeling" beyond my querying that heavily-programmed intuition with the appropriate inputs.

Your question has raised to mind some specific deviations of my perspective I have not explicitly mentioned yet:

  • I spent a large amount of time tracing which virtues I value and what sorts of "value" I care about, and have since spent 5-ish years using that knowledge to "automate" calculations that take such information as input, by training my intuition to do as much of the process as is reasonable
    • I know what my value categories are (even if I don't usually share the full list) and why they're on the list (and why some things aren't on the list)
    • My "decision engine" is trained to be capable of adding "research X to improve confidence" options when making decisions
      • If time or resources demand an immediate decision, then I will make a call based on the estimates I can make with minimal hesitation
    • This system is actively maintained
  • I do not consider lives "priceless"; I will perform some sort of valuation when they are relevant to a calculation
    • An individual is valued via my estimates of their replacement cost, which can sometimes be alarmingly high in the case of unique individuals
    • Groups I can't easily gather data on are estimated using intuition-driven distributions of my expectations for the density of people capable of gathering/using influence and the density of awful people
    • My estimations and their underlying metrics are generally kept internal and subject to change because I find it socially detrimental to discuss such things without a pressing need being present
  • Two "value categories" I track are "allows timelines where Super Good Things happen" and "allows timelines where Super Bad Things happen"
    • These categories have some of the strongest weights in the list of categories
    • They specifically cover things I think would be Super Good/Bad to happen, either to myself or others
  • I estimate that skilled awful people having an unlimited lifespan would be a Super Bad Thing, so timelines that allow it are heavily weighted against
    • Awful people can convert "normal" people to expand the numbers of awful people, and given a lack of pressure even "average" people can trend towards being awful
    • The influence-accumulation curves I have personally observed and estimated look exponential, barring major external intervention and resource limitations; currently, the finite lifespan of humans forces each awful person to work through the slow-growth part of their curve before hitting their stride (a toy illustration of this asymmetry follows)
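
As an illustration of that last point (not a claim about real data), here is a minimal sketch comparing exponential influence accumulation cut off by a finite lifespan against the same curve left to run much longer; the growth rate and time spans are arbitrary.

```python
# Toy comparison: exponential influence growth with and without a lifespan cap.
# The growth rate and horizons are arbitrary illustrative numbers.

def influence(years, rate=0.1, start=1.0):
    """Influence after `years` of compounding at `rate` per year."""
    return start * (1 + rate) ** years

mortal_horizon = 50     # roughly one adult career
immortal_horizon = 200  # an arbitrary longer run

print(f"50 years:  {influence(mortal_horizon):,.0f}x")
print(f"200 years: {influence(immortal_horizon):,.0f}x")
# 50 years of 10% compounding is roughly a 117x gain; 200 years is roughly
# 190,000,000x. The slow-growth phase becomes a one-time cost, which is the
# asymmetry the bullet above is pointing at.
```
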
Comment by SilverFlame on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T23:18:58.882Z · LW · GW

Ok.  So remember, your choices are:

  1. Lock away the technology for some time
  2. Release it now

You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.

Even with this context, my calculations come out the same. It appears that our estimations of the value (and possibly sacred-ness) of lives are different, as well as our allocations of relative weights for such things. I don't know that I have anything further worth mentioning, and am satisfied with my presentation of the paths my process follows.

Comment by SilverFlame on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T22:51:06.875Z · LW · GW

I'm not sure your position is coherent.  You, as a SWE, know that you can keep producing turing complete emulations and keep any possible software from the past working, with slight patches.  (for example, early game console games depended on UDB to work at all).

Source code and binary files would qualify as "immortal" by most definitions, but my experience using Linux and assisting in software rehosts has made me very dubious of the "immortality" of the software's usability.

Here's a brief summary of factors that contribute to that doubt:

  • Source code is usually not as portable as people think it is, and can be near-impossible to build correctly without access to sufficient documentation or copies of the original workspace(s)
  • Operating systems can be very picky about what executables they'll run, and executables also care a lot about which versions of the libraries they need are present (see the sketch after this list)
  • There are a lot of architectures out there for workspaces, networks, and systems nowadays, and information about many of them is quietly being lost to brain drain and failures to document; some of that information can be near-impossible to re-acquire afterwards
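
As a small, concrete illustration of the second point, here is a sketch that lists the shared-library dependencies of a Linux executable that cannot be resolved on the current system, using the standard ldd tool (Linux-only; treat it as a rough illustration rather than a robust checker). A missing or wrong-version library is exactly the kind of quiet breakage I mean.

```python
# Sketch: report unresolved shared-library dependencies of a Linux executable.
# Relies on the standard `ldd` tool; only meaningful for ELF binaries on Linux.
import subprocess
import sys

def missing_libraries(executable_path):
    result = subprocess.run(
        ["ldd", executable_path], capture_output=True, text=True
    )
    # ldd marks unresolved dependencies with "not found" on that line.
    return [
        line.strip()
        for line in result.stdout.splitlines()
        if "not found" in line
    ]

if __name__ == "__main__":
    missing = missing_libraries(sys.argv[1])
    if missing:
        print("Unresolved dependencies:")
        print("\n".join(missing))
    else:
        print("All shared-library dependencies resolved (for now).")
```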

 It's irrelevant if it isn't economically feasible to do so.

I do not consider economic infeasibility irrelevant when a problem can approach the scope of "a major corporation or government dogpiling the problem might have a 30% chance of solving it, and your reward will be nowhere near the price tag". It is possible that I am overestimating the infeasibility of such rehosts after suffering through some painful rehost efforts, but that is an estimate from my intuition, and thus there is little that further discussion can achieve.

While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal.  Meaning that none of these apply.

First, I make a point of asking those questions even in such a simplified context. I have spent a fair amount of time training my "option generator" and "decision processor" to embed such checklists, to minimize the chances of easily avoidable bad outcomes slipping through. The answer to the first bullet point would easily calculate as "your role has no obligations either way", but the other two questions would still be relevant.

But, to specifically answer within your clarified framing and with the idea of my choice being the governing choice in all resulting timelines, I would currently choose to withhold the information/technology, and very likely would make use of my ability to "lock away" memories to properly control the information.

The rest of your response seems reasonable enough when using the assumption that software is immortal, so I have nothing worth saying about it beyond that.

Comment by SilverFlame on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T18:51:33.094Z · LW · GW

Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?

I do not estimate the probability to be zero, but other than that my estimation metrics do not have any meaningful data to report.

Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now?

First, I'm not sure I agree that software systems are immortal. I've encountered quite a few tools and programs that are nigh-impossible to use on modern computers without extensive layers of emulation, and I expect that problem to get worse over time.

Second, I track three primary limitations on somebody's "maximum awfulness":

  • In a pre-immortality world, they have only a fixed amount of time to exert direct influence and spread awfulness that way
  • The "society" in which we operate exerts pressure on nearly everyone it encompasses, amplifying the effects of "favored" actions and reducing the effects of "unpopular" actions. This is a massive oversimplification of a very multi-pronged concept, but this isn't the right time to delve into this concept.
  • Nobody is alone in the "game", and there will almost always be someone else whose actions and influence exerts pressure on whatever a given person is trying to do, although the degree of this effect varies wildly.

If immortality enters the picture, the latter two bullet points will still apply, but I estimate that they would not be nearly as effective on their own. Given infinite time, awful people can spread their influence and create awful organizations, especially since people I consider "awful" tend to acquire influence more easily than people I consider "good" (they have fewer inhibitions and more willingness to disrespect boundaries), which suggests a strong trend towards imbalance in the long term.

Why do you think future people will be better people?

I don't necessarily think future people will be better people. I don't feel confident estimating how their "awfulness rating" would compare to current people, but if held at gunpoint I would estimate little to no change. I am curious what made you think that I held such an expectation, but you don't have to answer.

If you had some authority to affect the outcome - whether or not current people get to be immortal, or you can reserve the treatment for future people who don't exist yet - does your belief that future people will be better people justify this genocide of current people?

There would be several factors in a decision to use such authority:

  • If I gained the authority through a specific role or duty, what would the expectations of that role or duty suggest I should do? This would be a calculation in its own right, but this should be a sufficient summary.
  • Do I expect my choice to prevent the spread of immortality to be meaningful long-term? The sub-questions here would look like "If I don't allow the spread, will someone else get to make a similar choice later?"
  • Is this the right time to make the decision? (I often recommend people ask this question during important decision-making)

The first and third factors I feel are self-explanatory, but I will talk a bit more on the second factor.

I would not expect others given the same decision to necessarily make the same choice, so by most statistical distributions even one or two other people facing the same decision would greatly increase my estimate of the likelihood that someone else chooses to hit the "immortality button" (the calculation is sketched below). Therefore, if I expect the chance of "someone else chooses to press the button" to be "likely", I would then have to calculate further how much I trusted the others I expected to be making such decisions. If I expected awful people to have the opportunity to choose whether to press the button, I would favor pressing it under my own control and circumstances; if I expected good people to be my "competition", I would likely refrain and let them pursue the matter themselves.
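
To make the "even one or two other people" intuition concrete, here is the standard independence calculation; the per-person probability used is purely illustrative.

```python
# If each of n other decision-makers independently presses the button with
# probability p, the chance that at least one presses it is 1 - (1 - p)^n.
# The p value below is invented for illustration.

def p_someone_presses(p_each, n_others):
    return 1 - (1 - p_each) ** n_others

for n in (1, 2, 5):
    print(n, round(p_someone_presses(0.5, n), 3))
# n=1 -> 0.5, n=2 -> 0.75, n=5 -> 0.969: even a couple of additional
# decision-makers push "someone presses it" toward near-certainty.
```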

... does your belief that future people will be better people justify this genocide of current people?

I do not currently consider myself to have enough ability to influence the pursuit of immortality, but I have consciously chosen to prioritize other things. I also prefer to frame such matters in terms of "how much change from the expected outcome can you achieve" rather than focusing upon all the perceived badness of the expected outcome. I've found such framing to be more efficient and stabilizing in my work as a software engineer.

 

As a general note to wrap things up, I prefer to avoid exerting major influence on matters where I do not feel strongly. I find that this tends to reduce "backsplash" from such exertions and shows respect for boundaries of people in general. As the topic of pursuing immortality is clearly a strong interest of many people and it is not a strong interest of mine, I tend to refrain from taking action more overt than being willing to discuss my perspective.

Comment by SilverFlame on Why are people unkeen to immortality that would come from technological advancements and/or AI? · 2024-01-17T17:11:42.382Z · LW · GW

First, a brief summary of my personal stance on immortality:

- Escaping the effects of aging for myself does not currently rate highly on my "satisfying my core desires" metrics

- Improving my resilience to random chances of dying rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives

- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

- Personal growth can achieve a lot, but there are also parts of your "self" that can be near-impossible to get rid of, and I've noticed they tend to accumulate over time. It isn't too hard to extrapolate from there and expect a future where things have changed so much that the life you want to live just isn't possible anymore, and none of the options available are acceptable.

Some final notes:

- There are other maybe-impossible-maybe-not objectives I personally care more about that can be pursued (I am not ready to speak publicly on most of them)

- I place a decent amount of prioritization pressure on objectives that support a "duty" or "role" that I take up, when relevant; according to my estimations, my stance would change if I somehow took up a role where personal freedom from aging was required to fulfill the duty

- I do not care strongly enough to oppose the pursuit of immortality by people who are not "awful" (by my own definitions); my priorities mostly affect my own allocations of resources

- I mentioned in several places things I'm not willing to fight over, but I am somewhat willing to explain some aspects of my trains of thought. Note, however, that I am a somewhat private person and often elect silence over even acknowledging a boundary was approached.

Comment by SilverFlame on For fun: How long can you hold your breath? · 2023-12-08T23:22:48.955Z · LW · GW

1:15, with the use of some distraction and breathing techniques. Mid-20s male in decent health, but with asthma.

I remember pushing to 90 seconds at one point when experimenting with some body control techniques, but that was a couple years ago and I'd probably have to take some unhealthy measures to repeat that nowadays.

Comment by SilverFlame on Cultivating a state of mind where new ideas are born · 2023-12-08T23:06:43.776Z · LW · GW

Circling back a few months later, I have some observations from trying out this idea:

  • I found myself tossing ideas to friends and acquaintances more often, which tended to improve my relationships with them somewhat
  • I noticed that some of the ideas I was preparing to hand off to someone else had glimmers of concepts I could use for other things, which had obvious benefits
  • I didn't notice any impact to my normal ideation/processing bandwidth as a result of the change in operating method
  • Sometimes ideas I handed off to someone else would circle back later and benefit one of my own projects, although I suspect the success rates for such second-order results will vary wildly

Overall, it seems to have been worth trying, and I'll probably keep it going.

Comment by SilverFlame on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-08T01:32:42.937Z · LW · GW

My opinion is a bit mixed on LessWrong at the moment. I'm usually looking for one of two types of content whenever I peruse the site:
- Familiar Ideas Under Other Names: Descriptions of concepts and techniques I already understand that use language more approachable to "normal" people than the highly-niche jargon I use myself, which help me discuss them with others more conveniently
- Unfamiliar or Forgotten Ideas: Descriptions of concepts and techniques I haven't thought of recently or at all, which can be used as components for future projects

I've only been using the site for a few months, but I found a large initial surge of Familiar Ideas Under Other Names, and now I have my filters mostly set up to fish for possible new catches over time. Given the complexity and scope of some of my favorite posts in this category, I'm still fairly satisfied with a post meeting these requirements only showing up once a month or so. Before coming to LW, I would seldom encounter such things, so I'm still enjoying an increased intake.
I've been having a much harder time finding Unfamiliar or Forgotten Ideas, but that category has always been a tricky one to pursue even at the best of times, so it's hard to speculate one way or another about whether the current state of the site is acceptable or not.

On a more general note, I'm not able to direct much interest towards the AI discussions because they rate very poorly on the "how important is this to me" scale. I've been having to invest some effort into adjusting my filters to compensate, but I notice that there's still a lot of content that is adjacent to AI without being directly about it that sneaks in anyway. However, I haven't had long to fully exercise the filters, so I don't want to present this as some sort of major issue when it's currently just a bit tiresome.

Comment by SilverFlame on How did you make your way back from meta? · 2023-09-08T02:47:33.531Z · LW · GW

I have had similar experiences with getting lost in the meta, as well as the isolated experience that it provides. In my case, it would manifest as me focusing on trying to improve my big-picture "system metaphor" for my IFS-esque mental multi-threading architecture (one of my most useful constructs), even when I was well past the point where it was worth trying to further refine the top-down granularity.

I did notice the trend eventually, and once I consciously acknowledged the problem I was able to visualize some fairly straightforward paths away from it... except that external circumstances were keeping me from having the resources I would need to act upon those paths. As an attempt to summarize: the system-metaphor development was pretty low-computation, since it was a fishing expedition via intuition pump, while the more grounded paths I identified all required major concentration shifts away from daily routines, shifts I couldn't afford.

I ended up having to make a pretty major environment shift to try and escape that cycle, basically ditching parts of my life that were tying up the resources I wanted to draw upon to follow the non-meta paths. This was a fairly recent change and the dust hasn't yet settled, but I've been able to hit the ground running again on non-meta activities.

Comment by SilverFlame on Cultivating a state of mind where new ideas are born · 2023-08-04T16:26:55.939Z · LW · GW

Another idea if you want to push against the mental pressure that kills good ideas, from Paul Graham’s recent essay on how to do good work: “One way to do that is to ask what would be good ideas for someone else to explore. Then your subconscious won't shoot them down to protect you.” I don’t know of anyone using this technique, but it might work.

This angle of attack sounds worth investigating for myself, especially because it can circumvent self-censorship driven by other factors, such as resource availability or personal interests. I've had ideas before that I immediately knew weren't something I'd be interested in pursuing myself, and it would be a waste to automatically throw them out without trying to think of someone more willing to take up the torch.

Comment by SilverFlame on Naturalism · 2023-05-11T00:03:48.505Z · LW · GW

I think naturalism can be directed even at things "contaminated by human design", if you apply the framing correctly. In a way, that's how I started out as something of a naturalist, so it is territory I'd consider a bit familiar.

The best starting point I can offer based on Raemon's comment is to look at changes in a field of study or technology over time, preferably one you already have some interest in (perhaps AI-related?). The naturalist perspective focuses on small observations over time, so I recommend embarking on brief "nature walks" where you find some way to expose yourself to information regarding some innovation in the field, be it ancient or modern. An example of this could be reading up on a new training algorithm you are not already familiar with (since it will be easier to use Original Seeing upon), without expending too much concentration or energy upon trying to calculate major insights.

Comment by SilverFlame on Naturalist Experimentation · 2023-05-10T23:39:42.158Z · LW · GW

The goal of naturalism is to reach a point where you relate to a part of the world in such a way that perpetual learning is inevitable.

I utilize a stance that seems very similar, in spirit and in a number of details, to what is described here, and I would like to emphasize the value of frequent, small experiments for gathering knowledge and expanding awareness of options. I have found the practice valuable in reducing the complexity and investment requirements of experimentation, and it synchronizes well with the update speed of mental models and other "deep knowledge".

Comment by SilverFlame on Looking for a post I read if anyone recognizes it · 2023-05-10T11:55:22.971Z · LW · GW

That's the one, thank you!

Comment by SilverFlame on System 2 as working-memory augmented System 1 reasoning · 2023-05-01T11:54:03.959Z · LW · GW

The most notable example of a Type 2 process that chains other Type 2 processes (as well as Type 1 processes) is my "path to goal" generator, but as I sit here to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators, each with triggers for when you leave its operating scope. What I thought started as Type 2s that call Type 2s now looks more like Type 2s that set triggers via Type 1s to give other Type 2s a turn on the processor later. It's something of an indirect system, but the intentionality is there.

My visibility into the current intricacies of my pseudo-IFS is low due to the energy cost of maintaining such visibility, and circumstances do not make regaining it feasible for a while. As a result, I have some difficulty identifying specific Type 2 processes without being super implementation-specific and vague on the intricacies. I apologize for not having more helpful details on that front.

I have something a bit clearer as an example of what started as Type 2 behavior and transitioned to Type 1 behavior. I noticed at one point that I was calculating gradients on a timescale that seemed automatic. Later investigation suggested that I had ended up with a Type 1 estimator that could handle a number of common data forms I might want gradients of (it seems to resemble Riemann sums), and I have something of a felt sense for whether the form of data I'm looking at will mesh well with the estimator's scope (a rough numerical analogue is sketched below).
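
For readers who want the mathematical side of that analogy spelled out, here is a minimal sketch of the kind of cheap numerical estimate being described: a local slope taken from a couple of coarse samples of a function. The function and step size are arbitrary, and this is only an analogy for the mental process, not a description of it.

```python
# Sketch: estimate a local slope ("gradient") from two coarse samples,
# the numerical analogue of the quick, automatic estimate described above.
# The function and step size are arbitrary.

def coarse_slope(f, x, h=0.5):
    # Central difference over a wide step: cheap and approximate, but good
    # enough when you only need the trend rather than the exact derivative.
    return (f(x + h) - f(x - h)) / (2 * h)

def f(x):
    return x ** 2

print(coarse_slope(f, 3.0))  # 6.0 -- exact here, since f is quadratic
```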

Comment by SilverFlame on Why Our Kind Can't Cooperate · 2023-04-30T21:43:32.403Z · LW · GW

I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:

  • General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
  • Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
  • Complicated/technical problems also exaggerate the overhead costs of trying to harmonize thought and communication patterns amongst the team(s) due to reduced tolerance of failures

With these in mind, I would posit that a factor worth considering is that the traditional models of collaboration simply don't meet the quality and cost requirements in their unmodified form. It is quite easy to picture a rationalist determining that the cost of forging new collaboration models isn't worth the opportunity costs, especially if they aren't actively on the front lines of some issue they consider Worth It.

Comment by SilverFlame on System 2 as working-memory augmented System 1 reasoning · 2023-04-30T00:57:41.714Z · LW · GW

Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.

Something I have encountered in my own self-experiments and tinkering is Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). This meshes well with persistent Type 2 subagents that get re-used due to their practicality, and that sometimes end up resembling Type 1 subagents as their decision processes become reflexive through repetition.

Have you encountered anything similar?

Comment by SilverFlame on Parametrize Priority Evaluations · 2023-04-08T19:34:10.506Z · LW · GW

I assign weights to terminal and instrumental value differently, with instrumental value growing higher for steps that are less removed from producing terminal value and/or for steps that won't easily backslide/revert without maintenance.

As far as uncertainty goes, my general formula is to focus upon keeping plans composed of "sure bet" steps if the risk of failure is high, but I'll allow less surefire steps to be attempted if there is more wiggle room in play. This sometimes results in plans that are overly circuitous, but resistant to common points of failure. The success rate of a step is calculated from my relevant experience and practice levels, as well as awareness of any relevant environmental factors. The actual weights were developed through iteration, and are likely specific to my framework.

Here's a real example of a decision calculation, as requested:

Scenario: I'm driving home from work, and need to pick which restaurant to get dinner from.

Value Categories (a sampling):

  • Existing Desires: Is there anything I'm already in the mood for, or conversely something I'm not in the mood for?
  • Diminishing Returns: Have I chosen one or more of the options too recently, or has it been a while since I chose one of the options?
  • Travel Distance: Is it a short or long diversion from my route home to reach the restaurant(s)?
  • Price Tag: How pricey or cheap are the food options?

I don't enjoy driving much, so Travel Distance is usually the highest-ranked Value Category, thoroughly eliminating food options that are too much of a deviation from my route. Next is Existing Desires, then Diminishing Returns, which let me pursue my desires and avoid getting overexposed to things. My finances are generally in a state where Price Tag doesn't make much difference on location selection, but it will play a more noticeable role when it comes time to figure out my order.
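
Here is a minimal sketch of how a weighted version of that calculation could look in code. The weights, scores, and restaurant names are invented for illustration, and the real process is looser and more intuition-driven than this.

```python
# Toy weighted-sum version of the restaurant decision described above.
# Weights, 0-1 scores, and restaurant names are invented for illustration.

WEIGHTS = {
    "travel_distance": 0.4,    # highest-ranked category, per the comment
    "existing_desires": 0.3,
    "diminishing_returns": 0.2,
    "price_tag": 0.1,          # rarely decisive for picking the location
}

candidates = {
    "Taco Place":  {"travel_distance": 0.9, "existing_desires": 0.6,
                    "diminishing_returns": 0.4, "price_tag": 0.8},
    "Sushi Place": {"travel_distance": 0.3, "existing_desires": 0.9,
                    "diminishing_returns": 0.9, "price_tag": 0.4},
}

def score(option_values):
    # Weighted sum of the category scores for one restaurant.
    return sum(WEIGHTS[category] * value
               for category, value in option_values.items())

print({name: round(score(values), 2) for name, values in candidates.items()})
print("Pick:", max(candidates, key=lambda name: score(candidates[name])))
```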