Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z


Comment by Matt Goldenberg (mr-hire) on Beware over-use of the agent model · 2021-04-26T15:01:56.246Z · LW · GW

But what is a good second lens for looking at these conglomerations of atoms that exert power over the future? 


One interesting alternative I've been learning about recently is the Buddhist idea of "dependent origination". I'll give a brief summary of some thoughts I've had based on it, although these should definitely not be taken as an accurate representation of the actual dependent origination teaching.

The basic idea is that the delusion of agency (or, in Buddhist terms, self) comes from the conglomeration of sensations (or sensors) and desires. This leads to a clinging to things that fulfill those desires, which leads to a need to pretend there is an agent that can fulfill those desires. This then leads to the creation of more things that desire and sense (babies, AIs, whatever), to whom we pass on the same delusions. We can view each of these as individual agents, or we can simply view the whole process as one thing, a perpetual cycle of cause and effect the Buddhists call dependent origination.

Comment by Matt Goldenberg (mr-hire) on Against "Context-Free Integrity" · 2021-04-15T11:06:14.067Z · LW · GW

I unironically think this is a great example of correctly doing the thing the OP is pointing at.

Comment by Matt Goldenberg (mr-hire) on "Taking your environment as object" vs "Being subject to your environment" · 2021-04-12T02:22:48.336Z · LW · GW

I think you may mean taking your environment as object? The typical idea behind a subject-object shift is that first you are subject to a lens, then you can take it as an object to look at.

Comment by Matt Goldenberg (mr-hire) on Clubhouse · 2021-03-15T03:51:51.863Z · LW · GW

Alright I'm convinced. Does anyone know if I can emulate this if I don't have an iPhone?

Comment by Matt Goldenberg (mr-hire) on The Comprehension Curve · 2021-02-23T17:04:07.104Z · LW · GW

So, in addition to raw comprehension rate, there's also what kind of knowledge you want to foster. This probably varies from text to text. In some cases it's best to absorb a broad base of material rapidly. In other cases it's more useful to get a really detailed understanding, questioning all of the author's conclusions, working through everything yourself.


I took a stab at when to use which model in this post.

Comment by Matt Goldenberg (mr-hire) on The slopes to common sense · 2021-02-23T16:58:59.883Z · LW · GW

I think Gerald is making the point that perhaps the slope is asymmetric because the risk is asymmetric.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-23T16:46:08.124Z · LW · GW

Ahh that makes sense.

Comment by Matt Goldenberg (mr-hire) on What's your best alternate history utopia? · 2021-02-23T00:08:34.125Z · LW · GW

In particular, spaced repetition software with judicious use of machine learning on the backend. Technically this breaks the rule of "Same level of technology as today"


I'm pretty sure this already exists today in apps like Duolingo and Khan Academy; at the very least, the originator of spaced repetition wrote an article about it in 1998:
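For readers curious what the scheduling core of such apps looks like, here is a minimal sketch of the classic SM-2 update from that early SuperMemo line of work. The function name and starting values are illustrative, not any particular app's API, and real apps (including ML-backed ones) use heavily modified variants:

```python
def sm2_update(quality, repetitions, interval, ease):
    """Return (repetitions, interval_days, ease) after one review.

    quality: self-graded recall, 0 (total blackout) to 5 (perfect).
    """
    if quality < 3:
        # Failed recall: restart the repetition sequence at a 1-day interval.
        return 0, 1, ease
    if repetitions == 0:
        interval = 1
    elif repetitions == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # Ease factor drifts with answer quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return repetitions + 1, interval, ease

# Example: three successive perfect reviews of a new card.
state = (0, 0, 2.5)
for q in (5, 5, 5):
    state = sm2_update(q, *state)
```

The "machine learning on the backend" part typically amounts to replacing the hand-tuned ease-factor update above with a model fit to the user's actual recall history.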

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T23:32:05.988Z · LW · GW

Yeah, I mean, it's pretty clear to me when I'm talking about things that make me "cheerful" that my feelings are fairly scope insensitive.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-21T19:37:46.978Z · LW · GW

It seems to me there was some causal factor that caused the switch to flip to me (maybe it was reading about UDT or something), and I should be seeking to cause that same causal factor in other similar brains.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-21T03:12:10.943Z · LW · GW

How does this acausally increase their chances? I still don't get TDT; it just seems obvious that the only way this would increase their chances was if it somehow affected someone else through culture or something.

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T02:05:17.700Z · LW · GW

Ahh interesting, I replied with a potential counterexample to your attempted reconciliation, curious about your thoughts!

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T02:04:43.083Z · LW · GW

In my homo-economicus worldview, there exists a single price at which I'm exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it "the" cheerful price I have.


When I think about cheerful prices, I don't think this necessarily fits my experience.  For instance, in this comment, I talk about how even at absurdly high prices, I wouldn't be cheerful (even if I thought it was "worth it") because I would still be sad about the thing I was paying for. 

Comment by Matt Goldenberg (mr-hire) on “PR” is corrosive; “reputation” is not. · 2021-02-17T02:55:36.768Z · LW · GW

I get how honor is a spiritual concept, but I don't really get how reputation is. It seems like reputation is precisely the thing PR is concerned with, while it ignores honor.

This is why it's very confusing to me when Anna in the original post talks about "reputation", "honor", and "brand" as equivalent. Reputation and brand are precisely about worrying how others think of you (PR), whereas honor is about how you think of yourself.

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-14T04:25:40.429Z · LW · GW

Man, I love how much of a departure this is from "shut up and multiply". In many ways it's "stop multiplying and feel things". I would really love to see a synthesis of these two views (which is in many ways a practical virtue ethics vs. practical utilitarianism thing).

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-13T22:50:24.050Z · LW · GW

I pondered the dog thing for a second before realizing that I wouldn't be cheerful even at trillions of dollars, because I would still be sad about my dog being tortured. This may be a way that "cheerful price" is much less psychologically damaging than "willingness to pay" (indeed Eliezer points towards this in the article).

I suspect the same is true of many people being paid to have sex with others.


(Also, I think there's supposed to be a norm against using torture in thought experiments, so there's that.)

Comment by Matt Goldenberg (mr-hire) on A great hard day · 2021-02-11T16:08:50.061Z · LW · GW

Saving this for the next "rationalists don't win" argument.

I think this story is common in the rationalist community; going from "life is awful" to "life is great" certainly counts as a win in my book.

Comment by Matt Goldenberg (mr-hire) on Why I Am Not in Charge · 2021-02-09T11:41:10.845Z · LW · GW

My understanding of utility function is that it's impossible not to have one, even if the function is implicit, subconscious, or something that wouldn't be endorsed if it could be stated explicitly.

My understanding is that a utility function implies consistent and coherent preferences. Humans definitely don't have that; our preferences are inconsistent and subject to, for instance, framing effects.

Comment by Matt Goldenberg (mr-hire) on Fake Frameworks for Zen Meditation (Summary of Sekida's Zen Training) · 2021-02-07T17:55:33.076Z · LW · GW

Thanks. One thing I didn't quite understand in the post is why making the attention more regular through diaphragmatic breathing causes the 2nd and 3rd nen to stop being produced (if indeed those two things are linked).

Does Sekida have a framework for why the other nen fall away?

Comment by Matt Goldenberg (mr-hire) on Is the world becoming better? · 2021-02-07T17:49:52.556Z · LW · GW

I was surprised that this post didn't talk about inequality, which is a common metric used to show the world is getting worse.

The research on happiness seems to suggest that our life satisfaction is often based not solely on our standard of living, but on our standard of living relative to our peers.

This suggests a few things:

  1. If everyone's standard of living rises together, it won't do much to raise happiness.
  2. If a few people's standards of living rise more than others', this will make people less happy on average.
  3. If we are more exposed to people with a higher standard of living than ours, our happiness will go down (and if we are more exposed to people with a worse standard of living, it will go up).

Comment by Matt Goldenberg (mr-hire) on The 10,000-Hour Rule is a myth · 2021-02-06T01:23:50.396Z · LW · GW

Yeah, I think this passes the common sense test as well. It'd be quite suspicious if it took 10,000 hours to get to the top of the field of any discipline, regardless of the relative competitiveness or difficulty of different disciplines.  

On the other hand, I think frontier's point is good as well. If you don't have any data, it's reasonable to use the average as a rule of thumb.  I think the real point of Gladwell's 10,000 hour rule is "It's almost certainly going to take a ton of practice to become an expert at the thing, and you should expect and relish that."

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-06T01:17:44.612Z · LW · GW

I mean, I think that's a valid view of utopia, but my point here is that it's a very specific one.

It doesn't seem like you're making an effort to actually engage or understand what I'm saying here?  

Comment by Matt Goldenberg (mr-hire) on Reflections on the cryonics sequence · 2021-02-05T03:53:28.568Z · LW · GW

! But I think I stand by the choice to do what I did with the sequence. I said in the very first post that I was writing for "people who already think signing up for cryonics is a good idea but are putting it off because they're not sure what they actually need to do next", and I think having that narrower mission let me write a better sequence overall. 

Yeah, I mean, I guess I don't have a horse in this race; if you think it's best and you feel good about the sequence, that's what matters.

Three months ago, I had a dream my mom died while cryocrastinating, and I decided to finally start the process.  It took me about five years after first encountering the idea on LessWrong to feel comfortable enough with it that I wanted to sign up. I still think it's an incredibly long shot, and I'm probably just using it the way many people use religion – to stave off my crippling fear of death. 

FWIW, I think this is my favorite explanation/walkthrough of a reason to sign up for cryonics that I've heard. I think that often, even as rationalists who know better, it's easy to get caught in the mindset that "being rational is ignoring emotions when making decisions."

But of course, emotions are part of reality, and ignoring them is exceedingly irrational. I loved this honest, emotion-acknowledging take on cryonics and felt a personal "yes" when I imagined it introducing the sequence. Combining a very emotional reason/story with a later, very dry, fact-based sequence just feels aesthetically beautiful to me. So that may explain where I was coming from, while acknowledging you may be coming from a very different place.

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T20:39:27.551Z · LW · GW

This assumes that the only valid morals are ones that don't include other people, but only yourself. As stated in the last comment, this only works for moral systems that prize freedom/autonomy the highest over other values.  

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T09:03:30.782Z · LW · GW

No, I did mean specific.  For instance if people have specific morals they may see as dystopia anything which allows behavior outside of those morals.  Only a value system that puts freedom ahead of most or all other values would have this as a constraint.

I'm not saying it is right or wrong but it is specific.

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T00:41:44.754Z · LW · GW

Utopia is where anything mathematically possible is physically possible within finite time without destroying said utopia.

This seems like quite a specific vision of utopia.

Comment by Matt Goldenberg (mr-hire) on Reflections on the cryonics sequence · 2021-02-04T00:40:08.830Z · LW · GW

I think the story you put at the top could be an excellent addition to the sequence itself.  The sequence does a great job at "How" and "What" but less so on "Why".  I think talking about your own story and motivations could help people connect to the rest of the sequence and potentially help propel them through the difficult process.

Comment by Matt Goldenberg (mr-hire) on Jimrandomh's Shortform · 2021-02-02T03:40:02.717Z · LW · GW

And they will also be able to do the opposite, placing ads over scenic vistas.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2021-01-29T16:20:09.659Z · LW · GW

It sort of seems like Predictive Processing provides a grounded foundation for the simulation argument.

Comment by Matt Goldenberg (mr-hire) on What are some real life Inadequate Equilibria? · 2021-01-29T16:07:58.073Z · LW · GW

Just to clarify, what is allowed here? I can think of tons of scenarios where there are probably better equilibria (e.g. nuclear disarmament, no starving people, etc.) in which clearly the current state is not optimal and there's some other theoretical equilibrium that's more optimal.

Comment by Matt Goldenberg (mr-hire) on How can I find trustworthy dietary advice? · 2021-01-29T02:29:36.748Z · LW · GW

I recommend the book "The Renaissance Diet 2.0" which does a competent job summarizing the science.

Comment by Matt Goldenberg (mr-hire) on Are the consequences of groups usually highly contingent on their details? · 2021-01-29T01:52:20.475Z · LW · GW

There are other reasons that I think we're not in that world, among them serial entrepreneurs being much more successful than usual with such tactical successes.

Comment by Matt Goldenberg (mr-hire) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T15:42:03.340Z · LW · GW

I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

I'm fairly skeptical of this claim. It seems to me that even moderate differences in animal intelligence in, e.g., dogs lead to things like tool use and a better ability to communicate things to humans.

Comment by Matt Goldenberg (mr-hire) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T15:39:24.501Z · LW · GW

Do we know of materials that could make a good Dyson sphere?

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T04:32:49.388Z · LW · GW

But then if so much flexibility is possible, what is even producing this distinction between enlightenment and heaven approaches?

My guess is that there are attractors in this broad space, similar to other personality differences.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T02:20:32.882Z · LW · GW

Sure, we might find preferences, but those preferences must themselves be the result of these same brain processes over which the preferences operate,

Why must they?  Surely it's possible there are parts of the mind that are influenced by other processes outside of the predictive processing components? 

It's pretty clear to me for instance that people act differently when on psychedelics not because somehow they're making a prediction about what will happen when they're on psychedelics, but because it's actually changing the way in which the brain accesses and makes those predictions.  So it's not hard to imagine other chemicals in people's brains operating at different biological set points fundamentally altering the way their brains would like to update.  Not to mention biological brain differences,  etc.

It could be starting dispositions as well, which can then be changed, but I don't see a principled reason why that should be the case.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T01:11:19.116Z · LW · GW

Ahh, yep, I must have forgotten to edit that when I ported it over from Twitter.


Sounds downright alien, this guilt thing and all this obsession with shoulds and musts.
Or worrying about meeting expectations from boss/God/parents/whatever.
It sounds rather exhausting.

Yeah, it's very possible that you don't experience it. 

It's also possible it's there for you, but in shadow, as talked about in the article. Might be worth spending 10 minutes probing feelings around obligations to see if any sense of "not wanting to look" or "attention being yanked away" comes up, as that's a good sign that there's something there you don't want to acknowledge. 

In general, though, not everyone experiences these sorts of feelings, so it's equally possible you're one of them.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T19:24:23.662Z · LW · GW

I think of the human brain as primarily performing the activity of minimizing prediction errors. That's not literally all it does in that "prediction error" is a weird way to talk about what happens in feedback loops where the "prediction" is some fixed setpoint not readily subject to update based on learning information (e.g. setpoints for things related to survival like eating enough calories).

I tend to think that there are several of these, some of which relate to deeper emotional needs, which I think is an important distinction.

if we try to choose between these approaches, which is that it depends on the notion of there being some important distinction between minimizing prediction error one way or another. 

I'm stating that in different minds, it sure looks to me like there is indeed a fundamental preference for different ways of minimizing prediction error. I tend to call this "heaven" or "enlightenment" orientation although I think it's quite correlated with what I've heard called "masculine" or "feminine" orientation.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T19:21:56.358Z · LW · GW

What I mean by perfectionism is a desire for a certain unusually high level of challenge and thoroughness. It's not about high valuation according to a more abstract or otherwise relevant measure/goal. So making a process "more perfect" in this sense means bringing challenge and thoroughness closer to the emotionally determined comfortable levels (in particular, it might involve making something less challenging if it was too challenging originally). The words "more perfect" aren't particularly apt for this idea.

Ahh interesting, thanks for sharing!

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T14:23:12.399Z · LW · GW

Do you think that the process by which you get to rarely encountered unimportant stuff is perfect, or could you bring more perfection to the process?

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T13:03:42.709Z · LW · GW

Ahh, if it wasn't clear: when I say less effort, I don't mean "effort averaged over time" but less absolute effort (which in your case means spending less time).

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T13:01:55.286Z · LW · GW

Thanks for sharing! I think that what you are talking about is another common cause of procrastination! IME, what you are talking about is usually experienced as overwhelm or ambiguity rather than perfectionism, and it will be the subject of another article.

To be clear, I'm not invalidating that you experience this underlying fear on the surface as perfectionism; it's just not how it has presented itself to the people I've worked with.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T04:57:59.591Z · LW · GW

What would it look like to strive for perfection in your process of choosing how much effort to put into each process?

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T02:01:19.140Z · LW · GW

 to address the fundamental unhappiness that comes from wanting something at all.

I think your whole comment, and this clause in particular, comes from what I refer to as a very  "enlightenment-oriented" frame.

That is, the thing that matters is feeling good (or not feeling bad), and the goal is to get to that.

There's another perspective, that I like to call the "heaven-oriented" perspective, in which the thing that matters is achieving a world where all needs are met and nourished, and the goal is to get to that.

I have heard people coming from a more heaven-oriented perspective say that people who think they just want to be happy are making a fundamental category error, and not in touch with what they actually care about.

I have heard people coming from a more enlightenment-oriented perspective say that people who think they want to achieve a state of the world are making a fundamental category error, and not in touch with what they actually care about.

My take, having worked with lots of people and guided introspection into fundamental motivations, is that both of these are true for different people. My current frame is that these perspectives are more like fundamental dispositions, with people leaning more towards enlightenment or heaven (some at the extremes and some at different places along the spectrum), although it does get a bit more complicated because they may lean different ways with respect to different needs.

In general, the type of advice I'll be giving in this sequence will tend to be more useful to heaven-oriented individuals, although I encourage people who are more enlightenment-oriented to follow along and take what they'd like from it.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-26T22:33:50.068Z · LW · GW


Comment by Matt Goldenberg (mr-hire) on How do you build resilient personal systems? · 2021-01-26T12:58:36.972Z · LW · GW

The original book on the Transtheoretical Model is still my go-to resource for this; it's called "Changing for Good" by James Prochaska and Carlo DiClemente. However, it's quite a commonly used model, especially in the treatment of addiction, and there's plenty of info online, including Wikipedia, probably WebMD, etc.

Forgiveness and procrastination: This study from Wohl et al:

That plan bot is cool, but the week time frame seems like an odd choice. For many habits, like New Year's resolutions, I find it takes longer than a week for them to fail, so I'd recommend mentally replacing that with something like 6 months.

Comment by Matt Goldenberg (mr-hire) on How do you build resilient personal systems? · 2021-01-25T14:21:24.042Z · LW · GW

Great question!

A few thoughts on this:

  1. In general, I like to use the stages of change model when trying to make a change. The research basically says that if people try to change when they're ready to change, they'll do it the first time, but if they try to change before they're ready, it will take multiple attempts.

For this reason, I try not to set action-based New Year's resolutions (it'd be really suspicious if all the changes I wanted to make suddenly moved into the "Action" stage on the 1st). Instead, I'll do something like a "Theme" for the year (this year it's "Full contact with reality") and then take stage-appropriate actions for that theme (thinking and reading during contemplation, planning during preparation, creating habits during action, etc.)

  2. MurphyJitsu is a great tool to use here. There are a bunch of good explanations on LW, but the basic tool is to imagine you failed, ask yourself why, then patch your approach until it would be very surprising if you failed.

  3. Learning to forgive yourself is HUGE here. Research says that people who forgive themselves for procrastinating are less likely to procrastinate in the future, and I'm pretty sure this generalizes. Expect adjustments and forgive yourself for needing to make them.

  4. If you're continually finding yourself with systems that don't stick, IME it's likely that you're fundamentally motivating yourself in a coercive way. You may want to read this post and sequence to begin to reorient your motivation system towards a more sustainable strategy:

Comment by Matt Goldenberg (mr-hire) on Everything Okay · 2021-01-24T13:16:20.704Z · LW · GW

Did you think the comment above missed something about that dynamic? I meant it to apply to interactions as well.

Comment by Matt Goldenberg (mr-hire) on Poll: Which variables are most strategically relevant? · 2021-01-23T20:21:42.161Z · LW · GW

When we say "new technology" in Wardley Mapping we're referring to a fundamentally new idea upon which new things can be built.

Only if AGI springs forth as soon as that new idea is created would it be in the custom-built stage. It's equally possible that AGI could arise from iterating on the new idea or from making it repeatable/practical/cost-effective.

An analogy would be if we were talking about FHT (Faster-than-Horse Technology). The exact moment we crossed the barrier of being faster than a horse might have been when a new technology was created, but it's equally possible that it would have come between one model of car and another, with no fundamentally new technology: just iterating on the existing technology and making the speed go up through experimentation, better understanding, or the ability to manufacture at higher scale.

Comment by Matt Goldenberg (mr-hire) on Everything Okay · 2021-01-23T15:18:55.098Z · LW · GW

It seems like not-ok mode is the mode of 'get others to see my panic and create a plan', whereas ok mode is the mode of 'create my own plan'. Interestingly, this seems almost the opposite of your model.

One could imagine a grid of three binary dimensions:

  • Sees reality clearly
  • Sees a problem in reality
  • Is in ok mode

It seems like the best place to be is yes in all 3, but it's probably better to be yes yes no than no no yes.