Posts

Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on The Best Software For Every Need · 2021-09-15T16:28:39.611Z · LW · GW

I personally have found FocusMe to be more flexible than Freedom and Self Control.

Comment by Matt Goldenberg (mr-hire) on Why didn't we find katas for rationality? · 2021-09-15T01:41:30.507Z · LW · GW

Possibly because katas aren't a very good framework for practice? Most functional martial arts that work in e.g. the UFC suggest, at the very least, practicing with a partner and gradually working up to live resistance. Martial arts that emphasize katas over live resistance tend not to be great for self-defense.

My guess is there's something similar going on with rationality. Any sort of kata that doesn't have a heavy component of "gradually working up to interacting with the real world" probably won't be very effective.

Comment by Matt Goldenberg (mr-hire) on [Review] Edge of Tomorrow (2014) · 2021-09-05T16:02:24.114Z · LW · GW

I once saw a fan theory that made me see the movie in a whole new light.


  1. How could Emily Blunt know that the transfusion made her lose her time travel powers? After all, she's never been able to test it.
  2. Therefore, how could Tom Cruise know that he's actually lost his time travel powers?
  3. Therefore, Blunt and Cruise are still in the time loop. They've "succeeded", but the moment one of them dies, they'll find themselves back at the beginning of the same day.

Comment by Matt Goldenberg (mr-hire) on We Live in an Era of Unprecedented World Peace · 2021-09-01T17:16:04.353Z · LW · GW

Agreed, it's more appropriate for making an entire species of animal go extinct though.

Comment by Matt Goldenberg (mr-hire) on We Live in an Era of Unprecedented World Peace · 2021-08-31T16:58:12.542Z · LW · GW

Right, but as far as I can tell the method by which we've been positive has been genocide?

Comment by Matt Goldenberg (mr-hire) on We Live in an Era of Unprecedented World Peace · 2021-08-31T01:17:08.906Z · LW · GW

Of course this analysis involves ignoring other sentient beings.  If we take their lives into account, we probably live in a time of unprecedented genocide.

Comment by Matt Goldenberg (mr-hire) on The Death of Behavioral Economics · 2021-08-25T17:41:40.335Z · LW · GW

It turns out that loss aversion does exist, but only for large losses. This makes sense. We *should* be particularly wary of decisions that can wipe us out. That's not a so-called "cognitive bias". It's not irrational. In fact, it's completely sensical. If a decision can destroy you and/or your family, it's sane to be cautious.


It sounds like much of loss aversion is just an intuitive use of the Kelly Criterion?
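
To make that intuition concrete, here's a minimal sketch (my own toy numbers and function names, nothing from the linked post): under log-wealth (Kelly) utility, a bet that risks a large fraction of your bankroll looks terrible even when its naive expected value stays positive.

```python
import math

def expected_log_growth(p_win: float, gain_frac: float, loss_frac: float) -> float:
    """Expected log-wealth growth of a bet that multiplies wealth by
    (1 + gain_frac) with probability p_win and by (1 - loss_frac) otherwise."""
    return p_win * math.log(1 + gain_frac) + (1 - p_win) * math.log(1 - loss_frac)

# 50% chance to double your stake; vary how much of the bankroll is at risk.
for loss_frac in (0.1, 0.5, 0.9, 0.99):
    naive_ev = 0.5 * 1.0 - 0.5 * loss_frac     # always positive here
    growth = expected_log_growth(0.5, 1.0, loss_frac)
    print(f"risk {loss_frac:.0%} of bankroll: EV={naive_ev:+.2f}, log-growth={growth:+.3f}")
```

Small losses barely dent the log-growth, but near-total losses make it sharply negative, which is roughly the "loss aversion only for large losses" pattern the post describes.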

Comment by Matt Goldenberg (mr-hire) on Slack Has Positive Externalities For Groups · 2021-08-02T18:53:46.590Z · LW · GW

and most of the rest of the time will be spent laying in bed doing nothing or timewasting, this person has the most slack. 

It depends on how much optionality the person has around changing this behavior. Often busy people have the most slack because they have the most agency to change their behavior.

Comment by Matt Goldenberg (mr-hire) on Beware over-use of the agent model · 2021-04-26T15:01:56.246Z · LW · GW

But what is a good second lens for looking at these conglomerations of atoms that exert power over the future? 


One interesting alternative I've been learning about recently is the Buddhist idea of "dependent origination". I'll give a brief summary of some thoughts I've had based on it, although these should definitely not be taken as an accurate representation of the actual dependent origination teaching.

The basic idea is that the delusion of agency (or in Buddhist terms, self) comes from the conglomeration of sensations (or sensors) and desires. This leads to a clinging to things that fulfill those desires, which leads to a need to pretend there is an agent that can fulfill those desires. This then leads to the creation of more things that desire and sense (babies, AIs, whatever), to whom we pass on the same delusions. We can view each of these as individual agents, or we can simply view the whole process as one thing, a perpetual cycle of cause and effect that Buddhists call dependent origination.

Comment by Matt Goldenberg (mr-hire) on Against "Context-Free Integrity" · 2021-04-15T11:06:14.067Z · LW · GW

I unironically think this is a great example of correctly doing the thing the OP is pointing at.

Comment by Matt Goldenberg (mr-hire) on "Taking your environment as object" vs "Being subject to your environment" · 2021-04-12T02:22:48.336Z · LW · GW

I think you may mean taking your environment as object? The typical idea behind a subject-object shift is that first you are subject to a lens, then you can take it as an object to look at.

Comment by Matt Goldenberg (mr-hire) on Clubhouse · 2021-03-15T03:51:51.863Z · LW · GW

Alright I'm convinced. Does anyone know if I can emulate this if I don't have an iPhone?

Comment by Matt Goldenberg (mr-hire) on The Comprehension Curve · 2021-02-23T17:04:07.104Z · LW · GW

So, in addition to raw comprehension rate, there's also what kind of knowledge you want to foster. This probably varies from text to text. In some cases it's best to absorb a broad base of material rapidly. In other cases it's more useful to get a really detailed understanding, questioning all of the author's conclusions, working through everything yourself.


I tried to take a stab at when to do which model in this post.

Comment by Matt Goldenberg (mr-hire) on The slopes to common sense · 2021-02-23T16:58:59.883Z · LW · GW

I think Gerald is making the point that perhaps the slope is asymmetric because the risk is asymmetric.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-23T16:46:08.124Z · LW · GW

Ahh that makes sense.

Comment by Matt Goldenberg (mr-hire) on What's your best alternate history utopia? · 2021-02-23T00:08:34.125Z · LW · GW

In particular, spaced repetition software with judicious use of machine learning on the backend. Technically this breaks the rule of "Same level of technology as today"


I'm pretty sure this already exists today in apps like Duolingo and Khan Academy; at the very least, the originator of spaced repetition wrote an article about it in 1998: https://www.supermemo.com/en/archives1990-2015/english/ol/nn_train

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T23:32:05.988Z · LW · GW

Yeah, I mean, it's pretty clear to me that when I'm talking about things that make me "cheerful", my feelings are fairly scope insensitive.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-21T19:37:46.978Z · LW · GW

It seems to me there was some causal factor that caused the switch to flip to me (maybe it was reading about UDT or something), and I should be seeking to cause that same causal factor in other similar brains.

Comment by Matt Goldenberg (mr-hire) on Oliver Sipple · 2021-02-21T03:12:10.943Z · LW · GW

How does this acausally increase their chances? I still don't get TDT; it just seems obvious that the only way this would increase their chances was if it somehow affected someone else through culture or something.

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T02:05:17.700Z · LW · GW

Ahh interesting, I replied with a potential counterexample to your attempted reconciliation, curious about your thoughts!

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-21T02:04:43.083Z · LW · GW

In my homo-economicus worldview, there exists a single price at which I'm exactly indifferent and then my cheerfulness goes up smoothly/continuously from there. It feels very arbitrary to pick something on that continuum and call it "the" cheerful price I have.


When I think about cheerful prices, I don't think this necessarily fits my experience.  For instance, in this comment, I talk about how even at absurdly high prices, I wouldn't be cheerful (even if I thought it was "worth it") because I would still be sad about the thing I was paying for. 

Comment by Matt Goldenberg (mr-hire) on “PR” is corrosive; “reputation” is not. · 2021-02-17T02:55:36.768Z · LW · GW

I get how honor is a spiritual concept but don't really get how reputation is. It seems like reputation is precisely the thing PR is concerned with while it ignores honor.

This is very confusing to me when Anna in the original post talks about "reputation", "honor", and "brand" as equivalent. Reputation and brand are precisely worrying about how others think of you (PR), whereas honor is about how you think of yourself.

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-14T04:25:40.429Z · LW · GW

Man, I love how much of a departure this is from "shut up and multiply". In many ways it's "stop multiplying and feel things". I would really love to see the synthesis of these two views (which is in many ways a "practical virtue ethics vs. practical utilitarianism" thing).

Comment by Matt Goldenberg (mr-hire) on Your Cheerful Price · 2021-02-13T22:50:24.050Z · LW · GW

I pondered the dog thing for a second before realizing that I wouldn't be cheerful even at trillions of dollars, because I would still be sad about my dog being tortured. This may be a way that "cheerful price" is much less psychologically damaging than "willingness to pay" (indeed Eliezer points towards this in the article).

I suspect the same is true of many people being paid to have sex with others.


(Also, I think there's supposed to be a norm about using torture in thought experiments, so there's that).

Comment by Matt Goldenberg (mr-hire) on A great hard day · 2021-02-11T16:08:50.061Z · LW · GW

Saving this for the next "rationalists don't win" argument.

I think this story is common in the rationalist community; going from "life is awful" to "life is great" certainly counts as a win in my book.

Comment by Matt Goldenberg (mr-hire) on Why I Am Not in Charge · 2021-02-09T11:41:10.845Z · LW · GW

My understanding of utility function is that it's impossible not to have one, even if the function is implicit, subconscious, or something that wouldn't be endorsed if it could be stated explicitly.

My understanding is that a utility function implies consistent and coherent preferences. Humans definitely don't have that; our preferences are inconsistent and subject to, for instance, framing effects.
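
As a minimal illustration of the "consistent and coherent" requirement (my own toy example, not from the thread): a utility function has to rank outcomes on a single scale, so intransitive preferences, the kind framing effects readily produce, have no utility representation at all.

```python
from itertools import permutations

def has_utility_representation(prefs) -> bool:
    """A utility function exists iff some strict ranking of the items
    agrees with every pairwise preference (a, b), read as 'a over b'."""
    items = {x for pair in prefs for x in pair}
    for order in permutations(items):              # order[0] is ranked best
        rank = {item: -i for i, item in enumerate(order)}
        if all(rank[a] > rank[b] for a, b in prefs):
            return True
    return False

print(has_utility_representation({("A", "B"), ("B", "C")}))              # True
print(has_utility_representation({("A", "B"), ("B", "C"), ("C", "A")}))  # False: a cycle
```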

Comment by Matt Goldenberg (mr-hire) on Fake Frameworks for Zen Meditation (Summary of Sekida's Zen Training) · 2021-02-07T17:55:33.076Z · LW · GW

Thanks. One thing I didn't quite understand in the post is why making attention more regular through diaphragmatic breathing causes the 2nd and 3rd nen to stop being produced (if indeed those two things are linked).

Does Sekida have a framework for why the other nen fall away?

Comment by Matt Goldenberg (mr-hire) on Is the world becoming better? · 2021-02-07T17:49:52.556Z · LW · GW

I was surprised that this post didn't talk about inequality, which is a common metric used to show the world is getting worse.

The research on happiness seems to suggest that our life satisfaction is often based not solely on our standard of living, but on our standard of living relative to our peers.

This suggests a few things:

  1. If everyone's standard of living rises together, it won't do much to raise happiness.
  2. If a few people's standards of living rise more than others', this will make people less happy on average.
  3. If we perceive more people with a higher standard of living than us, our happiness will go down (and if we perceive more people with a worse standard of living than us, it will go up).

Comment by Matt Goldenberg (mr-hire) on The 10,000-Hour Rule is a myth · 2021-02-06T01:23:50.396Z · LW · GW

Yeah, I think this passes the common sense test as well. It'd be quite suspicious if it took exactly 10,000 hours to reach the top of any discipline, regardless of the relative competitiveness or difficulty of different disciplines.

On the other hand, I think frontier's point is good as well. If you don't have any data, it's reasonable to use the average as a rule of thumb.  I think the real point of Gladwell's 10,000 hour rule is "It's almost certainly going to take a ton of practice to become an expert at the thing, and you should expect and relish that."

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-06T01:17:44.612Z · LW · GW

I mean, I think that's a valid view of utopia, but my point here is that it's a very specific one.

It doesn't seem like you're making an effort to actually engage with or understand what I'm saying here?

Comment by Matt Goldenberg (mr-hire) on Reflections on the cryonics sequence · 2021-02-05T03:53:28.568Z · LW · GW

But I think I stand by the choice to do what I did with the sequence. I said in the very first post that I was writing for "people who already think signing up for cryonics is a good idea but are putting it off because they're not sure what they actually need to do next", and I think having that narrower mission let me write a better sequence overall.

Yeah, I mean, I guess I don't have a horse in this race; if you think it's best and you feel good about the sequence, that's what matters.

Three months ago, I had a dream my mom died while cryocrastinating, and I decided to finally start the process.  It took me about five years after first encountering the idea on LessWrong to feel comfortable enough with it that I wanted to sign up. I still think it's an incredibly long shot, and I'm probably just using it the way many people use religion – to stave off my crippling fear of death. 

FWIW, I think this is my favorite explanation/walkthrough of a reason to sign up for cryonics that I've heard. I think that even as rationalists who know better, it's often easy to get caught in the mindset that "being rational is ignoring emotions when making decisions."

But of course, emotions are part of reality, and ignoring them is exceedingly irrational. I loved this honest, emotion-acknowledging take on cryonics and felt a personal "yes" when I imagined it introducing the sequence. Combining a very emotional reason/story with a later, very dry, fact-based sequence just feels aesthetically beautiful to me. So that may explain where I was coming from, while acknowledging you may be coming from a very different place.

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T20:39:27.551Z · LW · GW

This assumes that the only valid morals are ones that concern only yourself and not other people. As stated in the last comment, this only works for moral systems that prize freedom/autonomy above all other values.

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T09:03:30.782Z · LW · GW

No, I did mean specific. For instance, if people have specific morals, they may see as dystopia anything which allows behavior outside of those morals. Only a value system that puts freedom ahead of most or all other values would have this as a constraint.

I'm not saying it is right or wrong but it is specific.

Comment by Matt Goldenberg (mr-hire) on Speaking of the efficiency of utopia · 2021-02-04T00:41:44.754Z · LW · GW

Utopia is where anything mathematically possible is physically possible within finite time without destroying said utopia.

This seems like quite a specific vision of utopia.

Comment by Matt Goldenberg (mr-hire) on Reflections on the cryonics sequence · 2021-02-04T00:40:08.830Z · LW · GW

I think the story you put at the top could be an excellent addition to the sequence itself.  The sequence does a great job at "How" and "What" but less so on "Why".  I think talking about your own story and motivations could help people connect to the rest of the sequence and potentially help propel them through the difficult process.

Comment by Matt Goldenberg (mr-hire) on Jimrandomh's Shortform · 2021-02-02T03:40:02.717Z · LW · GW

And they will also be able to do the opposite, placing ads over scenic vistas.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2021-01-29T16:20:09.659Z · LW · GW

It sort of seems like Predictive Processing provides a grounded foundation for the simulation argument.

Comment by Matt Goldenberg (mr-hire) on What are some real life Inadequate Equilibria? · 2021-01-29T16:07:58.073Z · LW · GW

Just to clarify, what is allowed here? I can think of tons of scenarios where there are probably better equilibria (e.g. nuclear disarmament, no starving people, etc.), in which clearly the current state is not optimal and there's some other theoretical equilibrium that's more optimal.

Comment by Matt Goldenberg (mr-hire) on How can I find trustworthy dietary advice? · 2021-01-29T02:29:36.748Z · LW · GW

I recommend the book "The Renaissance Diet 2.0" which does a competent job summarizing the science.

Comment by Matt Goldenberg (mr-hire) on Are the consequences of groups usually highly contingent on their details? · 2021-01-29T01:52:20.475Z · LW · GW

There are other reasons that I think we're not in that world, among them serial entrepreneurs being much more successful than usual with such tactical successes.

Comment by Matt Goldenberg (mr-hire) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T15:42:03.340Z · LW · GW

I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

I'm fairly skeptical of this claim. It seems to me that even moderate differences in animal intelligence in, e.g., dogs lead to things like tool use and a better ability to communicate things to humans.

Comment by Matt Goldenberg (mr-hire) on Has anyone on LW written about material bottlenecks being the main factor in making any technological progress? · 2021-01-28T15:39:24.501Z · LW · GW

Do we know of materials that could make a good Dyson sphere?

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T04:32:49.388Z · LW · GW

But then if so much flexibility is possible, what is even producing this distinction between enlightenment and heaven approaches?

My guess is that there are attractors in this broad space, similar to other personality differences.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T02:20:32.882Z · LW · GW

Sure, we might find preferences, but those preferences must themselves be the result of these same brain processes over which the preferences operate,

Why must they?  Surely it's possible there are parts of the mind that are influenced by other processes outside of the predictive processing components? 

It's pretty clear to me for instance that people act differently when on psychedelics not because somehow they're making a prediction about what will happen when they're on psychedelics, but because it's actually changing the way in which the brain accesses and makes those predictions.  So it's not hard to imagine other chemicals in people's brains operating at different biological set points fundamentally altering the way their brains would like to update.  Not to mention biological brain differences,  etc.

These could be starting dispositions that can then be changed, but I don't see a principled reason why that should be the case.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-28T01:11:19.116Z · LW · GW

Ahh, yup, I must have forgotten to edit that when I ported it over from Twitter.


Sounds downright alien, this guilt thing and all this obsession with shoulds and musts.
Or worrying about meeting expectations from boss/God/parents/whatever.
It sounds rather exhausting.

Yeah, it's very possible that you don't experience it. 

It's also possible it's there for you, but in shadow, as talked about in the article. Might be worth spending 10 minutes probing feelings around obligations to see if any sense of "not wanting to look" or "attention being yanked away" comes up, as that's a good sign that there's something there you don't want to acknowledge. 

In general, though, not everyone experiences these sorts of feelings, so it's equally possible you're one of them.

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T19:24:23.662Z · LW · GW

I think of the human brain as primarily performing the activity of minimizing prediction errors. That's not literally all it does in that "prediction error" is a weird way to talk about what happens in feedback loops where the "prediction" is some fixed setpoint not readily subject to update based on learning information (e.g. setpoints for things related to survival like eating enough calories).

I tend to think that there are several of these, some of which relate to deeper emotional needs, which I think is an important distinction.

if we try to choose between these approaches, which is that it depends on the notion of there being some important distinction between minimizing prediction error one way or another. 

I'm stating that in different minds, it sure looks to me like there is indeed a fundamental preference for different ways of minimizing prediction error. I tend to call this "heaven" or "enlightenment" orientation, although I think it's quite correlated with what I've heard called "masculine" or "feminine" orientation.
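
A toy sketch of what I mean by "different ways of minimizing prediction error" (my own illustrative formalism, nothing from the post): the same error signal can be driven to zero either by acting on the world or by updating the prediction.

```python
def heaven_step(world: float, prediction: float, rate: float = 0.5):
    """Minimize error by acting on the world so it moves toward the prediction."""
    return world + rate * (prediction - world), prediction

def enlightenment_step(world: float, prediction: float, rate: float = 0.5):
    """Minimize error by updating the prediction so it moves toward the world."""
    return world, prediction + rate * (world - prediction)

world, prediction = 0.0, 1.0
for _ in range(10):
    world, prediction = heaven_step(world, prediction)  # swap in enlightenment_step: same error decay
print(abs(prediction - world))  # ~0.001, approaching zero either way
```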

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T19:21:56.358Z · LW · GW

What I mean by perfectionism is a desire for a certain unusually high level of challenge and thoroughness. It's not about high valuation according to a more abstract or otherwise relevant measure/goal. So making a process "more perfect" in this sense means bringing challenge and thoroughness closer to the emotionally determined comfortable levels (in particular, it might involve making something less challenging if it was too challenging originally). The words "more perfect" aren't particularly apt for this idea.

Ahh interesting, thanks for sharing!

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T14:23:12.399Z · LW · GW

Do you think that the process by which you get to rarely encountered unimportant stuff is perfect, or could you bring more perfection to the process?

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T13:03:42.709Z · LW · GW

Ahh, if it wasn't clear: when I say less effort, I don't mean "effort averaged over time", but less absolute effort (which in your case means spending less time).

Comment by Matt Goldenberg (mr-hire) on Non-Coercive Perfectionism · 2021-01-27T13:01:55.286Z · LW · GW

Thanks for sharing! I think what you're describing is another common cause of procrastination! IME it's usually experienced as overwhelm or ambiguity, rather than perfectionism, and it will be the subject of another article.

To be clear, I'm not invalidating that you experience this underlying fear on the surface as perfectionism; that's just not how it has presented itself to the people I've worked with.