Posts

Good ways to monetarily profit from the increasing demand for power? 2024-06-10T15:29:13.418Z
Mechanism for feature learning in neural networks and backpropagation-free machine learning models 2024-03-19T14:55:59.296Z
Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on Rethink Wellbeing’s Year 2 Update: Foster Sustainable High Performance for Ambitious Altruists · 2024-12-08T16:42:42.816Z · LW · GW

Excellent work! Thanks for what you do

Comment by Matt Goldenberg (mr-hire) on o1 tried to avoid being shut down · 2024-12-05T21:17:14.205Z · LW · GW

fwiw while it's fair to call this "heavy nudging", this mirrors exactly what my prompts for agentic workflows look like. I have to repeat things like "Don't DO ANYTHING YOU WEREN'T ASKED" multiple times to get them to work consistently.
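
For illustration, a prompt for an agentic workflow with that kind of repeated nudging might be assembled like this (the wording and structure are invented for the example, not the author's actual prompts):

```python
# Illustrative only: a system prompt for a coding agent that repeats its
# core constraint several times, since agents tend to drift otherwise.
AGENT_SYSTEM_PROMPT = "\n".join([
    "You are an automated coding agent.",
    "Rule 1: Only edit the files you are explicitly asked to edit.",
    "Rule 2: DON'T DO ANYTHING YOU WEREN'T ASKED TO DO.",
    "Rule 3: Before each action, re-read the task and confirm it was requested.",
    "Reminder: don't do anything you weren't asked to do.",
    "Final reminder: ONLY do what the task asks. Nothing else.",
])

def build_messages(task: str) -> list[dict]:
    """Assemble a chat-style message list with the repeated constraints."""
    return [
        {"role": "system", "content": AGENT_SYSTEM_PROMPT},
        {"role": "user", "content": task},
    ]
```

The redundancy is the point: the same constraint appears near the top, the middle, and the end of the system prompt.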

Comment by Matt Goldenberg (mr-hire) on Spaciousness In Partner Dance: A Naturalism Demo · 2024-12-05T18:13:22.805Z · LW · GW

I found this post to be incredibly useful to get a deeper sense of Logan's work on naturalism.

I think his work on Naturalism is a great and unusual example of original research happening in the rationality community, and of what actually investigating rationality looks like.

Comment by mr-hire on [deleted post] 2024-12-05T18:03:09.815Z

Emailed you.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-12-05T17:47:18.106Z · LW · GW

In my role as Head of Operations at Monastic Academy, every person in the organization is on a personal improvement plan that addresses the personal responsibility level, and each team in the organization is responsible for process improvements that address the systemic level.

In the performance improvement weekly meetings, my goal is to constantly bring them back to the level of personal responsibility.  Any time they start saying the reason they couldn't meet their improvement goal was because of X event or Y person, I bring it back. What could THEY have done differently, what internal psychological patterns prevented them from doing that, and what can they do to shift those patterns this week.

Meanwhile, each team also chooses process improvements weekly.  In those meetings, my role is to do the exact opposite, and bring it back to the level of process.  Any time they're examining a team failure and come to the conclusion "we just need to prioritize it more, or try harder, or the manager needs to hold us to something",  I bring it back to the level of process. How can we change the order or way we do things, or the incentives involved, such that it's not dependent on any given person's ability to work hard or remember or be good at a certain thing.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-12-05T15:57:10.865Z · LW · GW

Personal responsibility and systemic failure are different levels of abstraction.

If you're within the system and doing horrible things while saying, "🤷 It's just my incentives, bro," you're essentially allowing the egregore to control you, letting it shove its hand up your ass and pilot you like a puppet.

At the same time, if you ignore systemic problems, you're giving the egregore power by pretending it doesn't exist—even though it’s puppeting everyone. By doing so, you're failing to claim your own power, which lies in recognizing your ability to work towards systemic change.

Both truths coexist:

  • There are those perpetuating evil by surrendering their personal responsibility to an evil egregore.
  • There are those perpetuating evil by letting the egregore run rampant and denying its existence.

The solution requires addressing both levels of abstraction.

Comment by Matt Goldenberg (mr-hire) on My Model Of EA Burnout · 2024-12-05T14:33:28.896Z · LW · GW

I think the model of "Burnout as shadow values" is quite important and load-bearing in my own model of working with many EAs/Rationalists.  I don't think I first got it from this post, but I'm glad to see it written up so clearly here.

Comment by mr-hire on [deleted post] 2024-12-03T18:25:39.010Z

An easy, quick way to test is to offer some free coaching in this method.

Comment by mr-hire on [deleted post] 2024-12-01T16:31:53.177Z

Can you say more about how you've used this personally or with clients? What approaches did you try that didn't work, and how has this changed over time, if at all, to become more effective?

There's a lot here that's interesting, but it's hard for me to tell from just your description how battle-tested this is

Comment by Matt Goldenberg (mr-hire) on Eli's shortform feed · 2024-11-26T23:52:38.855Z · LW · GW

What would the title be?

Comment by Matt Goldenberg (mr-hire) on Counting AGIs · 2024-11-26T22:02:47.565Z · LW · GW

I still don't quite get it. We already have an Ilya Sutskever who can make type 1 and type 2 improvements, and we don't see the sort of jumps in days you're talking about (I mean, maybe we do, and they just look discontinuous because of the release cycles?)

Comment by Matt Goldenberg (mr-hire) on Counting AGIs · 2024-11-26T18:48:45.174Z · LW · GW

Why do you imagine this? I imagine we'd get something like one Einstein from such a regime, which would maybe speed up timelines relative to existing AI labs by 1.2x or something? Eventually this gain compounds, but I imagine that could be relatively slow and smooth, with the occasional discontinuous jump when something truly groundbreaking is discovered

Comment by Matt Goldenberg (mr-hire) on Two flavors of computational functionalism · 2024-11-26T18:23:58.011Z · LW · GW

Right, and per the second part of my comment - insofar as consciousness is a real phenomenon, there's an empirical question of whether whatever frame-invariant definition of computation you're using is the correct one.

Comment by Matt Goldenberg (mr-hire) on You are not too "irrational" to know your preferences. · 2024-11-26T16:27:34.737Z · LW · GW

Do you think wants that arise from conscious thought processes are equally valid to wants that arise from feelings? How do you think about that?

Comment by Matt Goldenberg (mr-hire) on Counting AGIs · 2024-11-26T16:03:53.192Z · LW · GW

while this paradigm of 'training a model that's an agi, and then running it at inference' is one way we get to transformative agi, i find myself thinking that that probably WON'T be the first transformative AI, because my guess is that there are lots of tricks using lots of compute at inference to get not-quite-transformative ai to transformative ai.

my guess is that getting to that transformative level is gonna require ALL the tricks and compute, and will therefore eke out being transformative BY utilizing all those resources.

one of those tricks may be running millions of copies of the thing in an agentic swarm, but i would expect that to be merely a form of inference time scaling, and therefore wouldn't expect ONE of those things to be transformative AGI on its own.

and i doubt that these tricks can funge against train-time compute, as you seem to be assuming in your analysis.  my guess is that you hit diminishing returns for various types of train compute, then diminishing returns for various types of inference compute, and that we'll get to a point where we need to push both of them that far to get transformative ai

Comment by Matt Goldenberg (mr-hire) on Two flavors of computational functionalism · 2024-11-26T15:36:52.512Z · LW · GW

This seems arbitrary to me. I'm bringing in bits of information on multiple layers when I write a computer program to calculate the thing and then read out the result from the screen

Consider, if the transistors on the computer chip were moved around, would it still process the data in the same way and yield the correct answer?

Yes under some interpretation, but no from my perspective, because the right answer is about the relationship between what I consider computation and how I interpret the results I'm getting


But the real question for me is - under a computational perspective of consciousness, are there features of this computation that actually correlate to strength of consciousness? Does any interpretation of computation get equal weight? We could nail down a precise definition of what we mean by consciousness that we agreed on that didn't have the issues mentioned above, but who knows whether that would be the definition that actually maps to the territory of consciousness?

Comment by Matt Goldenberg (mr-hire) on Two flavors of computational functionalism · 2024-11-25T13:52:59.079Z · LW · GW

For me the answer is yes. There's some way of interpreting the colors of grains of sand on the beach as they swirl in the wind that would perfectly implement the Miller-Rabin primality test algorithm. So is the wind + sand computing the algorithm?
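
As an aside for readers, the Miller-Rabin primality test named above is simple enough to sketch; here is a minimal Python version (deterministic for n below roughly 3.2 billion with this fixed witness set):

```python
def miller_rabin(n: int, bases=(2, 3, 5, 7)) -> bool:
    """Miller-Rabin primality test; deterministic for n < 3,215,031,751
    with witnesses 2, 3, 5, 7."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Whether a suitable mapping from sand colors to these intermediate states counts as "running" this is exactly the question the comment is raising.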

Comment by Matt Goldenberg (mr-hire) on Which things were you surprised to learn are not metaphors? · 2024-11-23T01:39:42.257Z · LW · GW

No, people really do see it, that wispiness can be crisp and clear

I'm not the most visual person. But occasionally when I'm reading I'll start seeing the scene. I then get jolted out of it when I realize I don't know how I'm seeing the words as they've been replaced with the imagined visuals

Comment by Matt Goldenberg (mr-hire) on Which things were you surprised to learn are not metaphors? · 2024-11-22T17:36:07.056Z · LW · GW

I used to think "getting lost in your eyes" was a metaphor, until I made eye contact with a particularly beautiful woman in college and found myself losing track of where I was and what I was doing.

Comment by Matt Goldenberg (mr-hire) on Which things were you surprised to learn are metaphors? · 2024-11-22T17:33:26.408Z · LW · GW

Tad James has a fascinating theory called timeline therapy. In it, he explores how different people represent their timelines and his theory about how shifting those representations will change fundamental ways you relate to the world.

Comment by Matt Goldenberg (mr-hire) on AI #91: Deep Thinking · 2024-11-21T21:36:22.862Z · LW · GW

fwiw i think that your first sentence makes sense, but your second sentence doesn't explain why

i think people OBVIOUSLY have a sense of what meaning is, but it's really hard to describe

Comment by Matt Goldenberg (mr-hire) on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T20:17:23.987Z · LW · GW

ah that makes sense

in my mind this isn't resources flowing to elsewhere, it's either:

  1. An emotional learning update
  2. A part of you that hasn't been getting what it wants speaking up.

Comment by Matt Goldenberg (mr-hire) on StefanHex's Shortform · 2024-11-19T19:50:13.390Z · LW · GW

this is great, thanks for sharing

Comment by Matt Goldenberg (mr-hire) on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-19T19:32:24.686Z · LW · GW

in my model that happens through local updates, rather than a global system

for instance, if i used my willpower to feel my social anxiety completely (instead of the usual strategy of suppression) while socializing, i might get some small or large reconsolidation updates to the social anxiety, such that that part thinks it's needed in fewer situations or not at all

alternatively, the part that has the strategy of going to socialize and feeling confident may gain some more internal evidence, so it wins the internal conflict slightly more (but the internal conflict is still there and causes a drain)

i think the sort of global evaluation you're talking about is pretty rare, though something like it can happen when someone e.g. reaches a deep state of love through meditation, and is then able to access lots of their unloved parts that are downstream TRYING to get to that love, and suddenly a big shift happens to the whole system simultaneously (another type of global reevaluation can take place through reconsolidating deep internal organizing principles like fundamental ontological constraints or attachment style)

Comment by Matt Goldenberg (mr-hire) on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-18T16:43:30.527Z · LW · GW

also, this 'subconscious parts going on strike' theory makes slightly different predictions than the 'is it good for the whole system/live' theory

for instance, i predict that you can have 'dead parts' that e.g. give people social anxiety based on past trauma, even though it's no longer actually relevant to their current situation.

and that if you override this social anxiety using 'live willpower' for a while, you can get burnout, even though the willpower is in some sense 'correct' about what would be good for the overall flourishing of the system given the current reality.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-18T14:11:18.939Z · LW · GW

A lot of people are looking at the implications of o1's training process as a future scaling paradigm, but this implementation of applying inference-time compute to fine-tune the model just in time for hard questions seems equally promising to me: it may have equally impressive results if it scales with compute, and it has just as much low-hanging fruit left to be picked.

Don't sleep on test time training as a potential future scaling paradigm.

Comment by Matt Goldenberg (mr-hire) on Ayn Rand’s model of “living money”; and an upside of burnout · 2024-11-17T17:14:49.013Z · LW · GW

I often talk with clients about burnout as your subconscious/parts 'going on strike' because you've ignored them for too long

I never made the analogy to Atlas Shrugged and the live money leaving the dead money because it wasn't actually tending to the needs of the system, but now you've got me thinking

Comment by Matt Goldenberg (mr-hire) on Is the Power Grid Sustainable? · 2024-11-17T17:02:33.715Z · LW · GW

really, say more?

Comment by Matt Goldenberg (mr-hire) on Boring & straightforward trauma explanation · 2024-11-11T19:31:44.206Z · LW · GW

Another definition along the same vein:

Trauma is overgeneralization of emotional learning.

Comment by Matt Goldenberg (mr-hire) on Should CA, TX, OK, and LA merge into a giant swing state, just for elections? · 2024-11-08T19:30:15.082Z · LW · GW

A real life use for smart contracts 😆

Comment by Matt Goldenberg (mr-hire) on Current safety training techniques do not fully transfer to the agent setting · 2024-11-08T15:30:54.025Z · LW · GW

However, this would not address the underlying pattern of alignment failing to generalize.


Is there proof that this is an overall pattern? It would make sense that models are willing to do things they're not willing to talk about, but that doesn't mean there's a general pattern that e.g. they wouldn't be willing to talk about things, and wouldn't be willing to do them, but WOULD be willing to do some secret third option.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-05T21:32:52.706Z · LW · GW

I don't remember them having the actual stats, though I'm not going to watch it again to check. I wonder if they published those elsewhere

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-05T19:01:44.116Z · LW · GW

They replicated it within the video itself?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-04T18:42:55.762Z · LW · GW

Enjoyed this video by Veritasium with data showing how Politics is the Mind Killer

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-04T00:32:04.872Z · LW · GW

I'll send round 2 out to you when I've narrowed things down. Right now I'm looking for gut-check system 1 decisions, and if you have trouble doing that I'd recommend waiting.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-03T21:50:28.429Z · LW · GW

Want to help me out?

Vote on the book cover for my new book!

It'll be up for a couple of days. The contest website only gives me a few days before I have to pick finalists.

https://form.jotform.com/243066790448060

Comment by Matt Goldenberg (mr-hire) on The hostile telepaths problem · 2024-10-29T11:38:50.590Z · LW · GW

IME you can usually see in someone's face or body when they have a big release, just from the release of tension.

But I think it's harder to distinguish this from other hypotheses I've heard like "negative emotions are stored in the tissues" or "muscular tension is a way of stabilizing intentions."

Comment by Matt Goldenberg (mr-hire) on johnswentworth's Shortform · 2024-10-28T22:28:11.530Z · LW · GW

Oh yes, if you're going on people's words, it's obviously not much better, but the whole point of vibing is that it's not about the words.  Your aesthetics, vibes, the things you care about will be communicated non-verbally.

Comment by Matt Goldenberg (mr-hire) on johnswentworth's Shortform · 2024-10-27T20:47:41.329Z · LW · GW

I predict you would enjoy the free-association game more if you cultivated the skill of vibing.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-12T15:44:20.152Z · LW · GW

Yes, this is an excellent point I didn't get across in the post above.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-12T15:41:15.847Z · LW · GW

Yes, if people were using Wikipedia in the way they are using the LLMs.

In practice that doesn't happen though, people cite Wikipedia for facts but are using LLMs for judgement calls.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T21:23:32.750Z · LW · GW

Of course a random person is biased. Some people will have more authority than others, and we'll trust them more, and argument screens off authority.

What I don't want people to do is give ChatGPT or Claude authority. Give it to the wisest people you know, not Claude.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T19:40:22.437Z · LW · GW

What they're saying is I got a semi-objective answer fast. 

Exactly. Please stop saying this. It's not semi-objective. The trend of casually treating LLMs as an arbiter of truth leads to moral decay.

I doubt the orgs got much of their own bias into the RLHF/RLAIF process

This is obviously untrue; orgs spend lots of effort making sure their AI doesn't say things that would give them bad press, for example.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T19:29:33.278Z · LW · GW

I desperately want people to stop using "I asked Claude or ChatGPT" as a stand-in for "I got an objective third party to review"

LLMs are not objective.  They are trained on the internet which has specific sets of cultural, religious, ideological biases, and then further trained via RL to be biased in a way that a specific for-profit entity wanted them to be.

Comment by Matt Goldenberg (mr-hire) on [Completed] The 2024 Petrov Day Scenario · 2024-09-27T11:04:53.377Z · LW · GW

I happened to log on at that time and thought someone had launched a nuke

Comment by Matt Goldenberg (mr-hire) on Pay-on-results personal growth: first success · 2024-09-17T16:19:50.962Z · LW · GW

So far I’m seeing data that’s strongly in favor of it being easy for me to facilitate rapid growth for many people in this space. But am I missing something here? If you have any ideas please let me know in the comments.

My take:

You can facilitate rapid growth in these areas.

I don't think you're particularly unique in this regard.  There are several people I know (myself included) who can create these sorts of rapid changes on a semi-consistent basis. You named a few as reviewers.  There are far more coaches/therapists who are ineffective, but lots of highly effective practitioners can create rapid change using experiential methods.

@PJ_Eby @Kaj_Sotala @Damon Sasi all come to mind as people on LW who can do this.  Having worked with many coaches and therapists, I assure you that many others also have the skill.

Right now I think you're overestimating just how consistent what you do is, and the results focus you're taking is likely creating other negative effects in the psyche that will have to be cleaned up later.  It will also mean that if you don't get to the issue in the first session, it will be harder and harder for your work to have an impact over time.

But in general the approach you're taking can and will create rapid results in some people that haven't seen results before.   

Comment by Matt Goldenberg (mr-hire) on What are examples of someone doing a lot of work to find the best of something? · 2024-09-15T00:59:12.495Z · LW · GW

I've really been enjoying Charlie Anderson's YouTube channel for this reason, trying to find the absolute best way to make pizza.

https://youtube.com/@charlieandersoncooking?si=uhpLcNDyE7jLbTMY

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-09-13T16:48:34.357Z · LW · GW

It seems like the obvious thing to do with a model like o1 trained on reasoning through problems would be to train it to write code that helps it solve reasoning problems.

Perhaps the idea was to not give it this crutch so it could learn those reasoning skills without the help of code.

But it seems from the examples that while it's great at high-level reasoning and figuring out where it went wrong, it still struggles with basic things like counting, which would be easily solved if it had the instinct to write code in the areas where it's likely to get tripped up.
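
The code-writing crutch described above, offloading a counting subtask the model fumbles to a trivial program, might look like this (a hypothetical illustration, not anything o1 actually does):

```python
# Hypothetical sketch: rather than "counting in its head", a model with
# the right instinct would emit and run a tiny program for counting-type
# sub-problems, where code is reliable and token-level reasoning is not.
def count_letter(text: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in text."""
    return sum(1 for ch in text.lower() if ch == letter.lower())
```

For example, `count_letter("strawberry", "r")` handles the classic counting task that trips models up.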

Comment by Matt Goldenberg (mr-hire) on How I got 4.2M YouTube views without making a single video · 2024-09-03T18:19:32.776Z · LW · GW

Sorta surprised that this got so many upvotes with the clickbaity title, which goes against norms around here

Otherwise the content seems good

Comment by Matt Goldenberg (mr-hire) on the Giga Press was a mistake · 2024-08-24T14:05:06.411Z · LW · GW

I'm not talking about 10 year time horizons no