Posts

Good ways to monetarily profit from the increasing demand for power? 2024-06-10T15:29:13.418Z
Mechanism for feature learning in neural networks and backpropagation-free machine learning models 2024-03-19T14:55:59.296Z
Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value - Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject-Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on Boring & straightforward trauma explanation · 2024-11-11T19:31:44.206Z · LW · GW

Another definition along the same vein:

Trauma is overgeneralization of emotional learning.

Comment by Matt Goldenberg (mr-hire) on Should CA, TX, OK, and LA merge into a giant swing state, just for elections? · 2024-11-08T19:30:15.082Z · LW · GW

A real life use for smart contracts 😆

Comment by Matt Goldenberg (mr-hire) on Current safety training techniques do not fully transfer to the agent setting · 2024-11-08T15:30:54.025Z · LW · GW

However, this would not address the underlying pattern of alignment failing to generalize.


Is there proof that this is an overall pattern? It would make sense that models are willing to do things they're not willing to talk about, but that doesn't mean there's a general pattern where, e.g., they wouldn't be willing to talk about things, and wouldn't be willing to do them, but WOULD be willing to do some secret third option.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-05T21:32:52.706Z · LW · GW

I don't remember them having the actual stats, though I'm not watching it again to check. I wonder if they published those elsewhere.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-05T19:01:44.116Z · LW · GW

They replicated it within the video itself?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-04T18:42:55.762Z · LW · GW

Enjoyed this video by Veritasium with data showing how Politics is the Mind-Killer.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-04T00:32:04.872Z · LW · GW

I'll send round 2 out to you when I've narrowed things down. Right now I'm looking for gut-check, System 1 decisions, and if you have trouble making those I'd recommend waiting.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-11-03T21:50:28.429Z · LW · GW

Want to help me out?

Vote on the book cover for my new book!

It'll be up for a couple of days. The contest website only gives me a few days before I have to pick finalists.

https://form.jotform.com/243066790448060

Comment by Matt Goldenberg (mr-hire) on The hostile telepaths problem · 2024-10-29T11:38:50.590Z · LW · GW

IME you can usually see in someone's face or body when they have a big release, just from the release of tension.

But I think it's harder to distinguish this from other hypotheses I've heard like "negative emotions are stored in the tissues" or "muscular tension is a way of stabilizing intentions."

Comment by Matt Goldenberg (mr-hire) on johnswentworth's Shortform · 2024-10-28T22:28:11.530Z · LW · GW

Oh yes, if you're going on people's words, it's obviously not much better, but the whole point of vibing is that it's not about the words.  Your aesthetics, vibes, the things you care about will be communicated non-verbally.

Comment by Matt Goldenberg (mr-hire) on johnswentworth's Shortform · 2024-10-27T20:47:41.329Z · LW · GW

I predict you would enjoy the free-association game better if you cultivated the skill of vibing more.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-12T15:44:20.152Z · LW · GW

Yes, this is an excellent point I didn't get across in the post above.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-12T15:41:15.847Z · LW · GW

Yes, if people were using Wikipedia in the way they are using the LLMs.

In practice that doesn't happen, though: people cite Wikipedia for facts but use LLMs for judgment calls.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T21:23:32.750Z · LW · GW

Of course a random person is biased. Some people will have more authority than others, and we'll trust them more - and argument screens off authority.

What I don't want people to do is give ChatGPT or Claude authority. Give it to the wisest people you know, not Claude.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T19:40:22.437Z · LW · GW

What they're saying is I got a semi-objective answer fast.

Exactly. Please stop saying this. It's not semi-objective. The trend of casually treating LLMs as arbiters of truth leads to moral decay.

I doubt the orgs got much of their own bias into the RLHF/RLAIF process

This is obviously untrue; orgs spend lots of effort making sure their AI doesn't say things that would give them bad press, for example.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-10-11T19:29:33.278Z · LW · GW

I desperately want people to stop using "I asked Claude or ChatGPT" as a stand-in for "I got an objective third party to review"

LLMs are not objective. They are trained on the internet, which has specific sets of cultural, religious, and ideological biases, and then further trained via RL to be biased in the way a specific for-profit entity wanted them to be.

Comment by Matt Goldenberg (mr-hire) on [Completed] The 2024 Petrov Day Scenario · 2024-09-27T11:04:53.377Z · LW · GW

I happened to log on at that time and thought someone had launched a nuke

Comment by Matt Goldenberg (mr-hire) on Pay-on-results personal growth: first success · 2024-09-17T16:19:50.962Z · LW · GW

So far I’m seeing data that’s strongly in favor of it being easy for me to facilitate rapid growth for many people in this space. But am I missing something here? If you have any ideas please let me know in the comments.

My take:

You can facilitate rapid growth in these areas.

I don't think you're particularly unique in this regard. There are several people I know (myself included) who can create these sorts of rapid changes on a semi-consistent basis; you named a few as reviewers. There are far more coaches/therapists who are ineffective, but also lots of highly effective practitioners who can create rapid change using experiential methods.

@PJ_Eby @Kaj_Sotala @Damon Sasi all come to mind as people on LW who can do this.  Having worked with many coaches and therapists, I assure you that many others also have the skill.

Right now I think you're overestimating just how consistent what you do is, and the results focus you're taking is likely creating other negative effects in the psyche that will have to be cleaned up later.  It will also mean that if you don't get to the issue in the first session, it will be harder and harder for your work to have an impact over time.

But in general the approach you're taking can and will create rapid results in some people that haven't seen results before.   

Comment by Matt Goldenberg (mr-hire) on What are examples of someone doing a lot of work to find the best of something? · 2024-09-15T00:59:12.495Z · LW · GW

I've really been enjoying Charlie Anderson's YouTube channel for this reason, trying to find the absolute best way to make pizza.

https://youtube.com/@charlieandersoncooking?si=uhpLcNDyE7jLbTMY

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-09-13T16:48:34.357Z · LW · GW

It seems like the obvious thing to do with a model like o1 trained on reasoning through problems would be to train it to write code that helps it solve reasoning problems.

Perhaps the idea was to not give it this crutch so it could learn those reasoning skills without the help of code.

But it seems from the examples that while it's great at high-level reasoning and figuring out where it went wrong, it still struggles with basic things like counting - which would be easily solved if it had the instinct to write code in the areas where it's likely to get tripped up.
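To illustrate the kind of crutch I mean (a hypothetical sketch, not anything o1 actually does): the model could emit and run a few lines of code whenever counting comes up, instead of reasoning about it token by token.

```python
# Hypothetical sketch: the sort of helper a reasoning model could write
# and execute for itself instead of counting "in its head".

def count_letter(word: str, letter: str) -> int:
    """Count case-insensitive occurrences of a single letter in a word."""
    return word.lower().count(letter.lower())

# The classic failure case for LLMs reasoning in tokens:
print(count_letter("strawberry", "r"))  # -> 3
```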

Comment by Matt Goldenberg (mr-hire) on How I got 4.2M YouTube views without making a single video · 2024-09-03T18:19:32.776Z · LW · GW

Sorta surprised that this got so many upvotes with the clickbaity title, which goes against norms around here.

Otherwise the content seems good.

Comment by Matt Goldenberg (mr-hire) on the Giga Press was a mistake · 2024-08-24T14:05:06.411Z · LW · GW

I'm not talking about 10-year time horizons, no.

Comment by Matt Goldenberg (mr-hire) on the Giga Press was a mistake · 2024-08-23T12:15:45.735Z · LW · GW

we know that's not what US executives were thinking because they don't think that long-term due to the incentives they face

The story of "they're doing something that's bad in the short term but good in the long term, but only accidentally; really they're trying to do something good in the short term and failing" seems suspicious.

I know that the CEOs I know do plan in the long term.

I also know that many of the world's most famous consumer brands (Apple, Amazon, Tesla) have valuations that only make sense because people trust the CEOs to prioritize the long term, and those future earnings are priced in.

And I also know that if you look at the spending budgets of many of the top consumer tech companies, and the amount spent on long-term R&D and moonshots, it sure looks like they are spending on the long term.

Comment by Matt Goldenberg (mr-hire) on the Giga Press was a mistake · 2024-08-22T16:20:45.147Z · LW · GW

I don't think that's the sophisticated argument for switching your in-house app to the cloud. There's a recognition that because it's more efficient for developers, more and more talent will learn to use cloud solutions, and more and more infrastructure will be built on top of them.

Which means your organization risks being bottlenecked on talent and infrastructure if you fall too far behind the adoption curve.

Comment by Matt Goldenberg (mr-hire) on the Giga Press was a mistake · 2024-08-21T10:53:24.851Z · LW · GW

When magazines talked about, say, "microservices" or "the cloud" being the future, it actually made them happen. There are enough executives that are gullible or just want to be given something to talk about and work on that it established an environment where everyone wanted to get "microservices" or whatever on their resume for future job requirements, and it was self-sustaining.

Is the claim here that cloud computing and microservice architectures are less efficient and a mistake?

Comment by Matt Goldenberg (mr-hire) on Most smart and skilled people are outside of the EA/rationalist community: an analysis · 2024-07-13T06:46:14.577Z · LW · GW

median rationalist at roughly MENSA level. This still feels wrong to me: if they’re so smart, where are the nobel laureates? The famous physicists? And why does arguing on Lesswrong make me feel like banging my head against the wall?

I think you'd have to consider both Scott Aaronson and Tyler Cowen to be rationalist adjacent, and both are considered intellectual heavyweights.

Dustin Moskovitz is EA adjacent, again considered a heavyweight, though applied to business rather than academia.

Then there's the second point, but unfortunately I haven't seen any evidence that someone being smart makes them pleasant to argue with (the contrary, in fact).

Comment by Matt Goldenberg (mr-hire) on Reliable Sources: The Story of David Gerard · 2024-07-11T21:09:36.116Z · LW · GW

The whole first part of the article is about how this is wrong, due to the gaming of notable sources.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-18T11:30:11.088Z · LW · GW

One way that I think about "forces beyond yourself" is as pointing to what it feels like to operate from a right-hemisphere dominant mode, as defined by Iain McGilchrist.

The language is deliberately designed to evoke that mode - so while I'll get more specific here, know that to experience the thing I'm talking about, you need to let go of the mind that wants this type of explanation.

When I'm talking about "Higher Forces" I'm talking about states of being that feel like something is moving through you - you're not a head controlling a body but rather you're first connecting to, then channeling, then becoming part of a larger universal force.

In my coaching work, I like to use Phil Stutz's idea of "Higher Forces" like Infinite Love, Forward Motion, Self-Expression, etc., as they're particularly suited for the modern Western Mind.

Here's how Stutz defines the higher force of Self-Expression on his website:

"The Higher Force You’re Invoking: Self-Expression The force of Self-Expression allows us to reveal ourselves in a truthful, genuine way—without caring about others' approval. It speaks through us with unusual clarity and authority, but it also expresses itself nonverbally, like when an athlete is "in the zone." In adults, this force gets buried in the Shadow. Inner Authority, by connecting you to the Shadow, enables you to resurrect the force and have it flow through you."

Of course, religions also have names for these types of special states, calling them Muses, Jhanas, or Direct Connection to God.

All of these states (while I can and do teach techniques, steps, and systems to invoke them) ultimately can only be accessed through surrender to the moment, faith in what's there, and letting go of a need for knowing.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-16T20:10:06.066Z · LW · GW

It's precisely when handing your life to forces beyond yourself (not Gods, that's just handing your life over to someone else) that you can avoid giving your life over to others/society.

"Souls" is metaphorical of course, not some essential unchanging part of yourself - just a thing that actually matters, that moves you.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-16T12:16:26.233Z · LW · GW

In the early 2000s, we all thought the next productivity system would save us. If we could just follow Tim Ferriss's system and achieve a four-hour workweek, or adopt David Allen's "Getting Things Done" (GTD) methodology, everything would be better. We believed the grind would end.

In retrospect, this was our generation's first attempt at addressing the growing sacredness deficit disorder that was, and still is, ravaging our souls. It was a good distraction for a time—a psyop that convinced us that with the perfect productivity system, we could design the perfect lifestyle and achieve perfection.

However, the edges started to fray when put into action. Location-independent digital nomads turned out to be just as lonely as everyone else. The hyper-productive GTD enthusiasts still burned out.

For me, this era truly ended when Merlin Mann, the author of popular GTD innovations like the "hipster PDA" and "inbox zero," failed to publish his book. He had all the tools in the world and knew all the systems. But when it mattered—when it came to building something from his soul that would stand the test of time—it didn't make a difference.

Merlin wrote a beautiful essay about this failure called "Cranking" (https://43folders.com/2011/04/22/cranking). He mused on the sterile, machine-like crank that would move his father's bed when he could no longer walk. He compared this to the sterile, machine-like systems he used to get himself to write, not knowing what he was writing or why, just turning the crank.

No amount of cranking could reconnect him to the sacred. No system or steps could ensure that the book he was writing would touch your soul, or his. So instead of sending his book draft to the editor, he sent the essay.

Reading that essay did something to me, and I think it marked a shift that many others who grew up in the "productivity systems" era experienced. It's a shift that many caught up in the current crop of "protocols" from the likes of Andrew Huberman and Bryan Johnson will go through in the next few years—a realization that the sacred can't be reached through a set of steps, systems, lists, or protocols.

At best, those systems can point towards something that must be surrendered to in mystery and faith. No amount of cranking will ever get you there, and no productivity system will save you. Only through complete devotion or complete surrender to forces beyond yourself will you find it.

Comment by Matt Goldenberg (mr-hire) on "Metastrategic Brainstorming", a core building-block skill · 2024-06-11T13:05:04.286Z · LW · GW

I just realized that this then brings up the problem of "oh, but what's the meta-meta-strategy I use?", but I think there's just an element of taste to this.

Comment by Matt Goldenberg (mr-hire) on "Metastrategic Brainstorming", a core building-block skill · 2024-06-11T12:36:26.432Z · LW · GW

One thing to note - brainstorming itself is a meta-strategy for generating meta-strategic approaches, and it may or may not be the best one at certain points in the problem.

Brainstorming for me has a particular flavor - it's helpful when I have a lot of ideas but don't know where to start, or when it feels like my mind just needs the starter cord pulled a few times.

Other times, I get a lot more out of taking a walk and letting my mind wander around the problem - not specifically listing out lanes of attack, but sort of holding the intention that one may show up as I walk and think in a free-associative way.

Other times it's helpful for me to have a conversation with a friend, especially one who I can see has the right mind-shape to frame this sort of problem.

Other times it's helpful to specifically look through the list of meta-strategies I have, wandering around my Roam and seeing how different mental models and frameworks can frame the problem.

I guess what I'm saying is that it's helpful to separate the move of "oh, it's time to figure out what meta-strategy I can use" from "oh, it's time to brainstorm."

Comment by Matt Goldenberg (mr-hire) on Good ways to monetarily profit from the increasing demand for power? · 2024-06-11T03:20:19.766Z · LW · GW

For those who disagreed, I'd love to be linked to convincing arguments to the contrary!

Comment by Matt Goldenberg (mr-hire) on Good ways to monetarily profit from the increasing demand for power? · 2024-06-10T17:24:44.634Z · LW · GW

I've heard several people who should know (Musk, Ascher) make detailed cases that seem right, and I haven't heard any convincing arguments to the contrary.

Comment by Matt Goldenberg (mr-hire) on The Data Wall is Important · 2024-06-10T09:42:11.218Z · LW · GW

But once they break the data-wall, competitors are presumably gonna copy their method.

Is the assumption here that corporate espionage is efficient enough in the AI space that inventing entirely novel methods of training doesn't give much of a competitive advantage?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T20:31:40.843Z · LW · GW

i don't think the constraint is that energy is too expensive? i think we just literally don't have enough of it concentrated in one place

but i have no idea actually

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T14:10:36.848Z · LW · GW

Zuck and Musk point to energy as a quickly approaching deep learning bottleneck over and above compute.

This to me seems like it could slow takeoff substantially and effectively create a wall for a long time.

Best arguments against this?

Comment by Matt Goldenberg (mr-hire) on Is there software to practice reading expressions? · 2024-04-23T22:38:16.290Z · LW · GW

Paul Ekman's software is decent. When I used it (before it was a SaaS, just a CD), it basically flashed an expression for a moment and then went back to a neutral pic. After some training it did help me identify microexpressions in people.
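For the curious, the core drill is simple enough to sketch (my rough reconstruction, not Ekman's actual code; the image files are placeholders):

```python
# Rough reconstruction of the flash-then-neutral drill (not Ekman's code;
# "neutral.png" and "anger.png" are placeholder images).
import tkinter as tk

FLASH_MS = 200  # roughly the duration of a microexpression

root = tk.Tk()
neutral = tk.PhotoImage(file="neutral.png")     # Tk 8.6+ reads PNG natively
expression = tk.PhotoImage(file="anger.png")

label = tk.Label(root, image=neutral)
label.pack()

def flash() -> None:
    # Show the expression briefly, then snap back to the neutral face.
    label.config(image=expression)
    root.after(FLASH_MS, lambda: label.config(image=neutral))

tk.Button(root, text="Flash", command=flash).pack()
root.mainloop()
```

After each flash you'd guess the expression and check your answer; the short exposure is what trains recognition of microexpressions rather than posed faces.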

Comment by Matt Goldenberg (mr-hire) on Mid-conditional love · 2024-04-23T22:08:47.333Z · LW · GW

People talk about unconditional love and conditional love. Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.

Yes, this is my experience of cultivating unconditional love - it loves everything, without target. It doesn't feel confused or strange, just like I am love, and my experience e.g. cultivating it in coaching is that people like being in the presence of such love.

It's also very helpful for people to experience conditional love! In particular of the type "I've looked at you, truly seen you, and loved you for that."

IME both of these loves feel pure and powerful from both sides, and neither of them is related to being attached, being pulled towards, or pushed away from people.

It feels like maybe we're using the word love very differently?

Comment by Matt Goldenberg (mr-hire) on Fabien's Shortform · 2024-04-12T01:53:19.223Z · LW · GW

Both causal.app and getguesstimate.com have pretty good Monte Carlo UIs.
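If you'd rather skip the UI, the core of what those tools do fits in a few lines; a minimal sketch (my own illustration, with made-up numbers):

```python
# Minimal Monte Carlo estimate: propagate uncertainty through a simple
# revenue model by sampling the uncertain inputs (numbers are made up).
import random

N = 100_000
revenues = []
for _ in range(N):
    users = random.normalvariate(10_000, 2_000)  # uncertain monthly traffic
    conversion = random.uniform(0.01, 0.03)      # uncertain conversion rate
    revenues.append(users * conversion * 50)     # $50 per sale

revenues.sort()
print(f"median: ${revenues[N // 2]:,.0f}")
print(f"90% interval: ${revenues[int(N * 0.05)]:,.0f} to ${revenues[int(N * 0.95)]:,.0f}")
```

What the UIs add on top is mostly the distribution pickers and the plots; the sampling loop itself is this simple.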

Comment by Matt Goldenberg (mr-hire) on Best in Class Life Improvement · 2024-04-04T17:40:20.186Z · LW · GW

IME there is a real effect where nicotine acts as a gateway drug to tobacco or vaping

In general, this whole post seems to make the mistake of saying "a common second-order effect of this thing is doing it in a way that will get you addicted - so don't do that," which is such an obvious failure mode that to call it a Chesterton's fence is generous.

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T20:09:57.352Z · LW · GW

The question is: how far can we get with in-context learning? If we filled Gemini's 10 million tokens with Sudoku rules and examples, showing where it went wrong each time, would it generalize? I'm not sure, but I think it's possible.
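Concretely, the experiment I'm imagining looks something like this (illustrative only; the puzzle data and correction text are hypothetical):

```python
# Illustrative sketch: pack a long context with the rules plus
# (puzzle, wrong attempt, correction) triples, then append a held-out
# puzzle and see whether the model generalizes.

RULES = ("Sudoku: fill the 9x9 grid so that every row, column, and 3x3 box "
         "contains the digits 1-9 exactly once.")

def build_prompt(examples: list[tuple[str, str, str]], target: str) -> str:
    """examples are (puzzle, wrong_attempt, correction) triples."""
    parts = [RULES]
    for puzzle, attempt, correction in examples:
        parts.append(
            f"Puzzle:\n{puzzle}\n"
            f"Attempt:\n{attempt}\n"
            f"Where it went wrong:\n{correction}"
        )
    parts.append(f"Now solve this puzzle:\n{target}")
    return "\n\n".join(parts)

# With a ~10M-token window you could fit thousands of such triples
# before the held-out puzzle.
```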

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T16:27:13.273Z · LW · GW

It seems likely to me that you could create a prompt that would have a transformer do this.

Comment by Matt Goldenberg (mr-hire) on Daniel Kokotajlo's Shortform · 2024-03-26T15:25:15.289Z · LW · GW

i like coase's work on transaction costs as an explanation here

coase is an unusually clear thinker and writer, and i recommend reading through some of his papers

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:44:07.877Z · LW · GW

i just don't see the buddha making any reference to nervous systems or mammalians when he talks about suffering (not even some sort of pali equivalent that points to the materialist understanding at the time)

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:00:15.259Z · LW · GW

TBC I think the claims about suffering in Buddhism are claims about how our mammalian nervous systems happen to be wired and ways you can improve it.

This seems like quite a modern Western take on Buddhism.

It feels hard to read the original Buddha this way.

Comment by Matt Goldenberg (mr-hire) on General Thoughts on Secular Solstice · 2024-03-25T03:17:05.742Z · LW · GW

Compare "the world will be exactly as it has been in the past" with "the world will always be exactly as it is in this moment."

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T15:54:08.533Z · LW · GW

it's true, but I don't think there's anything fundamental preventing the same sort of proliferation and advances in open-source LLMs that we've seen in Stable Diffusion (aside from the fact that LLMs aren't as useful for porn). That it has been relatively tame so far doesn't change the basic pattern of how open source affects the growth of technology.

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T10:18:53.965Z · LW · GW

yeah, it's much less likely now

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-23T15:02:06.258Z · LW · GW

it doesn't seem like that's the case to me - but even if it were the case, isn't that moving the goal posts of the original post?

I don't think time-to-AGI got shortened at all.