Posts

Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-18T05:41:53.417Z · LW · GW

I think most people have short-term, medium-term, and long-term goals. E.g., right about now many people probably have the goal of doing their taxes, and depending on their situation those may match many of your desiderata.

I used to put a lot of effort into creating exercises, simulations, and scenarios that matched up with various skills I was teaching, but ultimately found it much more effective to just say "look at your to-do list, and find something that causes overwhelm". Deliberate practice consists of finding a thing that causes overwhelm, seeing how to overcome that overwhelm, working for two minutes, then finding another task that induces overwhelm. I also use past examples, imagining in detail what it would have been like to act in this different way.

You're operating in a slightly different domain, but still I imagine people have plenty of problems and subproblems in either their life or research where the things you're teaching apply, and you can scope them small enough to get tighter feedback loops.

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-17T21:29:29.515Z · LW · GW

Why not just have people spend some time working with their existing goals?

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-13T06:34:08.026Z · LW · GW

I usually explain my process these days to clients with the acronym LIFE:

Learn New Tools
Integrate Resistance
Forge an Identity
Express Yourself

Learn New Tools is cognitive-emotional strategies, of which TYCS is an example. FWIW, some of TYCS is actually deliberate practice to discover cognitive strategies (as compared to something like CFAR, which extracts and teaches them directly), but the result is the same.

The important thing is to just have a clear tool: give people something they know they can use in certain situations, and that works immediately to solve their problems.

But the thing is, people don't use them, because they have resistance. That's where parts work and other resistance integration tools come into play.

Even when that's done, there's still the issue that you don't automatically use the techniques. This is where Forge an Identity comes in: you use identity-change techniques to bring the way you see yourself into alignment with the way of being that the technique brings out. (This is one thing TYCS gets wrong, in my opinion: trying to directly reinforce the cognitive strategies instead of creating an identity and reinforcing the strategies as affirming that identity.)

Finally, that identity needs to propagate to every area of your life, so there are no situations where you fail to use the technique and way of being. This is just a process of looking at each area, seeing where it's not in alignment with the identity, then deliberately taking an action to bring the identity to that area.

IME all of these pieces are needed to make a life change from a technique, although it's rarely as linear as I describe it.

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-12T00:54:16.738Z · LW · GW

The way I do this with my clients is that we train cognitive tools first, then find the resistance to those habits and work on it using parts work.

Comment by Matt Goldenberg (mr-hire) on leogao's Shortform · 2024-03-09T16:06:12.963Z · LW · GW

Can you give examples?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-29T16:44:08.476Z · LW · GW

I can hover over quick takes to get the full comment, but not popular comments.

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T03:39:03.460Z · LW · GW

Why not show the top-rated review, like you do at the top of the page?

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T22:10:46.211Z · LW · GW

The art change is pretty distracting, and having to hover to see the author is also a bummer, plus there's no way to get a summary (that I can see).

It's seemingly optimized for a "judge a book by its cover" type of thing, where I click around until I see a title and image I like.

Comment by Matt Goldenberg (mr-hire) on How I internalized my achievements to better deal with negative feelings · 2024-02-27T23:21:08.338Z · LW · GW

Appreciated this writeup.

How long have you been using the tool, and do you find any resistance to using it?

Do you always assume the underlying issue, or do you do Focusing each time? Or do you find a contradictory experience through intuition without knowing why it works?

Comment by Matt Goldenberg (mr-hire) on How I build and run behavioral interviews · 2024-02-26T16:26:47.026Z · LW · GW

I haven't looked into this recently, but the last time I looked at the literature, behavioral interviews were far more predictive of job performance than other interviewing methods.

It's possible that they've become less predictive as people started preparing for them more.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T22:48:54.277Z · LW · GW

Thanks. Appreciate this. I'm going to give another shot at writing this.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:37:27.518Z · LW · GW

Request for feedback: Do I sound like a raving lunatic above?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:36:28.194Z · LW · GW

A surprising thing I've found as I begin to study and integrate skillful coercive motivation is the centrality of belief in providence and faith to this way of motivating yourself. Here are some central examples: the first from The War of Art, the second from The Tools, the third from David Goggins. These aren't cherry-picked (this is a whole section of The War of Art and a whole chapter of The Tools).

[Images: excerpts from The War of Art, The Tools, and David Goggins]

This has interesting implications given that as a society (at least in America) we've historically been motivated by this type of masculine, Apollonian motivation, but have increasingly let go of faith in higher powers as a tenet of our central religion, secular humanism. This means the core motivation that drives us to build, create, transcend our nature... is running on fumes. We are motivated by gratitude without a sense of to what or whom we should be grateful, and told to follow our calling without a sense of who is calling.

We've tried to hide this contradiction. Our seminaries separate our twin religions (Secular Humanism and Scientific Materialism) into STEM and humanities tracks, to hide that what motivates the humanities to create is invalidated by the philosophy that allows STEM to discover. But this is crumbling: the cold philosophy of scientific materialism is eroding the shaky foundations that allow secular humanists to connect to these higher forces - this is one of the drivers of the meaning crisis.

I don't really see any way we can make it through the challenges we're facing with these powerful new technologies without a new religion that connects us to the mystical, truly wise core that allows us to be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism, and what the Monastic Academy is trying to do with a new, mystical form of dataism - but both of these projects are moonshots to massively change the direction of culture.

Comment by Matt Goldenberg (mr-hire) on The Gemini Incident · 2024-02-23T19:11:49.965Z · LW · GW

Or invisible?

Comment by Matt Goldenberg (mr-hire) on I played the AI box game as the Gatekeeper — and lost · 2024-02-12T22:37:17.539Z · LW · GW

The original reasoning that Eliezer gave, if I remember correctly, was that it's better to make people realize there are unknown unknowns, rather than taking one specific strategy and saying "oh, I know how I would have stopped that particular strategy."

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-12T00:15:22.831Z · LW · GW

Some quick calculations from ChatGPT put the value of a British government bond (Britain being the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead.

I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
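
A minimal sketch of the kind of comparison being described; every number below is an illustrative assumption, not a figure from the ChatGPT calculation:

```python
# Rough nominal comparison: a bond compounding at a fixed rate (coupons
# reinvested) vs. the change in the gold price over the same period.
# All inputs are illustrative assumptions, not checked historical data.
years = 200
bond_rate = 0.03                # assumed fixed 3% interest, reinvested
bond_multiple = (1 + bond_rate) ** years

gold_price_then = 20.0          # assumed USD per ounce ~200 years ago
gold_price_now = 2000.0         # assumed current USD per ounce
gold_multiple = gold_price_now / gold_price_then

print(f"Bond multiple over {years} years: ~{bond_multiple:,.0f}x")
print(f"Gold multiple over {years} years: ~{gold_multiple:,.0f}x")
# The result is very sensitive to the assumed prices, to whether coupons are
# reinvested, and to whether you adjust for inflation - which is exactly the
# step flagged above as suspect in the ChatGPT version.
```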

 

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2024-02-10T13:50:51.146Z · LW · GW

This is not what I mean by wisdom.

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-02T17:01:28.224Z · LW · GW

Compared to what? My guess is it's a better bet than most currencies during that time, aside from a few winners that would have been hard to predict ahead of time. E.g., if 200 years ago you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd be much worse off than if you'd held gold.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-12T02:31:33.062Z · LW · GW

I think oftentimes what's needed to let go of grief is to stop pushing it away. In doing that, it may be felt more fully, which, once the message is received, can allow you to let it go. This process may involve fully feeling pain that you were suppressing.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-11T20:25:00.017Z · LW · GW

It doesn't hurt the way pity or lamenting might; there's no grief in it, just well-wishing.

While true, I think there's a caveat that often the thing preventing the feeling of true love from coming forth can be unprocessed grief that needs to be felt, or unprocessed pain that needs to be forgiven.

I think there's a danger in saying "if love feels painful, you're doing this wrong", as often that's exactly the developmentally correct thing to be experiencing in order to get to the love underneath.

Comment by Matt Goldenberg (mr-hire) on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-10T12:01:49.243Z · LW · GW

I couldn't pass an ITT for advocates of Islam or extrasensory perception. On the one hand, this does represent a distinct deficit in my ability to model what the advocates of these ideas are thinking, a tragic gap in my comprehension of reality, which I would hope to remedy in the Glorious Transhumanist Future if that were a real thing. On the other hand, facing the constraints of our world, my inability to pass an ITT for Islam or ESP seems ... basically fine? I already have strong reasons to doubt the existence of ontologically fundamental mental entities. I accept my ignorance of the reasons someone might postulate otherwise, not out of contempt, but because I just don't have the time.

I think there's a hidden or assumed goal here that I don't understand. The goal clearly isn't truth for its own sake, because then there wouldn't be a distinction between the truth of what they believe and the truth of what's real. You can of course make a distinction such as simulacra levels, but ultimately it's all part of the territory.

If the goal is instrumental ability to impact the world, I think a good portion of the time it's as important to understand people's beliefs as the reality, because your impact will often be based not just on knowing the truth, but on convincing others to change their actions or beliefs.

So what actually is the goal you are after?

Comment by Matt Goldenberg (mr-hire) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T17:44:25.811Z · LW · GW

I think this post has decent financial advice if you believe in near-term GAI.

 

https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2023-12-31T05:06:17.138Z · LW · GW

I dunno, I still think Bitcoin is actually a good store of value and hedge against problems in fiat currency. Probably as good a bet as gold at this point.

Comment by Matt Goldenberg (mr-hire) on Stupid Questions - April 2023 · 2023-12-27T16:39:24.538Z · LW · GW

I think one of the things rationalists try to do is take the numbers seriously from a consequentialist/utilitarian perspective. This means that even if there's a small chance of doom, you should put vast resources towards preventing it since the expected loss is high.

I think this makes people assume that the community's probability estimates of doom are much higher than they actually are, because the expected value of preventing doom is so high.
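
A minimal sketch of the expected-value reasoning being described; the probability and loss figures are purely illustrative assumptions, not anyone's actual estimates:

```python
# Illustrative expected-value calculation: even a modest probability of doom
# can justify large spending on prevention when the potential loss is enormous.
p_doom = 0.05                    # assumed (illustrative) probability of doom
loss_if_doom = 8_000_000_000     # assumed loss, e.g. lives at stake

expected_loss = p_doom * loss_if_doom
print(f"Expected loss: {expected_loss:,.0f}")   # 400,000,000

# Even at a 5% probability estimate, the expected loss is large enough that
# vast prevention spending looks justified - the dynamic described above.
```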

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T17:24:40.633Z · LW · GW

I'm curious about the downvotes to these comments. Do people think they weren't adding to the discussion?

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T16:24:26.643Z · LW · GW

I don't think I would describe Hattie's thinking as shoddy; he's one of the more careful thinkers in educational theory, both doing a careful review of the literature and then testing his insights through his work implementing them in schools, while being careful with the results. Of course, when you want your material implemented, there are tradeoffs you make between implementability and depth. But your assessment seems premature based on looking at my one comment.

It's true that you're doing transfer with the previous skills that have already gone through the process while doing surface with new skills that are going through it, but I don't think that means transfer comes first. It comes last for those previous skills. If you want more specifics you can of course read Hattie.

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T03:45:05.999Z · LW · GW

In general I've found it practically very useful for my own learning, which is reason enough for me. 

The model itself fell out of Hattie's work trying to pull together all of the important meta-studies in education and understand the data - he found that by applying these three stages, he could better understand why certain interventions were effective in some cases and not others.

I would be very surprised if there are clearly delineated boundaries between the three, as such an abstraction rarely corresponds to reality so cleanly. And yet I still find it an incredibly useful model.

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-24T22:08:10.318Z · LW · GW

The Hattie model of learning posits surface -> deep -> transfer learning as a general rule for how learning progresses. I suspect that flashcards are excellent for surface learning, and the integration you're talking about is transfer learning.  It's possible that you could try to skip straight to transfer learning, but I suspect it would actually take longer, as you'd be using transfer learning methods to get the surface and deep learning done.

Comment by Matt Goldenberg (mr-hire) on AI Girlfriends Won't Matter Much · 2023-12-24T16:57:54.042Z · LW · GW

I think the main effect will be AI boyfriends, which aren't already saturated.

Comment by Matt Goldenberg (mr-hire) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T06:41:33.671Z · LW · GW

Well, I don't actually know what "crybullying" or "sociosexuality" mean, but I definitely know that male sociopaths make use of reputation destruction.

Comment by Matt Goldenberg (mr-hire) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T21:46:52.028Z · LW · GW

This felt unnecessarily gendered to me.  There are obviously masculine manipulative sociopaths.

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2023-12-17T18:04:43.675Z · LW · GW

You're not going to just be able to stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won't show up in your nuanced calculations of costs and benefits.

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2023-12-16T16:34:29.118Z · LW · GW

I don't think anyone is saying this outright so I suppose I will - pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second-order effects of pushing that particular frontier (both the capabilities and the Overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.

Comment by Matt Goldenberg (mr-hire) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-08T17:10:33.234Z · LW · GW

Nothing wrong with it, in fact I recommend it. But seeing oneself as a hero and persuading others of it will indeed be one of the main issues leading to hero worship.

Comment by Matt Goldenberg (mr-hire) on My Effortless Weightloss Story: A Quick Runthrough · 2023-12-08T02:03:33.822Z · LW · GW

Potatoes are relatively low calorie density and high satiety.

 

Potatoes aren't just satiating, they're weirdly satiating.

 

You can of course say that satiety explains the weight loss, but then you have to ask... what explains the satiety?

Comment by Matt Goldenberg (mr-hire) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T14:58:30.993Z · LW · GW

I like LW, and think that it does a certain subset of things better than anywhere else on the internet.

In particular, in terms of "sane takes on what's going on", I can usually find them somewhere in the highly upvoted posts or comments.

I think in general my issue with LW is that it just reflects the pitfalls of the rationalist worldview. The prevailing view conflates intelligence with wisdom, and therefore fails to grasp what is sacred on a moment-to-moment level in a way that allows skillful action.

I think the fallout from SBF, the fact that rationalists and EAs keep building AI capabilities organizations, rationality-adjacent cults centered around obviously immoral worldviews, etc., are all predictable consequences of doing a thing where you try to intelligence hard enough that wisdom comes out.

I don't really expect this to change, and I expect LW to continue to be a place that has the sanest takes on what's going on and then makes incredible mistakes when trying to address that situation. And that previous sentence basically sums up how I feel about LW these days.

Comment by Matt Goldenberg (mr-hire) on Google Gemini Announced · 2023-12-07T03:53:32.338Z · LW · GW

I think the video is mostly faked, in the sense that it's a sequence of things Gemini can kind of sort of do. In the blog post they do it with few-shot prompting and 3 screenshots, and say Gemini sometimes gets it wrong:

https://developers.googleblog.com/2023/12/how-its-made-gemini-multimodal-prompting.html?m=1

Comment by Matt Goldenberg (mr-hire) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T15:19:41.252Z · LW · GW

I think one of the issues with Eliezer is that he sees himself as a hero, and it comes through both explicitly and in the vibes of his writing, and Eliezer is also a persuasive writer.

Comment by Matt Goldenberg (mr-hire) on How to Control an LLM's Behavior (why my P(DOOM) went down) · 2023-11-28T21:36:23.735Z · LW · GW

I think that it's risky to have a simple Waluigi switch that can be turned on at inference time. Not sure how risky.

Comment by Matt Goldenberg (mr-hire) on How to Control an LLM's Behavior (why my P(DOOM) went down) · 2023-11-28T21:28:17.777Z · LW · GW

The <good> <bad> thing is really cool, although it leaves open the possibility of a bug (or leaked weights) causing the creation of a maximally misaligned AGI.

Comment by Matt Goldenberg (mr-hire) on Neither EA nor e/acc is what we need to build the future · 2023-11-28T16:52:00.462Z · LW · GW

Even Jaan Tallinn is “now questioning the merits of running companies based on the philosophy.”

The actual quote from Tallinn is:

The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes... So the world should not rely on such governance working as intended.

which to me is a different claim from questioning the merits of running companies based on the EA philosophy - it's questioning an implementation of that philosophy via voluntarily limiting the company from being too profit-motivated at the expense of other EA concerns.

Comment by Matt Goldenberg (mr-hire) on Apocalypse insurance, and the hardline libertarian take on AI risk · 2023-11-28T12:55:25.420Z · LW · GW

"responsibility they have for the future of humanity"

 

As I read it, it was only meant to capture the possibility of killing currently living individuals. If they had to also account for 'killing' potential future lives, it would make an already unworkable proposal even MORE unworkable.

Comment by Matt Goldenberg (mr-hire) on Spaced repetition for teaching two-year olds how to read (Interview) · 2023-11-28T01:54:44.855Z · LW · GW

Did you think they were going too easy on their children or too hard? Or some orthogonal values mismatch?

Comment by Matt Goldenberg (mr-hire) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-10T16:13:28.706Z · LW · GW

I, being under the age of 30, have a ~80% chance of making it to LEV in my lifespan, with an approximately 5% drop for every additional decade older you are at the present.

 

You, being a relatively wealthy person in a modernized country? Do you think you'll be able to afford LEV by that time, or only that some of the wealthiest people will?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2023-11-10T16:10:43.954Z · LW · GW

My sense is that most people who haven't done one in the last 6 months or so would benefit from at least a week-long silent retreat without phone, computer, or books.

Comment by Matt Goldenberg (mr-hire) on Vote on Interesting Disagreements · 2023-11-10T15:07:04.534Z · LW · GW

I don't have any special knowledge, but my guess is their code is like a spaghetti tower (https://www.lesswrong.com/posts/NQgWL7tvAPgN2LTLn/spaghetti-towers) because they've prioritized pushing out new features over refactoring and making a solid codebase.

Comment by Matt Goldenberg (mr-hire) on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-03T17:48:11.625Z · LW · GW

I have ~70% confidence that in the absence of superhuman AGI or other x-risks in the near term, we have a shot at getting to longevity escape velocity in 20 years. 

 

Is the claim here a 70% chance of longevity escape velocity by 2043? It's a bit hard to parse.

If that is indeed the claim, I find it very surprising, and I'm curious what evidence you're using to make it. Also, is that LEV for, like, a billionaire, a middle-class person in a developed nation, or everyone?

Comment by Matt Goldenberg (mr-hire) on Thoughts on open source AI · 2023-11-03T16:39:43.403Z · LW · GW
  • Note that if camelidAI is very capable, some of these preventative measures might be very ambitious, e.g. “make society robust to engineered pandemics.” The source of hope here is that we have access to a highly capable and well-behaved GPT-SoTA. 


I think there are many harms that are asymmetric in terms of creating them vs. preventing them. For instance, I suspect it's a lot easier to create a bot that people will fall in love with than to create a technology that prevents people from falling in love with bots (maybe you could create, like, a psychology bot that helps people once they're hopelessly addicted, but that's already asymmetric).

There are of course things that are asymmetric in the other direction (maybe by the time you can create a bot that reliably exploits and hacks software, you can create a bot that rewrites that same software to be formally verified), but all it takes is a few asymmetries that favor harm to make this plan infeasible, and I suspect that the closer we get to general intelligence, the more of these we get (simply because of the breadth of activities it can be used for).

Comment by Matt Goldenberg (mr-hire) on Book Review: Going Infinite · 2023-10-31T20:40:17.626Z · LW · GW

I think virtue ethics is a practical solution, but if you just say "if corner cases show up, don't follow it", that means you're doing something other than being a virtue ethicist.

Comment by Matt Goldenberg (mr-hire) on Book Review: Going Infinite · 2023-10-25T20:06:36.575Z · LW · GW

The elegance of this argument and arguments like it is the reason people like utilitarianism, myself included.

 

Excessive bullet-biting in the pursuit of elegance is a road to moral ruin. Human value is complex. To be a consistent agent in deontology, virtue ethics, or utilitarianism, you necessarily have to (at minimum) toss out the other two. But morally, we actually DO value aspects of all three - we really DO think it's bad to murder someone apart from the consequences of doing so, and it feels like adding epicycles to justify that moral intuition with other reasons when there is indeed a deontological core to some of our moral intuitions. Of course, there's also a core of utilitarianism and virtue ethics that would suggest not murdering - but throwing out things you actually value in your moral intuitions in the name of elegance is bad, actually.