Posts

Mechanism for feature learning in neural networks and backpropagation-free machine learning models 2024-03-19T14:55:59.296Z
Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value - Millionaire's Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T20:31:40.843Z · LW · GW

i don't think the constraint is that energy is too expensive? i think we just literally don't have enough of it concentrated in one place

but i have no idea actually

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T14:10:36.848Z · LW · GW

Zuck and Musk point to energy as a quickly approaching deep learning bottleneck over and above compute.

This to me seems like it could slow takeoff substantially and effectively create a wall for a long time.

Best arguments against this?

Comment by Matt Goldenberg (mr-hire) on Is there software to practice reading expressions? · 2024-04-23T22:38:16.290Z · LW · GW

Paul Ekman's software is decent. When I used it (before it was a SaaS, just a CD) it basically flashed an expression for a moment, then went back to a neutral pic. After some training it did help with identifying microexpressions in people.

Comment by Matt Goldenberg (mr-hire) on Mid-conditional love · 2024-04-23T22:08:47.333Z · LW · GW

People talk about unconditional love and conditional love. Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.

Yes, this is my experience of cultivating unconditional love: it loves everything, without target. It doesn't feel confused or strange, just like I am love, and my experience, e.g. cultivating it in coaching, is that people like being in the presence of such love.

It's also very helpful for people to experience conditional love! In particular of the type "I've looked at you, truly seen you, and loved you for that."

IME both of these loves feel pure and powerful from both sides, and neither of them is related to being attached, to being pulled towards or pushed away from people.

It feels like maybe we're using the word love very differently?

Comment by Matt Goldenberg (mr-hire) on Fabien's Shortform · 2024-04-12T01:53:19.223Z · LW · GW

Both causal.app and getguesstimate.com have pretty good Monte Carlo UIs.
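For a sense of the kind of model these UIs make point-and-click, here's a minimal sketch in Python (all quantities made up for illustration):

```python
import random

# Hypothetical example: profit = revenue - costs, where both inputs
# are uncertain ranges rather than point estimates.
def profit_sample():
    revenue = random.uniform(80_000, 120_000)     # assumed range
    costs = random.normalvariate(70_000, 10_000)  # assumed distribution
    return revenue - costs

samples = sorted(profit_sample() for _ in range(100_000))
print(f"median profit: {samples[50_000]:,.0f}")
print(f"90% interval: {samples[5_000]:,.0f} to {samples[95_000]:,.0f}")
```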

Comment by Matt Goldenberg (mr-hire) on Best in Class Life Improvement · 2024-04-04T17:40:20.186Z · LW · GW

IME there is a real effect where nicotine acts as a gateway drug to tobacco or vaping

In general, this whole post seems to make the mistake of saying "a common second-order effect of this thing is doing it in a way that will get you addicted, so don't do that", which is such an obvious failure mode that calling it a Chesterton's fence is generous.

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T20:09:57.352Z · LW · GW

The question is: how far can we get with in-context learning? If we filled Gemini's 10 million tokens with Sudoku rules and examples, showing where it went wrong each time, would it generalize? I'm not sure, but I think it's possible.
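To make the experiment concrete, the prompt-building loop I'm imagining looks something like this (a sketch; `generate` is a stand-in for whatever long-context API you'd call):

```python
RULES = (
    "Sudoku: fill the 9x9 grid so that every row, column, and 3x3 box "
    "contains each digit 1-9 exactly once."
)

def build_prompt(solved, corrected, new_puzzle):
    """Pack rules, worked examples, and corrected failures into one context."""
    parts = [RULES]
    for puzzle, solution in solved:
        parts.append(f"Puzzle:\n{puzzle}\nSolution:\n{solution}")
    for puzzle, attempt, mistake in corrected:
        # Show the model each wrong attempt and where it went wrong.
        parts.append(f"Puzzle:\n{puzzle}\nAttempt:\n{attempt}\nMistake:\n{mistake}")
    # Generalization test: a held-out puzzle the context never solved.
    parts.append(f"Puzzle:\n{new_puzzle}\nSolution:")
    return "\n\n".join(parts)

# response = generate(build_prompt(solved, corrected, held_out_puzzle))
```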

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T16:27:13.273Z · LW · GW

It seems likely to me that you could create a prompt that would have a transformer do this.

Comment by Matt Goldenberg (mr-hire) on Daniel Kokotajlo's Shortform · 2024-03-26T15:25:15.289Z · LW · GW

I like Coase's work on transaction costs as an explanation here.

Coase is an unusually clear thinker and writer, and I recommend reading through some of his papers.

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:44:07.877Z · LW · GW

I just don't see the Buddha making any reference to nervous systems or mammals when he talks about suffering (not even some sort of Pali equivalent that points to the materialist understanding at the time).

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:00:15.259Z · LW · GW

TBC I think the claims about suffering in Buddhism are claims about how our mammalian nervous systems happen to be wired and ways you can improve it.

This seems like quite a modern Western take on Buddhism.

It feels hard to read the original Buddha this way.

Comment by Matt Goldenberg (mr-hire) on General Thoughts on Secular Solstice · 2024-03-25T03:17:05.742Z · LW · GW

Compare "the world will be exactly as it has been in the past" with "the world will always be exactly as it is in this moment".

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T15:54:08.533Z · LW · GW

It's true, but I don't think there's anything fundamental preventing the same sort of proliferation and advances in open-source LLMs that we've seen in Stable Diffusion (aside from the fact that LLMs aren't as useful for porn). That it has been relatively tame so far doesn't change the basic pattern of how open source affects the growth of technology.

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T10:18:53.965Z · LW · GW

yeah, it's much less likely now

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-23T15:02:06.258Z · LW · GW

It doesn't seem like that's the case to me, but even if it were, isn't that moving the goalposts of the original post?

I don't think time-to-AGI got shortened at all.

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-23T14:26:10.885Z · LW · GW

The classic effect of open sourcing is to hasten the commoditization and standardization of the component, which then allows an explosion of innovation on top of that stable base.

If you look at what's happened with Stable Diffusion, this is exactly what we see. While it's never been a cutting-edge model (until soon, with SD3), there's been an explosion of capabilities advances in image model generation from it: ControlNet, best practices for LoRA training, model merging, techniques for consistent characters and animation, all coming out of the open-source community.

In LLM land, though not as drastic, we see similar things happening, in particular techniques for merging models to get rapid capability advances, and rapid creation of new patterns for agent interactions and tool use.

So while the models themselves might not be state of the art, open sourcing the models obviously pushes the state of the art.

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-18T05:41:53.417Z · LW · GW

I think most people have short term, medium term, and long term goals. E.g., right about now many people probably have the goal of doing their taxes, and depending on their situation those may match many of your desiderata.

I used to put a lot of effort into creating exercises, simulations, and scenarios that matched up with various skills I was teaching, but ultimately found it much more effective to just say "look at your todo list, and find something that causes overwhelm". Deliberate practice consists of finding a thing that causes overwhelm, seeing how to overcome that overwhelm, working for two minutes, then finding another task that induces overwhelm. I also use past examples, imagining in detail what it would have been like to act in this different way.

You're operating in a slightly different domain, but I still imagine people have plenty of problems and subproblems in either their life or research where the things you're teaching apply, and you can scope them small enough to get tighter feedback loops.

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-17T21:29:29.515Z · LW · GW

Why not just have people spend some time working with their existing goals?

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-13T06:34:08.026Z · LW · GW

I usually explain my process these days to clients with the acronym LIFE

Learn New Tools
Integrate Resistance
Forge an Identity
Express Yourself

Learn New Tools is cognitive-emotional strategies, of which TYCS is an example. Fwiw, some of TYCS is actually deliberate practice to discover cognitive strategies (as compared to something like CFAR, which extracts and teaches them directly), but the result is the same.

The important thing is to just have a clear tool, give people something they know they can use in certain situations, that works immediately to solve their problems.

But the thing is, people don't use them, because they have resistance. That's where parts work and other resistance integration tools come into play.

Even when that's done, there's still the issue that you don't automatically use the techniques. This is where Forge an Identity comes in: you use identity-change techniques to bring the way you see yourself into alignment with the way of being that the technique brings out. (This is one thing TYCS gets wrong, in my opinion: trying to directly reinforce the cognitive strategies instead of creating an identity and reinforcing the strategies as affirming that identity.)

Finally, that identity needs to propagate to every area of your life, so there are no situations where you fail to use the technique and way of being. This is just a process of looking at each area, seeing where it's not in alignment with the identity, then deliberately taking an action to bring it to that area.

IME all of these pieces are needed to make a life change from a technique, although it's rarely as linear as I describe it.

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-12T00:54:16.738Z · LW · GW

The way I do this with my clients is that we train cognitive tools first, then find the resistance to those habits and work on it using parts work

Comment by Matt Goldenberg (mr-hire) on leogao's Shortform · 2024-03-09T16:06:12.963Z · LW · GW

can you give examples?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-29T16:44:08.476Z · LW · GW

I can hover over quick takes to get the full comment, but not popular comments.

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T03:39:03.460Z · LW · GW

Why not show the top-rated review, like you do at the top of the page?

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T22:10:46.211Z · LW · GW

The art change is pretty distracting, and having to hover to see the author is also a bummer, plus no way to get a summary (that I can see).

It's seemingly optimized for a "judge a book by its cover" type of thing, where I click around until I see a title and image I like.

Comment by Matt Goldenberg (mr-hire) on How I internalized my achievements to better deal with negative feelings · 2024-02-27T23:21:08.338Z · LW · GW

Appreciated this writeup.

How long have you been using the tool, and do you find any resistance to using it?

Do you always assume the underlying issue, or do you do focusing each time? Or do you find a contradictory experience through intuition without knowing why it works?

Comment by Matt Goldenberg (mr-hire) on How I build and run behavioral interviews · 2024-02-26T16:26:47.026Z · LW · GW

I haven't looked into this recently, but last time I looked at the literature behavioral interviews were far more predictive of job performance than other interviewing methods.

It's possible that they've become less predictive as people started preparing for them more.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T22:48:54.277Z · LW · GW

Thanks, appreciate this. I'm going to give another shot at writing this.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:37:27.518Z · LW · GW

Request for feedback: Do I sound like a raving lunatic above?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:36:28.194Z · LW · GW

Surprising thing I've found as I begin to study and integrate skillful coercive motivation: the centrality of belief in providence and faith to this way of motivating yourself. Here are some central examples: the first from War of Art, the second from The Tools, the third from David Goggins. These aren't cherry-picked (this is a whole section of War of Art and a whole chapter of The Tools).

[Three images: excerpts on providence and faith from War of Art, The Tools, and David Goggins]

This has interesting implications, given that as a society (at least in America) we've historically been motivated by this type of masculine, apollonian motivation, but have increasingly let go of faith in higher powers as a tenet of our central religion, secular humanism. This means the core motivation that drives us to build, create, transcend our nature... is running on fumes. We are motivated by gratitude, w/o a sense of to what or whom we should be grateful, told to follow our calling w/o a sense of who is calling.

We've tried to hide this contradiction. Our seminaries separate our twin Religion (Secular Humanism and Scientific Materialism) into STEM and humanities tracks to hide that what motivates The Humanities to create is invalidated by the philosophy that allows STEM to discover. But this is crumbling; the cold philosophy of scientific materialism is eroding the shaky foundations that allow secular humanists to connect to these higher forces. This is one of the drivers of the meaning crisis.

I don't really see any way we can make it through the challenges we're facing with these powerful new technologies w/o a new religion that connects us to the mystical, truly wise core that allows us to be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism, and what the Monastic Academy is trying to do with a new, mystical form of dataism, but both of these projects are moonshots to massively change the direction of culture.

Comment by Matt Goldenberg (mr-hire) on The Gemini Incident · 2024-02-23T19:11:49.965Z · LW · GW

Or invisible?

Comment by Matt Goldenberg (mr-hire) on I played the AI box game as the Gatekeeper — and lost · 2024-02-12T22:37:17.539Z · LW · GW

The original reasoning that Eliezer gave if I remember correctly was that it's better to make people realize there are unknown unknowns, rather than taking one specific strategy and saying "oh, I know how I would have stopped that particular strategy"

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-12T00:15:22.831Z · LW · GW

Some quick calculations from ChatGPT put the value of a British government bond (Britain being the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead.

I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
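For anyone who wants to redo the math, the skeleton of the calculation is simple; the numbers below are rough placeholders, not ChatGPT's figures, and the inflation/exchange-rate adjustment is exactly the step that's easy to get wrong:

```python
years = 200

# Bond: assume a fixed 3% coupon, reinvested (placeholder assumption).
bond_growth = 1.03 ** years          # nominal growth factor, ~369x

# Gold: nominal growth is just ending price over starting price.
gold_start, gold_end = 19.4, 2000.0  # USD/oz, rough historical figures
gold_growth = gold_end / gold_start  # ~103x

# A real comparison requires deflating both by ~200 years of inflation
# (and, for the bond, GBP/USD exchange-rate history) -- the step that
# needs the most care.
print(f"bond {bond_growth:.0f}x vs gold {gold_growth:.0f}x (nominal)")
```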

 

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2024-02-10T13:50:51.146Z · LW · GW

This is not what I mean by wisdom.

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-02T17:01:28.224Z · LW · GW

Compared to what?  My guess is it's a better bet than most currencies during that time, aside from a few winners that would have been hard to predict ahead of time. E.g., if 200 years ago you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd be much worse off than with gold.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-12T02:31:33.062Z · LW · GW

I think oftentimes what's needed to let go of grief is to stop pushing it away. In doing that, it may be felt more fully, and once the message is received, you can let it go. This process may involve fully feeling pain that you were suppressing.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-11T20:25:00.017Z · LW · GW

It doesn't hurt the way pity or lamenting might; there's no grief in it, just well-wishing.

While true, I think there's a caveat that often the thing preventing the feeling of true love from coming forth can be unprocessed grief that needs to be felt, or unprocessed pain that needs to be forgiven.

I think there's a danger in saying "if love feels painful you're doing this wrong" as often that's exactly the developmentally correct thing to be experiencing in order to get to the love underneath.

Comment by Matt Goldenberg (mr-hire) on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-10T12:01:49.243Z · LW · GW

I couldn't pass an ITT for advocates of Islam or extrasensory perception. On the one hand, this does represent a distinct deficit in my ability to model what the advocates of these ideas are thinking, a tragic gap in my comprehension of reality, which I would hope to remedy in the Glorious Transhumanist Future if that were a real thing. On the other hand, facing the constraints of our world, my inability to pass an ITT for Islam or ESP seems ... basically fine? I already have strong reasons to doubt the existence of ontologically fundamental mental entities. I accept my ignorance of the reasons someone might postulate otherwise, not out of contempt, but because I just don't have the time.

I think there's a hidden or assumed goal here that I don't understand. The goal clearly isn't truth for its own sake, because then there wouldn't be a distinction between the truth of what they believe and the truth of what's real. You can of course make a distinction such as Simulacra levels, but ultimately it's all part of the territory.

If the goal is instrumental ability to impact the world, I think a good portion of the time it's as important to understand people's beliefs as the reality, because a good portion of the time your impact will be based not just on knowing the truth, but on convincing others to change their actions or beliefs.

So what actually is the goal you are after?

Comment by Matt Goldenberg (mr-hire) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T17:44:25.811Z · LW · GW

I think this post has decent financial advice if you believe in near-term GAI.

https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2023-12-31T05:06:17.138Z · LW · GW

I dunno, I still think Bitcoin is actually a good store of value and hedge against problems in fiat currency. Probably as good a bet as gold at this point.

Comment by Matt Goldenberg (mr-hire) on Stupid Questions - April 2023 · 2023-12-27T16:39:24.538Z · LW · GW

I think one of the things rationalists try to do is take the numbers seriously from a consequentialist/utilitarian perspective. This means that even if there's a small chance of doom, you should put vast resources towards preventing it since the expected loss is high.

I think this makes people think that the expectations of doom in the community are much higher than they actually are, because the expected value of preventing doom is so high. 
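As a toy illustration of that asymmetry (all numbers made up):

```python
p_doom = 0.02           # a "small" probability, made up
stakes = 8e9 * 1e6      # something astronomical at risk, made-up units
expected_loss = p_doom * stakes
# 1.6e14 -- dwarfs any plausible prevention budget, so the EV argument
# goes through even at low probabilities.
print(f"expected loss: {expected_loss:.1e}")
```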

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T17:24:40.633Z · LW · GW

I'm curious about the downvotes to these comments. Do people think they weren't adding to the discussion?

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T16:24:26.643Z · LW · GW

I don't think I would describe Hattie's thinking as shoddy; he's one of the more careful thinkers in educational theory, both doing a careful review of the literature and then testing his insights by implementing them in schools and being careful with the results. Of course, when you want your material implemented there are tradeoffs you make between implementability and depth. But your assessment seems premature based on looking at my one comment.

It's true that you're doing transfer with the previous skills that already went through the process while doing surface with new skills that are going through, but I don't think that means it comes first. It comes last for those previous skills. If you want more specifics you can of course read Hattie.

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-25T03:45:05.999Z · LW · GW

In general I've found it practically very useful for my own learning, which is reason enough for me. 

The model itself fell out of Hattie's work trying to pull together all of the important meta-studies in education and understand the data: he found that by applying these 3 stages, he could better understand why certain interventions were effective in some cases and not others.

I would be very surprised if there are clear delineated boundaries between the 3, as such an abstraction rarely corresponds to reality so cleanly.  And yet I still find it an incredibly useful model.

Comment by Matt Goldenberg (mr-hire) on align your latent spaces · 2023-12-24T22:08:10.318Z · LW · GW

The Hattie model of learning posits surface -> deep -> transfer learning as a general rule for how learning progresses. I suspect that flashcards are excellent for surface learning, and the integration you're talking about is transfer learning.  It's possible that you could try to skip straight to transfer learning, but I suspect it would actually take longer, as you'd be using transfer learning methods to get the surface and deep learning done.

Comment by Matt Goldenberg (mr-hire) on AI Girlfriends Won't Matter Much · 2023-12-24T16:57:54.042Z · LW · GW

I think the main effect will be AI boyfriends, which aren't already saturated.

Comment by Matt Goldenberg (mr-hire) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-20T06:41:33.671Z · LW · GW

Well, I don't actually know what "crybullying" or "sociosexuality" mean, but I definitely know that male sociopaths make use of reputation destruction.

Comment by Matt Goldenberg (mr-hire) on Effective Aspersions: How the Nonlinear Investigation Went Wrong · 2023-12-19T21:46:52.028Z · LW · GW

This felt unnecessarily gendered to me.  There are obviously masculine manipulative sociopaths.

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2023-12-17T18:04:43.675Z · LW · GW

You're not going to just be able to stop the train at the moment the costs outweigh the benefits. The majority of negative consequences will most likely come from grey swans that won't show up in your nuanced calculations of costs and benefits.

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2023-12-16T16:34:29.118Z · LW · GW

I don't think anyone is saying this outright so I suppose I will - pushing forward the frontier on intelligence enhancement as a solution to alignment is not wise. The second order effects of pushing that particular frontier (both the capabilities and overton window) are disastrous, and our intelligence outpacing our wisdom is what got us into this mess in the first place.

Comment by Matt Goldenberg (mr-hire) on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-08T17:10:33.234Z · LW · GW

Nothing wrong with it, in fact I recommend it. But seeing oneself as a hero and persuading others of it will indeed be one of the main issues leading to hero worship.