Posts

Good ways to monetarily profit from the increasing demand for power? 2024-06-10T15:29:13.418Z
Mechanism for feature learning in neural networks and backpropagation-free machine learning models 2024-03-19T14:55:59.296Z
Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value- Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject - Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on Most smart and skilled people are outside of the EA/rationalist community: an analysis · 2024-07-13T06:46:14.577Z · LW · GW

median rationalist at roughly MENSA level. This still feels wrong to me: if they’re so smart, where are the nobel laureates? The famous physicists? And why does arguing on Lesswrong make me feel like banging my head against the wall?

I think you'd have to consider both Scott Aaronson and Tyler Cowen to be rationalist adjacent, and both are considered intellectual heavyweights

Dustin Moskovitz is EA adjacent, and again considered a heavyweight, though one applied to business rather than academia

Then there's the second point, but unfortunately I haven't seen any evidence that someone being smart makes them pleasant to argue with (the contrary, in fact)

Comment by Matt Goldenberg (mr-hire) on Reliable Sources: The Story of David Gerard · 2024-07-11T21:09:36.116Z · LW · GW

The whole first part of the article is about how this is wrong, due to the gaming of notable sources

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-18T11:30:11.088Z · LW · GW

One way to think about "forces beyond yourself" is as pointing to what it feels like to operate from a right-hemisphere-dominant mode, as described by Iain McGilchrist.

The language is deliberately designed to evoke that mode - so while I'll get more specific here, know that to experience the thing I'm talking about, you need to let go of the mind that wants this type of explanation.

When I'm talking about "Higher Forces" I'm talking about states of being that feel like something is moving through you - you're not a head controlling a body but rather you're first connecting to, then channeling, then becoming part of a larger universal force.

In my coaching work, I like to use Phil Stutz's idea of "Higher Forces" like Infinite Love, Forward Motion, Self-Expression, etc., as they're particularly suited to the modern Western mind.

Here's how Stutz defines the higher force of Self-Expression on his website:

"The Higher Force You’re Invoking: Self-Expression The force of Self-Expression allows us to reveal ourselves in a truthful, genuine way—without caring about others' approval. It speaks through us with unusual clarity and authority, but it also expresses itself nonverbally, like when an athlete is "in the zone." In adults, this force gets buried in the Shadow. Inner Authority, by connecting you to the Shadow, enables you to resurrect the force and have it flow through you."

Of course, religions also have names for these types of special states, calling them Muses, Jhanas, Direct Connection to God.

All of these states (while I can and do teach techniques, steps, and systems to invoke them) ultimately can only be accessed through surrender to the moment, faith in what's there, and letting go of a need for knowing.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-16T20:10:06.066Z · LW · GW

It's precisely when handing your life to forces beyond yourself (not Gods, that's just handing your life over to someone else) that you can avoid giving your life over to others/society.

"Souls" is metaphorical of course - not some essential, unchanging part of yourself, just a thing that actually matters, that moves you

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-06-16T12:16:26.233Z · LW · GW

In the early 2000s, we all thought the next productivity system would save us. If we could just follow Tim Ferriss's system and achieve a four-hour workweek, or adopt David Allen's "Getting Things Done" (GTD) methodology, everything would be better. We believed the grind would end.

In retrospect, this was our generation's first attempt at addressing the growing sacredness deficit disorder that was, and still is, ravaging our souls. It was a good distraction for a time—a psyop that convinced us that with the perfect productivity system, we could design the perfect lifestyle and achieve perfection.

However, the edges started to fray when put into action. Location-independent digital nomads turned out to be just as lonely as everyone else. The hyper-productive GTD enthusiasts still burned out.

For me, this era truly ended when Merlin Mann, the author of popular GTD innovations like the "hipster PDA" and "inbox zero," failed to publish his book. He had all the tools in the world and knew all the systems. But when it mattered—when it came to building something from his soul that would stand the test of time—it didn't make a difference.

Merlin wrote a beautiful essay about this failure called "Cranking" (https://43folders.com/2011/04/22/cranking). He mused on the sterile, machine-like crank that would move his father's bed when he could no longer walk. He compared this to the sterile, machine-like systems he used to get himself to write, not knowing what he was writing or why, just turning the crank.

No amount of cranking could reconnect him to the sacred. No system or steps could ensure that the book he was writing would touch your soul, or his. So instead of sending his book draft to the editor, he sent the essay.

Reading that essay did something to me, and I think it marked a shift that many others who grew up in the "productivity systems" era experienced. It's a shift that many caught up in the current crop of "protocols" from the likes of Andrew Huberman and Bryan Johnson will go through in the next few years—a realization that the sacred can't be reached through a set of steps, systems, lists, or protocols.

At best, those systems can point towards something that must be surrendered to in mystery and faith. No amount of cranking will ever get you there, and no productivity system will save you. Only through complete devotion or complete surrender to forces beyond yourself will you find it.

Comment by Matt Goldenberg (mr-hire) on "Metastrategic Brainstorming", a core building-block skill · 2024-06-11T13:05:04.286Z · LW · GW

I just realized that this then brings up the problem of "oh, but what's the meta-meta-strategy I use?", but I think there's just an element of taste to this.

Comment by Matt Goldenberg (mr-hire) on "Metastrategic Brainstorming", a core building-block skill · 2024-06-11T12:36:26.432Z · LW · GW

One thing to note - brainstorming is itself a meta-strategy for generating meta-strategic approaches, and it may or may not be the best one at a given point in a problem.

Brainstorming for me has a particular flavor - it's helpful when I have a lot of ideas but don't know where to start, or when it feels like my mind just needs the starter cord pulled a few times.

Other times, I get a lot more out of taking a walk and letting my mind wander around the problem - not specifically listing out lanes of attack, but sort of holding the intention that one may show up as I think in a free-associative way and walk.

Other times it's helpful for me to have a conversation with a friend, especially one who I can see has the right mind-shape to frame this sort of problem.

Other times it's helpful to specifically look through the list of meta-strategies I have, wandering around my Roam and seeing how different mental models and frameworks can frame the problem.

 

I guess what I'm saying is, it's helpful to separate the move of "oh, it's time to figure out what meta-strategy I can use" from "oh, it's time to brainstorm"

Comment by Matt Goldenberg (mr-hire) on Good ways to monetarily profit from the increasing demand for power? · 2024-06-11T03:20:19.766Z · LW · GW

For those who disagreed, I'd love to be linked to convincing arguments to the contrary!

Comment by Matt Goldenberg (mr-hire) on Good ways to monetarily profit from the increasing demand for power? · 2024-06-10T17:24:44.634Z · LW · GW

I've heard several people who should know (Musk, Aschenbrenner) make detailed cases that seem right, and haven't heard any convincing arguments to the contrary.

Comment by Matt Goldenberg (mr-hire) on The Data Wall is Important · 2024-06-10T09:42:11.218Z · LW · GW

But once they break the data-wall, competitors are presumably gonna copy their method.

Is the assumption here that corporate espionage is efficient enough in the AI space that inventing entirely novel methods of training doesn't give much of a competitive advantage?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T20:31:40.843Z · LW · GW

i don't think the constraint is that energy is too expensive? i think we just literally don't have enough of it concentrated in one place

but i have no idea actually

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-04-24T14:10:36.848Z · LW · GW

Zuck and Musk point to energy as a quickly approaching deep learning bottleneck over and above compute.

This to me seems like it could slow takeoff substantially and effectively create a wall for a long time.

Best arguments against this?

Comment by Matt Goldenberg (mr-hire) on Is there software to practice reading expressions? · 2024-04-23T22:38:16.290Z · LW · GW

Paul Ekman's software is decent. When I used it (before it was a SaaS, just a CD) it basically flashed an expression for a moment, then went back to a neutral pic. After some training it did help me identify micro-expressions in people
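
For anyone curious, that training mechanic is simple enough to sketch. This is a hypothetical toy version of the flash-then-neutral loop, not Ekman's actual software - the image filenames, key bindings, and 200ms flash duration are all placeholder assumptions:

```python
# Toy expression-flash trainer in the style described above.
# You'd supply your own labeled images; these filenames are placeholders.
import random
import tkinter as tk
from PIL import Image, ImageTk  # pip install pillow

EXPRESSIONS = {"anger": "anger.png", "fear": "fear.png", "disgust": "disgust.png"}
NEUTRAL = "neutral.png"
FLASH_MS = 200  # roughly the duration of a micro-expression

class FlashTrainer:
    def __init__(self, root):
        self.root = root
        self.label = tk.Label(root)
        self.label.pack()
        self.answer = None
        root.bind("<Key>", self.on_key)
        self.next_trial()

    def show(self, path):
        img = ImageTk.PhotoImage(Image.open(path))
        self.label.configure(image=img)
        self.label.image = img  # keep a reference so tkinter doesn't drop it

    def next_trial(self):
        # flash a random expression, then snap back to the neutral pic
        self.answer = random.choice(list(EXPRESSIONS))
        self.show(EXPRESSIONS[self.answer])
        self.root.after(FLASH_MS, lambda: self.show(NEUTRAL))

    def on_key(self, event):
        # a = anger, f = fear, d = disgust
        guess = {"a": "anger", "f": "fear", "d": "disgust"}.get(event.char)
        if guess:
            print("correct!" if guess == self.answer else f"no - it was {self.answer}")
            self.next_trial()

root = tk.Tk()
FlashTrainer(root)
root.mainloop()
```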

Comment by Matt Goldenberg (mr-hire) on Mid-conditional love · 2024-04-23T22:08:47.333Z · LW · GW

People talk about unconditional love and conditional love. Maybe I’m out of the loop regarding the great loves going on around me, but my guess is that love is extremely rarely unconditional. Or at least if it is, then it is either very broadly applied or somewhat confused or strange: if you love me unconditionally, presumably you love everything else as well, since it is only conditions that separate me from the worms.

 

Yes, this is my experience of cultivating unconditional love - it loves everything, without a target. It doesn't feel confused or strange, just like I am love, and my experience, e.g. cultivating it in coaching, is that people like being in the presence of such love.

It's also very helpful for people to experience conditional love! In particular of the type "I've looked at you, truly seen you, and loved you for that."

IME both of these loves feel pure and powerful from both sides, and neither of them are related to being attached, being pulled towards or pushed away from people.

 

It feels like maybe we're using the word love very differently?

Comment by Matt Goldenberg (mr-hire) on Fabien's Shortform · 2024-04-12T01:53:19.223Z · LW · GW

Both causal.app and getguesstimate.com have pretty good Monte Carlo UIs
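
For anyone who hasn't used them, here's a minimal sketch of the kind of Monte Carlo estimate those tools put a UI on top of - the scenario and numbers are made up for illustration:

```python
# Monte Carlo estimate with uncertain inputs instead of point estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up business scenario: profit from uncertain volume, price, and cost
units_sold = rng.lognormal(mean=np.log(1000), sigma=0.5, size=n)
price = rng.normal(loc=20, scale=3, size=n)
cost = rng.normal(loc=12, scale=2, size=n)

profit = units_sold * (price - cost)

print(f"median profit: {np.median(profit):,.0f}")
print(f"90% interval: {np.percentile(profit, 5):,.0f} to {np.percentile(profit, 95):,.0f}")
print(f"P(loss): {(profit < 0).mean():.1%}")
```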

Comment by Matt Goldenberg (mr-hire) on Best in Class Life Improvement · 2024-04-04T17:40:20.186Z · LW · GW

IME there is a real effect where nicotine acts as a gateway drug to tobacco or vaping

in general this whole post seems to make the mistake of saying 'a common second-order effect of this thing is doing it in a way that will get you addicted - so don't do that', which is just such an obvious failure mode that to call it a Chesterton's fence is generous

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T20:09:57.352Z · LW · GW

The question is - how far can we get with in-context learning? If we filled Gemini's 10 million tokens with Sudoku rules and examples, showing where it went wrong each time, would it generalize? I'm not sure, but I think it's possible
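
A hypothetical sketch of what that experiment could look like - `query_model` and `check_solution` are stand-ins for whatever model API and Sudoku checker you'd use, not real libraries:

```python
# In-context learning loop: keep appending attempts and corrections to one
# long context and see whether later puzzles get solved more often.
def run_experiment(rules: str, puzzles: list[str], query_model, check_solution):
    context = rules  # start the context with the rules of Sudoku
    for puzzle in puzzles:
        context += f"\n\nPuzzle:\n{puzzle}\nSolution:"
        attempt = query_model(context)
        errors = check_solution(puzzle, attempt)
        if errors:
            # show the model exactly where it went wrong, in-context
            context += f"\n{attempt}\nThat was wrong: {errors}"
        else:
            context += f"\n{attempt}\nCorrect."
    return context  # the final context doubles as a transcript of the learning curve
```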

Comment by Matt Goldenberg (mr-hire) on Modern Transformers are AGI, and Human-Level · 2024-03-27T16:27:13.273Z · LW · GW

It seems likely to me that you could create a prompt that would have a transformer do this.

Comment by Matt Goldenberg (mr-hire) on Daniel Kokotajlo's Shortform · 2024-03-26T15:25:15.289Z · LW · GW

i like coase's work on transaction costs as an explanation here

coase is an unusually clear thinker and writer, and i recommend reading through some of his papers

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:44:07.877Z · LW · GW

i just don't see the buddha making any reference to nervous systems or mammals when he talks about suffering (not even some sort of pali equivalent that points to the materialist understanding at the time)

Comment by Matt Goldenberg (mr-hire) on Should rationalists be spiritual / Spirituality as overcoming delusion · 2024-03-26T14:00:15.259Z · LW · GW

? TBC I think the claims about suffering in Buddhism are claims about how our mammalian nervous systems happen to be wired and ways you can improve it.

 

This seems like quite a western modern take on buddhism

it feels hard to read the original buddha this way

Comment by Matt Goldenberg (mr-hire) on General Thoughts on Secular Solstice · 2024-03-25T03:17:05.742Z · LW · GW

Compare "the world will be exactly as it has been in the past" with "the world will always be exactly as it is in this moment"

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T15:54:08.533Z · LW · GW

it's true, but I don't think there's anything fundamental preventing the same sort of proliferation and advances in open source LLMs that we've seen in stable diffusion (aside from the fact that LLMs aren't as useful for porn). that it has been relatively tame so far doesn't change the basic pattern of how open source affects the growth of technology

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-24T10:18:53.965Z · LW · GW

yeah, it's much less likely now

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-23T15:02:06.258Z · LW · GW

it doesn't seem like that's the case to me - but even if it were the case, isn't that moving the goal posts of the original post?

I don't think time-to-AGI got shortened at all.

Comment by Matt Goldenberg (mr-hire) on D0TheMath's Shortform · 2024-03-23T14:26:10.885Z · LW · GW

The classic effect of open sourcing is to hasten the commoditization and standardization of the component, which then allows an explosion of innovation on top of that stable base.

If you look at what's happened with Stable Diffusion, this is exactly what we see. While it's never been a cutting-edge model (until soon, with SD3), there's been an explosion of capability advances in image model generation from it: ControlNet, best practices for LoRA training, model merging, and techniques for consistent characters and animation, all coming out of the open source community.

In LLM land, though not as drastic, we see similar things happening - in particular, techniques for merging models to get rapid capability advances, and rapid creation of new patterns for agent interactions and tool use.

So while the models themselves might not be state of the art, open sourcing the models obviously pushes the state of the art.

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-18T05:41:53.417Z · LW · GW

I think most people have short term, medium term, and long term goals. E.g., right about now many people probably have the goal of doing their taxes, and depending on their situation those may match many of your desiderata.

I used to put a lot of effort into creating exercises, simulations, and scenarios that matched up with various skills I was teaching, but ultimately found it much more effective to just say "look at your todo list, and find something that causes overwhelm". Deliberate practice consists of finding a thing that causes overwhelm, seeing how to overcome that overwhelm, working for two minutes, then finding another task that induces overwhelm. I also use past examples, imagining in detail what it would have been like to act in this different way.

You're operating in a slightly different domain, but I still imagine people have plenty of problems and sub-problems in either their life or research where the things you're teaching apply, and you can scope them small enough to get tighter feedback loops.

Comment by Matt Goldenberg (mr-hire) on Raemon's Shortform · 2024-03-17T21:29:29.515Z · LW · GW

Why not just have people spend some time working with their existing goals?

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-13T06:34:08.026Z · LW · GW

I usually explain my process these days to clients with the acronym LIFE

Learn New Tools
Integrate Resistance
Forge an Identity
Express Yourself

Learn New Tools is cognitive-emotional strategies, of which TYCS is an example. FWIW, some of TYCS is actually deliberate practice to discover cognitive strategies (as compared to something like CFAR, which extracts and teaches them directly), but the result is the same.

The important thing is to just have a clear tool, give people something they know they can use in certain situations, that works immediately to solve their problems.

But the thing is, people don't use them, because they have resistance. That's where parts work and other resistance integration tools come into play.

Even when that's done, there's still the issue that you don't automatically use the techniques. This is where Forge an Identity comes in, where you use identity-change techniques to bring the way you see yourself into alignment with the way of being that the technique brings out. (This is one thing TYCS gets wrong in my opinion, trying to directly reinforce the cognitive strategies instead of creating an identity and reinforcing the strategies as affirming that identity.)

Finally, that identity needs to propagate to every area of your life, so there are no situations where you fail to use the technique and way of being. This is just a process of looking at each area, seeing where it's not in alignment with the identity, then deliberately taking an action to bring it to that area.

IME all of these pieces are needed to make a life change from a technique, although it's rarely as linear as I describe it.

Comment by Matt Goldenberg (mr-hire) on "How could I have thought that faster?" · 2024-03-12T00:54:16.738Z · LW · GW

The way I do this with my clients is that we train cognitive tools first, then find the resistance to those habits and work on it using parts work

Comment by Matt Goldenberg (mr-hire) on leogao's Shortform · 2024-03-09T16:06:12.963Z · LW · GW

can you give examples?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-29T16:44:08.476Z · LW · GW

I can hover over quick takes to get the full comment, but not popular comments.

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-29T03:39:03.460Z · LW · GW

Why not show the top-rated review, like you do at the top of the page?

Comment by Matt Goldenberg (mr-hire) on New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2024-02-28T22:10:46.211Z · LW · GW

The art change is pretty distracting, and having to hover to see the author is also a bummer, plus no way to get a summary (that I can see).

It's seemingly optimized for a "judge a book by its cover" type of thing where I click around until I see a title and image I like

Comment by Matt Goldenberg (mr-hire) on How I internalized my achievements to better deal with negative feelings · 2024-02-27T23:21:08.338Z · LW · GW

Appreciated this writeup.

How long have you been using the tool, and do you find any resistance to using it?

Do you always assume the underlying issue, or do you do Focusing each time? Or do you find a contradictory experience through intuition, without knowing why it works?

Comment by Matt Goldenberg (mr-hire) on How I build and run behavioral interviews · 2024-02-26T16:26:47.026Z · LW · GW

I haven't looked into this recently, but last time I looked at the literature behavioral interviews were far more predictive of job performance than other interviewing methods.

It's possible that they've become less predictive as people started preparing for them more.

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T22:48:54.277Z · LW · GW

Thanks. Appreciate this. I'm going to give another shot at writing this

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:37:27.518Z · LW · GW

Request for feedback: Do I sound like a raving lunatic above?

Comment by Matt Goldenberg (mr-hire) on Matt Goldenberg's Short Form Feed · 2024-02-24T21:36:28.194Z · LW · GW

A surprising thing I've found as I begin to study and integrate skillful coercive motivation is the centrality of belief in providence and faith to this way of motivating yourself. Here are some central examples: the first from The War of Art, the second from The Tools, the third from David Goggins. These aren't cherry-picked (this is a whole section of The War of Art and a whole chapter of The Tools).

[Three images: excerpts on faith and providence from The War of Art, The Tools, and David Goggins]

This has interesting implications given that as a society (at least in America) we've historically been motivated by this type of masculine, apollonian motivation - but have increasingly let go of faith in higher powers as a tenet of our central religion, secular humanism. This means the core motivation that drives us to build, create, transcend our nature... is running on fumes. We are motivated by gratitude, w/o a sense of to what or whom we should be grateful, told to follow our calling w/o a sense of who is calling.

We've tried to hide this contradiction. Our seminaries separate our twin Religions (Secular Humanism and Scientific Materialism) into STEM and humanities tracks to hide that what motivates The Humanities to create is invalidated by the philosophy that allows STEM to discover. But this is crumbling; the cold philosophy of scientific materialism is eroding the shaky foundations that allow secular humanists to connect to these higher forces - this is one of the drivers of the meaning crisis.

I don't really see any way we can make it through the challenges we're facing with these powerful new technologies w/o a new religion that connects us to the mystical, truly wise core that allows us to be motivated towards what's good and true. This is exactly what Marc Gafni is trying to do with Cosmo-Erotic Humanism, and what Monastic Academy is trying to do with a new, mystical form of dataism - but both of these projects are moonshots to massively change the direction of culture.

Comment by Matt Goldenberg (mr-hire) on The Gemini Incident · 2024-02-23T19:11:49.965Z · LW · GW

Or invisible?

Comment by Matt Goldenberg (mr-hire) on I played the AI box game as the Gatekeeper — and lost · 2024-02-12T22:37:17.539Z · LW · GW

The original reasoning that Eliezer gave, if I remember correctly, was that it's better to make people realize there are unknown unknowns, rather than taking one specific strategy and saying "oh, I know how I would have stopped that particular strategy"

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-12T00:15:22.831Z · LW · GW

Some quick calculations from ChatGPT put the value of a British government bond (Britain being the world power then) at about equal to the value of gold, assuming a fixed interest rate of 3%, with gold coming out slightly ahead.

I haven't really checked these calculations, but they pass the sniff test (except the part where ChatGPT tried to adjust today's dollars for inflation).
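
The bond side of that back-of-envelope is just compound interest; here's a minimal sketch, assuming the fixed 3% coupon is reinvested throughout (the gold side then depends entirely on which historical prices you plug in, which is where these quick calculations get shaky):

```python
# £1 compounded at a fixed 3% for 200 years, coupons reinvested
years = 200
multiple = 1.03 ** years
print(f"£1 in a 3% bond for {years} years -> £{multiple:,.0f}")  # ~£369
```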

 

Comment by Matt Goldenberg (mr-hire) on Upgrading the AI Safety Community · 2024-02-10T13:50:51.146Z · LW · GW

This is not what I mean by wisdom.

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2024-02-02T17:01:28.224Z · LW · GW

Compared to what? My guess is it's a better bet than most currencies during that time, aside from a few winners that would have been hard to predict ahead of time. E.g., if 200 years ago you had taken the most powerful countries and their currencies, and put your money into those, I predict you'd have done much worse than with gold.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-12T02:31:33.062Z · LW · GW

I think oftentimes what's needed to let go of grief is to stop pushing it away; in doing that, it may be felt more fully, and once the message is received, you can let it go. This process may involve fully feeling pain that you were suppressing.

Comment by Matt Goldenberg (mr-hire) on Universal Love Integration Test: Hitler · 2024-01-11T20:25:00.017Z · LW · GW

It doesn't hurt the way pity or lamenting might; there's no grief in it, just well-wishing.

While true, I think there's a caveat that often the thing preventing the feeling of true love from coming forth can be unprocessed grief that needs to be felt, or unprocessed pain that needs to be forgiven.

I think there's a danger in saying "if love feels painful you're doing this wrong" as often that's exactly the developmentally correct thing to be experiencing in order to get to the love underneath.

Comment by Matt Goldenberg (mr-hire) on On the Contrary, Steelmanning Is Normal; ITT-Passing Is Niche · 2024-01-10T12:01:49.243Z · LW · GW

I couldn't pass an ITT for advocates of Islam or extrasensory perception. On the one hand, this does represent a distinct deficit in my ability to model what the advocates of these ideas are thinking, a tragic gap in my comprehension of reality, which I would hope to remedy in the Glorious Transhumanist Future if that were a real thing. On the other hand, facing the constraints of our world, my inability to pass an ITT for Islam or ESP seems ... basically fine? I already have strong reasons to doubt the existence of ontologically fundamental mental entities. I accept my ignorance of the reasons someone might postulate otherwise, not out of contempt, but because I just don't have the time.

I think there's a hidden or assumed goal here that I don't understand. The goal clearly isn't truth for its own sake, because then there wouldn't be a distinction between the truth of what they believe and the truth of what's real. You can of course make a distinction such as Simulacra levels, but ultimately it's all part of the territory.

If the goal is instrumental ability to impact the world, I think a good portion of the time it's as important to understand people's beliefs as the reality, because a good portion of the time your impact will be based not just on knowing the truth, but on convincing others to change their actions or beliefs.

So what actually is the goal you are after?

Comment by Matt Goldenberg (mr-hire) on What should a non-genius do in the face of rapid progress in GAI to ensure a decent life? · 2024-01-04T17:44:25.811Z · LW · GW

I think this post has decent financial advice if you believe in near term GAI.

 

https://www.lesswrong.com/posts/CTBta9i8sav7tjC2r/how-to-hopefully-ethically-make-money-off-of-agi

Comment by Matt Goldenberg (mr-hire) on Prediction Markets aren't Magic · 2023-12-31T05:06:17.138Z · LW · GW

I dunno, I still think Bitcoin is actually a good store of value and hedge against problems in fiat currency. Probably as good a bet as gold at this point.

Comment by Matt Goldenberg (mr-hire) on Stupid Questions - April 2023 · 2023-12-27T16:39:24.538Z · LW · GW

I think one of the things rationalists try to do is take the numbers seriously from a consequentialist/utilitarian perspective. This means that even if there's a small chance of doom, you should put vast resources towards preventing it since the expected loss is high.
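
As a toy version of that calculation - the numbers here are illustrative assumptions, not anyone's actual estimates:

```python
# Expected loss = probability of catastrophe x magnitude of the loss
p_doom = 0.05                # a "small" chance of doom
loss = 8e9 * 1e6             # stylized: 8 billion lives at $1M statistical value each
expected_loss = p_doom * loss
print(f"expected loss: ${expected_loss:,.0f}")  # $400 trillion, even at 5%
```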

I think this makes people think that the expectations of doom in the community are much higher than they actually are, because the expected value of preventing doom is so high.