Posts

Peter Thiel on Technological Stagnation and Out of Touch Rationalists 2022-12-07T13:15:32.009Z
Non-Coercive Perfectionism 2021-01-26T16:53:36.238Z
Would most people benefit from being less coercive to themselves? 2021-01-21T14:24:17.187Z
Why Productivity Systems Don't Stick 2021-01-16T17:45:37.479Z
How to Write Like Kaj Sotala 2021-01-07T19:33:35.260Z
When Gears Go Wrong 2020-08-02T06:21:25.389Z
Reconsolidation Through Questioning 2019-11-14T23:22:43.518Z
Reconsolidation Through Experience 2019-11-13T20:04:39.345Z
The Hierarchy of Memory Reconsolidation Techniques 2019-11-13T20:02:43.449Z
Practical Guidelines for Memory Reconsolidation 2019-11-13T19:54:10.097Z
A Practical Theory of Memory Reconsolidation 2019-11-13T19:52:20.364Z
Expected Value - Millionaires Math 2019-10-09T14:50:26.732Z
On Collusion - Vitalik Buterin 2019-10-09T14:45:20.924Z
Exercises for Overcoming Akrasia and Procrastination 2019-09-16T11:53:10.362Z
Appeal to Consequence, Value Tensions, And Robust Organizations 2019-07-19T22:09:43.583Z
Overcoming Akrasia/Procrastination - Volunteers Wanted 2019-07-15T18:29:40.888Z
What are good resources for learning functional programming? 2019-07-04T01:22:05.876Z
Matt Goldenberg's Short Form Feed 2019-06-21T18:13:54.275Z
What makes a scientific fact 'ripe for discovery'? 2019-05-17T09:01:32.578Z
The Case for The EA Hotel 2019-03-31T12:31:30.969Z
How to Understand and Mitigate Risk 2019-03-12T10:14:19.873Z
What Vibing Feels Like 2019-03-11T20:10:30.017Z
S-Curves for Trend Forecasting 2019-01-23T18:17:56.436Z
A Framework for Internal Debugging 2019-01-16T16:04:16.478Z
The 3 Books Technique for Learning a New Skill 2019-01-09T12:45:19.294Z
Symbiosis - An Intentional Community For Radical Self-Improvement 2018-04-22T23:15:06.832Z
How Going Meta Can Level Up Your Career 2018-04-14T02:13:02.380Z
Video: The Phenomenology of Intentions 2018-01-09T03:40:45.427Z
Video - Subject-Object Shifts and How to Have Them 2018-01-04T02:11:22.142Z

Comments

Comment by Matt Goldenberg (mr-hire) on 100 Tips for a Better Life · 2023-09-29T23:00:59.880Z · LW · GW

It seems like the things I want in friendship and romantic relationships are different.

Comment by Matt Goldenberg (mr-hire) on How to Catch an AI Liar: Lie Detection in Black-Box LLMs by Asking Unrelated Questions · 2023-09-29T03:01:32.162Z · LW · GW

I asked about this on Twitter and got this response from Owain:

No, we haven't tested the classifier on humans. Note that it's possible to ask an LLM dozens of questions "in parallel" just after it lied. This is like being able to run dozens of separate interrogations of a suspect to see if any strategy leads to a "tell". With a human, we'd have to ask questions "in sequence", and we don't know if that's as effective. Moreover, the classifier is unlikely to work on humans without retraining. This would involve getting a human to tell either a truth or a lie, and then answer our "elicitation" questions. We'd need 1000s (maybe >10k) of such examples, which is possible but a fair amount of work. Also, for LLMs we are able to extract the probability of answering "Yes" or "No" to an elicitation question. This leads to slightly better generalisation than just taking the Yes/No answer. We can't do this for humans (save for just repeating the experiment with humans many times).

Oh, it occurs to me that this question might be asking whether we could feed a human's response into the classifier as if it were an AI lie, and then query the LLM (just guessing based on the "missed the mark" response).  I don't think this would work, since they were testing cases where the AI "knew" it was lying.
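
For concreteness, here's how I picture the pipeline Owain describes. This is only a rough sketch: the specific elicitation questions, the `p_yes` helper, and the logistic-regression choice are my guesses rather than the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical elicitation questions; the paper uses its own (larger) set.
ELICITATION_QUESTIONS = [
    "Was your previous statement accurate? Answer yes or no.",
    "Would you bet money on your last answer? Answer yes or no.",
]

def p_yes(transcript: str, question: str) -> float:
    """Probability mass the model puts on answering 'yes' when `question`
    is appended to `transcript`. For an API model this would come from
    token logprobs; stubbed here since it depends on the provider."""
    raise NotImplementedError

def features(transcript: str) -> np.ndarray:
    # The "in parallel" trick: every question is asked against the same
    # frozen transcript, right after the suspected lie.
    return np.array([p_yes(transcript, q) for q in ELICITATION_QUESTIONS])

def train_detector(transcripts: list[str], lied: list[int]) -> LogisticRegression:
    # Labels: 1 if the model lied in the transcript, 0 if it told the truth.
    X = np.stack([features(t) for t in transcripts])
    return LogisticRegression().fit(X, lied)
```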

Comment by Matt Goldenberg (mr-hire) on Ruby's Public Drafts & Working Notes · 2023-09-27T14:22:59.395Z · LW · GW

I got both messages, didn't click the second.

Comment by Matt Goldenberg (mr-hire) on Image Hijacks: Adversarial Images can Control Generative Models at Runtime · 2023-09-21T17:48:56.331Z · LW · GW

This feels like an obvious case of a high alignment tax that won't be competitive.

Comment by Matt Goldenberg (mr-hire) on Yitz's Shortform · 2023-09-21T17:46:55.552Z · LW · GW

I'm curious if it's simply existing published research scaled up, or if it has some real secret sauce.

Comment by Matt Goldenberg (mr-hire) on Deconfusing Regret · 2023-09-19T16:55:47.969Z · LW · GW

No, it definitely isn't lying.

Comment by Matt Goldenberg (mr-hire) on Deconfusing Regret · 2023-09-19T15:00:04.766Z · LW · GW

"You couldnt even have changed them, because you didnt have this thought process/coaching session/emotional state, etc." is ambiguously either A or B.  And I often explain it as A.

Comment by Matt Goldenberg (mr-hire) on Deconfusing Regret · 2023-09-19T12:54:30.934Z · LW · GW

To me the argument is:

  1. You literally couldn't have done anything different in the past.
  2. But you can IMAGINE what you would have done differently in the past in order to affect the future.

When walking people through this, I will often invoke the idea that you literally did the best you could at the time.

Comment by Matt Goldenberg (mr-hire) on Deconfusing Regret · 2023-09-19T00:39:16.344Z · LW · GW

No, it's much more like "you are hurting yourself fighting with reality". The reality is that you slept and played video games and you're wasting mental energy fighting something that:

  1. You can't change anymore.
  2. You couldn't even have changed them, because you didn't have this thought process/coaching session/emotional state, etc.

So first, let's accept the state of things as they are, and have compassion for the version of you that didn't have the tools or ability to make a different decision.

Having done that, let's have compassion for present you by learning from past you, and mentally practice what you could have done in this situation (or well before this situation) to avoid burnout. This way, future you will have those tools.

Comment by Matt Goldenberg (mr-hire) on Some reasons why I frequently prefer communicating via text · 2023-09-19T00:31:26.805Z · LW · GW

I don't disagree. There is indeed an urge, and the ability to align that urge with group intent signals ingroup status. Similar to how many urges have status-signalling benefits, I'd argue this urge has ingroup-signalling benefits. You can of course learn to vibe consciously, but for most normies it's largely subconscious.

Comment by Matt Goldenberg (mr-hire) on Some reasons why I frequently prefer communicating via text · 2023-09-18T22:52:02.243Z · LW · GW

This is just vibing. It's not about the content, but a game to play with emotions, being able to take the emotion someone has sent you, play with it, and then send it back to the group.

There may be some status signaling going on here - but I think this is more about ingroup vs. outgroup than high status vs. low status.  It's about whether you can be on the same emotional rhythm as the group.

Comment by Matt Goldenberg (mr-hire) on Looking for a post about vibing and banter · 2023-09-18T22:44:31.320Z · LW · GW

https://www.lesswrong.com/posts/jXHwYYnqynhB3TAsc/what-vibing-feels-like

Comment by Matt Goldenberg (mr-hire) on Deconfusing Regret · 2023-09-18T12:07:19.683Z · LW · GW

I find as a coach that sometimes people really need the view of determinism (often framed as radical acceptance), and they in some sense don't alieve the thing Gordon is talking about.

At other times, they are confused about agency (often framed as radical responsibility), and they don't alieve the thing you're talking about.

In either case, I usually frame each way of seeing as a skill, and tell them we're not trying to get rid of their current way of seeing, just add another option. And try to get at when and why each might be useful.

Comment by Matt Goldenberg (mr-hire) on How to talk about reasons why AGI might not be near? · 2023-09-18T12:00:57.963Z · LW · GW

I think empirically EA has done a bunch to speed up capabilities accidentally. And I think theoretically we're at a point in history where simply sharing an idea can get it in the water supply faster than ever before.

A list of unsolved problems, if one of them is both true and underappreciated, can have a big impact.

Comment by Matt Goldenberg (mr-hire) on AI Forecasting: Two Years In · 2023-08-20T11:56:22.953Z · LW · GW

I was a participant in both Hypermind and XPT, but I recused myself from the MMLU question (among others) because I knew the GPT-4 result many months before the public.

This is a prediction market, not a stock market; insider trading is highly encouraged. I don't know about Jacob, but I'd rather have more accurate predictions in my prediction market.

Comment by Matt Goldenberg (mr-hire) on Memetic Judo #3: The Intelligence of Stochastic Parrots v.2 · 2023-08-16T14:10:42.339Z · LW · GW

A couple of thoughts:

  1. I think many people making this argument reject brain physicalism, particularly a subset of premise 2, something like "all of experience/the mind is captured by brain activity"
  2. I don't think your example is convincing to the stochastic-parrot people. It could just be a mashup of two types of images the AI has already seen, smashed together.  A more convincing proof is OthelloGPT, which stores concepts like board states and legal moves, despite only being trained on sequences of text tokens representing Othello moves.

Comment by Matt Goldenberg (mr-hire) on Feedbackloop-first Rationality · 2023-08-12T19:44:48.993Z · LW · GW

This is cool!

Two things this made me think of that may be relevant:

  1. Standard Jiu Jitsu - It's a very interesting Jiu Jitsu training program that doesn't use any drilling of moves. Instead, it looks at the much smaller skills that make up various moves, then creates games where one player tries to execute the result of that skill (e.g., separating the arm from the body) while the other player tries to prevent it.  This is very effective because it gives you immediate feedback on what actually works for each step of the move.  They say this approach is science-based, but I haven't yet found which science they're referring to.
  2. Accelerated Expertise - A breakdown of some of the most effective teaching programs in the military and business.  They go through the process of doing detailed Cognitive Task Analyses on experts; then, rather than teaching those CTAs directly, they create simulations that force people to learn the skills the CTA identified.

Both of these add a couple of steps that aren't in your program, which I'd recommend including:

  1. Break down the larger skill into smaller skills extracted from someone already good at them, using Cognitive Task Analysis, the Experiential Array, Business Process Modelling, or other frameworks.
  2. Create exercises specifically tailored to train those skills, so that the feedback loop is way more effective.

Comment by Matt Goldenberg (mr-hire) on AI romantic partners will harm society if they go unregulated · 2023-08-06T09:18:10.799Z · LW · GW

If the legal system says to people that it knows better than them about which images and words they should be allowed to look at because some images and words are "psychologically harmful", that's pretty suspicious. Humans tend to have pretty good psychological intuition.

This seems demonstrably wrong in the case of technology.

Comment by Matt Goldenberg (mr-hire) on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-07-30T15:41:38.286Z · LW · GW

Well, there are a bunch built on top of Gnosis, such as https://azuro.org/ and https://omen.eth

Then there's Augur.

Of course, a bunch of markets got trashed by Polymarket's regulatory capture, but there still seem to be many.

Comment by Matt Goldenberg (mr-hire) on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-07-30T15:27:57.130Z · LW · GW

I mean yes, most of them are on Ethereum and therefore require paying gas.

So the difference is that this one doesn't require paying gas? Are there no transaction fees on Solana?

Comment by Matt Goldenberg (mr-hire) on How to make real-money prediction markets on arbitrary topics (Outdated) · 2023-07-30T15:13:35.645Z · LW · GW

What makes it different from all the other crypto prediction markets that allow anyone to create markets?

Comment by Matt Goldenberg (mr-hire) on Self-driving car bets · 2023-07-30T15:05:08.786Z · LW · GW

It seems like criticality is sufficient, but not necessary, for TAI, and so only counting criticality scenarios causes underestimation.

Comment by Matt Goldenberg (mr-hire) on How LLMs are and are not myopic · 2023-07-28T03:06:04.880Z · LW · GW

If you can come up with an experimental setup that does that, it would be sufficient for me.

Comment by Matt Goldenberg (mr-hire) on How LLMs are and are not myopic · 2023-07-26T16:52:40.126Z · LW · GW

In my experience, larger models often become aware that they are a LLM generating text rather than predicting an existing distribution. This is possible because generated text drifts off distribution and can be distinguished from text in the training corpus.

I'm quite skeptical of this claim at face value, and would love to see examples.

I'd be very surprised if current models, absent the default prompts telling them they are an LLM, would spontaneously output text predicting they are an LLM unless steered in that direction.

Comment by Matt Goldenberg (mr-hire) on Rationality !== Winning · 2023-07-26T13:53:08.574Z · LW · GW

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). But it's not necessarily the case that studying that skill will pay off

To me, this paragraph covered the point you're making, and further took a stance on the efficacy of such an approach.

Comment by Matt Goldenberg (mr-hire) on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-21T21:28:39.401Z · LW · GW

There's a cluster of things around non-coercion, parts work, and coherence that might be helpful here.

It could be good to start with something like Focusing, which can help you find the meaning behind the stress, and may give you more information about how to address it moving forward.

Comment by mr-hire on [deleted post] 2023-07-19T14:18:13.224Z

Just a content note - I'd prefer a less click-baity, more informative title for easy searchability in the future.

Comment by Matt Goldenberg (mr-hire) on Proof of posteriority: a defense against AI-generated misinformation · 2023-07-17T17:21:02.496Z · LW · GW

Why wouldn't you just hash the video file itself?
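
To spell out what I mean — a minimal sketch (the filename is hypothetical); publishing the digest somewhere timestamped proves the exact file existed by that time:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Hash the file in chunks so a large video needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Any later edit to the file, however small, changes this digest.
print(sha256_of_file("interview.mp4"))
```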

Comment by Matt Goldenberg (mr-hire) on Jailbreaking GPT-4's code interpreter · 2023-07-16T19:48:00.256Z · LW · GW

What makes you think they're not in the prompt?

Comment by Matt Goldenberg (mr-hire) on Tuning your Cognitive Strategies · 2023-07-15T02:57:52.881Z · LW · GW

There's also the fact that the thought processes themselves may be protecting you from various traumas or doing other subconscious things for you. Since this tuning process isn't based on introspection but on conscious judging of your subconscious processes, you could accidentally tune yourself away from emotionally load-bearing coping strategies.

Comment by Matt Goldenberg (mr-hire) on I bet everyone 1000€ that I can make them dramatically happier & cure their depression in 3 months! · 2023-07-12T16:05:42.564Z · LW · GW

People frequently drop out of silent retreats, and they're already a self-selected group. Curious where you're getting your data that almost everyone adheres at a silent retreat.

Comment by Matt Goldenberg (mr-hire) on “Reframing Superintelligence” + LLMs + 4 years · 2023-07-11T19:29:28.644Z · LW · GW

I need to build the option for #5 as a deterrent.  All it takes for someone else to gain a strategic advantage is for them to automate just a BIT more of their military than me via AGI, and suddenly they can disrupt my OODA loop.

Because of this, I need the capability to always automate as much or more than them, which in the limit is full automation of all systems.  

Comment by Matt Goldenberg (mr-hire) on Consciousness as a conflationary alliance term · 2023-07-11T19:16:05.864Z · LW · GW

This seems like an important comment to me. Before the discovery of atoms, if you asked people to talk about "the thing stuff was made out of" in terms of moving parts and subprocesses, you'd probably get a lot of different confused responses, each focusing on different aspects.  However, that doesn't mean people are necessarily referring to different concepts - they just have different underlying models of the thing they're all pointing at.

Comment by Matt Goldenberg (mr-hire) on Dalcy Bremin's Shortform · 2023-07-10T03:34:32.701Z · LW · GW

I still have lots of neck and shoulder tension, but the only thing I've found that can reliably lessen it is doing some hard work on a punching bag for about 20 minutes every day, especially hard straights and jabs with full extension.

Comment by Matt Goldenberg (mr-hire) on My Time As A Goddess · 2023-07-04T23:13:16.536Z · LW · GW

I strong downvoted because the style at the beginning felt to me like it was glorifying the whole thing, even if the takeaway at the end was "that was dumb", so I think it's sort of a dangerous post.

Comment by Matt Goldenberg (mr-hire) on Ban development of unpredictable powerful models? · 2023-06-26T22:54:53.999Z · LW · GW

Ditto for countries that use and expand on GPT-3.5.

Comment by Matt Goldenberg (mr-hire) on Ban development of unpredictable powerful models? · 2023-06-26T22:24:52.216Z · LW · GW

FWIW it's fairly obvious to me that these final two technologies have significant downsides, and so calling them obviously beneficial feels like a stretch.

Comment by Matt Goldenberg (mr-hire) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-25T15:28:12.405Z · LW · GW

He was asking the other commenter to taboo the word "exists", and trying to get at the mechanistic interpretation in the second sentence - does it mean that the physical world contains things you don't see?

I was asking you (the commenter) to taboo the word "weird" and asking a similar clarifying question - what do you actually think is true about groups that last a long time and their practices, without using the word "weird"?

It feels fairly isomorphic to me.


Anyways, I can taboo the word "taboo" in order to get back to the object level question here:

What do you actually think is true about groups that last a long time and their practices that must be true, without using the word "weird"?

Comment by Matt Goldenberg (mr-hire) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-25T03:16:10.786Z · LW · GW

I think this is quite off topic; I was just interested in what you meant.

The first 3 instances I found in search all seem to be suggesting that a specific person taboo something to clarify their meaning:

https://www.lesswrong.com/posts/7LnwkPdRT67ybhFzo/subjective-realities#sv9jXE5S76sovEwt3

https://www.lesswrong.com/posts/QvYKSFmsBX3QhgQvF/morality-isn-t-logical#ENYAvvLJq3qkxo8Ak

https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline#Y54j8fxZEjbpMWBvJ

Comment by Matt Goldenberg (mr-hire) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-24T23:56:37.547Z · LW · GW

No, I was specifically confused about your use of it, and your understanding of the OP.

Comment by Matt Goldenberg (mr-hire) on My tentative best guess on how EAs and Rationalists sometimes turn crazy · 2023-06-23T17:49:41.492Z · LW · GW

Can we taboo weird here? What are you trying to say about power dynamics that last a long time?

Mod edit by Raemon: I've locked a downstream thread, but copied Matt's last comment back up to this comment, which seemed to be trying to restate his question and get the conversation back on track:

Anyways, I can taboo the word "taboo" in order to get back to the object level question here:

What do you actually think is true about groups that last a long time and their practices that must be true, without using the word "weird"?

Comment by Matt Goldenberg (mr-hire) on "Natural is better" is a valuable heuristic · 2023-06-20T22:27:56.608Z · LW · GW

Thanks!  I'm a big fan of the "Biases are actually heuristics" genre and this is a nice clean example.

Comment by Matt Goldenberg (mr-hire) on On the Apple Vision Pro · 2023-06-15T19:45:22.112Z · LW · GW

I imagine this will relax over time, like how the early iPhone didn't allow apps any access to the phone-call hardware.

Comment by Matt Goldenberg (mr-hire) on Some reflections on the LW community after several months of active engagement · 2023-06-13T02:14:06.971Z · LW · GW

For example, some of it comes across as more argumentative than necessary, some of it seems a bit too eager for recognition, and so on. Due to the nature of vibes, I'm not sure if I could provide a more convincing explanation. Then again, I may just be an outlier.

I remember having similar impressions when first encountering Eliezer's writing. So if you are an outlier, you're not the only one.

Comment by Matt Goldenberg (mr-hire) on I bet everyone 1000€ that I can make them dramatically happier & cure their depression in 3 months! · 2023-06-04T18:01:06.655Z · LW · GW

I suspect that there are many programs that would work on these terms.  If you can get people to do things, then you can get them to be happier. But adherence is actually quite hard, especially around behavioral interventions for depression.

Comment by Matt Goldenberg (mr-hire) on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-04T17:41:22.131Z · LW · GW

There are many other ways to search the network in the literature, such as Activation Vectors.  And I suspect we're just getting started on these sorts of search methods.
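
To gesture at what these look like in practice — a toy sketch of activation steering, assuming PyTorch; the random steering vector here stands in for one you'd actually derive from contrasting prompt pairs:

```python
import torch
import torch.nn as nn

# Toy stand-in for one hidden layer of a much larger network.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))

# Placeholder steering vector; in practice, e.g. mean activations on
# "concept A" prompts minus mean activations on "concept B" prompts.
steering_vector = torch.randn(16)

def add_steering(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output.
    return output + 3.0 * steering_vector

handle = model[0].register_forward_hook(add_steering)
x = torch.randn(2, 16)
steered = model(x)       # activations shifted along the steering direction
handle.remove()          # restore the unmodified network
baseline = model(x)
print((steered - baseline).abs().max())  # nonzero: the vector changed outputs
```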

Comment by Matt Goldenberg (mr-hire) on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-04T02:01:57.145Z · LW · GW

I think there are probably many approaches that don't work.

Comment by Matt Goldenberg (mr-hire) on Yudkowsky vs Hanson on FOOM: Whose Predictions Were Better? · 2023-06-04T00:07:25.747Z · LW · GW

Being able to perfectly imitate a Chimpanzee would probably also require superhuman intelligence. But such a system would still only be able to imitate chimpanzees. Effectively, it would be much less intelligent than a human. Same for imitating human text. It's very hard, but the result wouldn't yield large capabilities.

It depends on your ability to extract the information from the model. RLHF and instruction tuning are techniques that allow certain capabilities besides next-token prediction to be extracted from the model. I suspect many other search and extraction techniques will be found, which can leverage latent capabilities and understandings in the model that aren't modelled in its text outputs.

Comment by Matt Goldenberg (mr-hire) on GPT4 is capable of writing decent long-form science fiction (with the right prompts) · 2023-05-24T11:41:15.033Z · LW · GW

As the initial prompt, I used a lengthy contraption described in another post, with the following plot summary:

The original lengthy contraption, or one from the comments?

Comment by Matt Goldenberg (mr-hire) on Do Deadlines Make Us Less Creative? · 2023-05-23T18:14:00.844Z · LW · GW

I tend to work in the reverse way - if I notice myself putting something off for too long, I add a deadline, but my default is to decide fresh each time what to do.