Comments
I think almost all startups are really great! Only a very small set of startups end up being harmful for the world.
I think you're kind of avoiding the question. What startups are really great for AI safety?
In American English (AE), "quite" is an intensifier, while in British English (BE) it's a mild deintensifier.
This does depend on context. In formal or old-fashioned British English, "quite" is also an intensifier. For example:
"Sir, you quite misunderstand me," said Mrs. Bennet, alarmed.
from Pride and Prejudice by Jane Austen.
"Graft" implies corruption in AE but hard work in BE.
I think "graft" also often implies corruption in British English.
Rationalist twitter rage-bait recipe:
Rationalist: *reasonable, highly decoupling point about the holocaust*
Everyone: *highly coupling rage*
Rationalist: *shocked pikachu face*
Then there’s the AI regulation activists and lobbyists. They lobby and protest and stuff, pretending like they’re pushing for regulations on AI, but really they’re mostly networking and trying to improve their social status with DC People.
The activists and the lobbyists are two very different groups. The activists are not trying to network with the DC people (yet). Unless you mean Encode, who I would call lobbyists, not activists.
If the animal-specific features form an overcomplete basis, isn't the set of animals + attributes just an even more overcomplete basis?
From the Caro biography, it's pretty clear Lyndon Johnson had extraordinary political talent.
You accidentally touch a hot stove and don't feel any pain. It's been months since your sensory inputs have congealed into pain.
Is this something you have achieved? Could you give more details about what this means?
- If you touch a hot stove will you reflexively remove your hand?
- If I inflicted on you what would be extreme physical pain for most people (not physically damaging; capsaicin, say), would this be at worst a mild annoyance to you?
- Do you ever take painkillers? Would you in an extreme situation like a medical operation?
This is the sort of content I come to LessWrong for.
Does this still seem wrong to you?
Yes. I plan to write down my views properly at some point. But roughly I subscribe to non-cognitivism.
Moral questions are not well defined because they are written in ambiguous natural language, so they are not truth-apt. Now you could argue that many reasonable questions are also ambiguous in this sense. Eg the question "how many people live in Sweden" is ultimately ambiguous because it is not written in a formal system (ie. the borders of Sweden are not defined down to the atomic level).
But you could in theory define the Sweden question in formal terms. You could arbitrarily stipulate how many nanoseconds after conception a fetus becomes a person and resolve all other ambiguities until the only work left would be empirical measurement of a well defined quantity.
And technically you could do the same for any moral question. But unlike the Sweden question, it would be hard to pick formal definitions that everyone can agree are reasonable. You could try to formally define the terms in "what should our values be?". Then the philosophical question becomes "what is the formal definition of 'should'?". But this suffers the same ambiguity. So then you must define that question. And so on in an endless recursion. It seems to me that there cannot be any One True resolution to this. At some point you just have to arbitrarily pick some definitions.
The underlying philosophy here is that I think for a question to be one on which you can make progress, it must be one in which some answers can be shown to be correct and others incorrect. ie. questions where two people who disagree in good faith will reliably converge by understanding each other's view. Questions where two aliens from different civilizations can reliably give the same answer without communicating. And the only questions like this seem to be those defined in formal systems.
Choosing definitions does not seem like such a set of questions. So resolving the ambiguities in moral questions is not something on which progress can be made. So we will never finally arrive at the One True answer to moral questions.
The unemployment pool that resulted from this efficiency wage made it easier to discipline officers by moving them back to the captains list.
I don't understand this point or how it explains captains' willingness to fight.
the One True Form of Moral Progress
Have you written about this? This sounds very wrong to me.
DeepMind says boo SAEs, now Anthropic says yay SAEs![1]
Reading this paper pushed me a fair amount in the yay direction. We may still be at the unsatisfying level where we can only say "this cluster of features seems to roughly correlate with this type of thing" and "the interaction between this cluster and this cluster seems to mostly explain this loose group of behaviors". But it looks like we're actually pointing at real things in the model. And therefore we are beginning to be able to decompose the computation of LLMs in meaningful ways. The Addition Case Study is seriously cool and feels like a true insight into the model's internal algorithms.
Maybe we will further decompose these explanations until we can get down to satisfying low-level descriptions like "this mathematical object is computed by this function and is used in this algorithm". Even if we could still interpret circuits at this level of abstraction, humans probably couldn't hold in their heads all the relevant parts of a single forward pass at once. But AIs could, or maybe that won't be required for useful applications.
The prominent error terms and simplifying assumptions are worrying, but maybe throwing enough compute and hill-climbing research at the problem will eventually shrink them to acceptable sizes. It's notable that this paper contains very few novel conceptual ideas and is mostly just a triumph of engineering schlep, massive compute and painstaking manual analysis.
[1] This is obviously a straw man of both sides. They seem to be thinking about it from pretty different perspectives. DeepMind is roughly judging them by their immediate usefulness in applications, while Anthropic is looking at them as a stepping stone towards ambitious moonshot interp.
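For concreteness, here is a minimal sketch of the kind of sparse autoencoder setup being argued about. The class name, dimensions, and L1 coefficient are illustrative placeholders of mine, not anything taken from either lab's work.

```python
# Minimal sparse autoencoder (SAE) sketch: learn to decompose model activations
# into a wider, sparser set of features. All names and numbers are illustrative.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, acts: torch.Tensor):
        features = torch.relu(self.encoder(acts))  # sparse feature activations
        reconstruction = self.decoder(features)    # attempt to rebuild the original activations
        return features, reconstruction

sae = SparseAutoencoder(d_model=768, d_features=16384)
acts = torch.randn(64, 768)  # stand-in for residual-stream activations
features, recon = sae(acts)

# Training objective: reconstruction error plus an L1 penalty that pushes most
# feature activations to zero; the sparsity is what makes the learned features
# candidate units of interpretation.
l1_coeff = 1e-3
loss = ((recon - acts) ** 2).mean() + l1_coeff * features.abs().mean()
```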
Claude 3.7's annoying personality is the first example of accidentally misaligned AI making my life worse. Claude 3.5/3.6 was renowned for its superior personality that made it more pleasant to interact with than ChatGPT.
3.7 has an annoying tendency to do what it thinks you should do, rather than following instructions. I've run into this frequently in two coding scenarios:
- In Cursor, I ask it to implement some function in a particular file. Even when explicitly instructed not to, it guesses what I want to do next and changes other parts of the code as well.
- I'm trying to fix part of my code and I ask it to diagnose a problem and suggest debugging steps. Even when explicitly instructed not to, it will suggest alternative approaches that circumvent the issue, rather than trying to fix the current approach.
I call this misalignment, rather than a capabilities failure, because it seems a step back from previous models and I suspect it is a side effect of training the model to be good at autonomous coding tasks, which may be overriding its compliance with instructions.
This means that the Jesus Christ market is quite interesting! You could make it even more interesting by replacing it with "This Market Will Resolve No At The End Of 2025": then it would be purely a market on how much Polymarket traders will want money later in the year.
It's unclear how this market would resolve. I think you meant something more like a market on "2+2=5"?
I read this and still don't understand what an acceptable target slot is.
Then it will often confabulate a reason why the correct thing it said was actually wrong. So you can never really trust it, you have to think about what makes sense and test your model against reality.
But to some extent that's true for any source of information. LLMs are correct about a lot of things and you can usually guess which things they're likely to get wrong.
LLM hallucination is good epistemic training. When I code, I'm constantly asking Claude how things work and what things are possible. It often gets things wrong, but it's still helpful. You just have to use it to help you build up a gears level model of the system you are working with. Then, when it confabulates some explanation you can say "wait, what?? that makes no sense" and it will say "You're right to question these points - I wasn't fully accurate" and give you better information.
See No convincing evidence for gradient descent in activation space
It's not really feasible for the feature to rely on people reading this PSA to work well. The correct usage needs to be obvious.
When I go on LessWrong, I generally just look at the quick takes and then close the tab. Quick takes cause me to spend more time on LessWrong but less time reading actual posts.
On the other hand, sometimes quick takes are very high quality and I read them and get value from them when I may not have read the same content as a full post.
I find it very annoying that standard reference culture seems to often imply giving extremely positive references unless someone was truly awful, since it makes it much harder to get real info from references
Agreed, but also most of the world does operate in this reference culture. If you choose to take a stand against it, you might screw over a decent candidate by providing only a quite positive recommendation.
Hey, long time no see! Thanks, I've corrected it:
Set , ie.
It's surprising he bought the gun so long in advance. I think there should be footage of him buying it, as required by California law.
You can see what he's referring to in the pictures Webb published of the scene.
What is prospective memory training?
I think there's a spectrum between great man theory and structural forces theory, and I would classify your view as much closer to the structural forces end than as a combination of the two.
The strongest counter-example might be Mao. It seems like one man's idiosyncratic whims really did set the trajectory for hundreds of millions of people. Of course, as soon as he died most of that power vanished, but surely China and the world would be extremely different today without him.
The Duke of Wellington said that Napoleon's presence on a battlefield “was worth forty thousand men”.
This would be about 4% of France's military size in 1812.
I first encountered it in chapter 18 of The Looming Tower by Lawrence Wright.
But here's an easily linkable online source: https://ctc.westpoint.edu/revisiting-al-qaidas-anthrax-program/
"Despite their extreme danger, we only became aware of them when the enemy drew our attention to them by repeatedly expressing concerns that they can be produced simply with easily available materials."
Ayman al-Zawahiri, former leader of Al-Qaeda, on chemical/biological weapons.
I don't think this is a knock-down argument against discussing CBRN risks from AI, but it seems worth considering.
This is great, thanks. I think these could be very helpful for interpretability.
Thanks, I enjoyed this.
The main thing that seems wrong to me, similar to some of your other recent posts, is that AI progress seems to mysteriously decelerate around 2030. I predict that things will look much more sci-fi after that point than in your story (if we're still alive).
xAI claims to have a cluster of 200k GPUs, presumably H100s, online for long enough to train Grok 3.
I think this is faster datacenter scaling than any predictions I've heard.
DM'd
In that case I would consider applying for EA funds if you are willing to do the work professionally or set up a charity to do it. I think you could make a strong case that it meets the highest bar for important, neglected and tractable work.
How long does it take you to save one life on average? GiveWell's top charities save a life for about $5000. If you can get close to that there should be many EA philanthropists willing to fund you or a charity you create.
And I think they should be willing to go up to like $10-20k at least because murders are probably especially bad deaths in terms of their effects on the world.
I just found the paper "BERT's output layer recognizes all hidden layers? Some Intriguing Phenomena and a simple way to boost BERT", which precedes this post by a few months and invents essentially the same technique as the logit lens.
So consider also citing that paper when citing this post.
As an aside, I would guess that this is the most cited LessWrong post in the academic literature, but it would be cool if anyone had stats on that.
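For readers who haven't seen it, here is a rough sketch of the logit-lens idea, assuming a GPT-2-style HuggingFace model (the attribute names transformer.ln_f and lm_head are specific to that architecture and differ elsewhere):

```python
# Logit-lens sketch: project each layer's residual stream through the final
# layer norm and the unembedding matrix to see what the intermediate state
# "predicts". Assumes a GPT-2-style HuggingFace model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

inputs = tokenizer("The Eiffel Tower is located in the city of", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (n_layers + 1) tensors, each [batch, seq, d_model]
for layer, h in enumerate(out.hidden_states):
    logits = model.lm_head(model.transformer.ln_f(h))  # unembed the intermediate state
    top = logits[0, -1].argmax().item()                # top prediction at the final position
    print(f"layer {layer:2d}: {tokenizer.decode([top])!r}")
```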
Yeah I guess, but actually the more I think about it, the more impractical it seems.
I think the solution would be something like adopting a security mindset with respect to preventing community members going off the rails.
The costs would be high, because then everyone would be under suspicion by default, but maybe it would be worth it.
The next international PauseAI protest is taking place in one week in London, New York, Stockholm (Sun 9 Feb), Paris (Mon 10 Feb) and many other cities around the world.
We are calling for AI Safety to be the focus of the upcoming Paris AI Action Summit. If you're on the fence, take a look at Why I'm doing PauseAI.
For those in Europe, Tomorrow Biostasis makes the process a lot easier and they have people who will talk you through step by step.
A good example of surprising detail I just read.
It turns out that the UI for a simple handheld calculator is a large design space with no easy solutions.
https://lcamtuf.substack.com/p/ui-is-hell-four-function-calculators
- Following OpenAI Twitter freakouts is a colossal, utterly pointless waste of your time and you shouldn't do it ever.
I feel like for the same reasons, this shortform is kind of an engaging waste of my time. One reason I read LessWrong is to avoid twitter garbage.
we thought that forecasting AI trends was important to be able to have us taken seriously
This might be the most dramatic example ever of forecasting affecting the outcome.
Similarly, I'm concerned that a lot of alignment people are putting work into evals and benchmarks, which may be having some accelerating effect on the AI capabilities they are trying to understand.
"That which is measured improves. That which is measured and reported improves exponentially."
Just did a debugging session IRL with Gurkenglas and it was very helpful!
correctness and beta-coherence can be rolled up into one specific property
Is that rolling up two things into one, or is that just beta-coherence?
I agree that the ultimate goal is to understand the weights. It seems pretty unclear whether trying to understand the activations is a useful stepping stone towards that. And it's hard to be sure how relevant theoretical toy examples are to that question.
- Ilya Sutskever had two armed bodyguards with him at NeurIPS.
Some people are asking for a source on this. I'm pretty sure I've heard it from multiple people who were there in person but I can't find a written source. Can anyone confirm or deny?
Well, it seems quite important whether the DROS registration could possibly have been staged.
That would be difficult. To purchase a gun in California you have to provide photo ID[1], proof of address[2] and a thumbprint[3]. Also it looks like the payment must be trackable[4] and gun stores have to maintain video surveillance footage for up to a year.[5]
My guess is that the police haven't actually investigated this as a potential homicide, but if they did, there should be very strong evidence that Balaji bought a gun. Potentially a very sophisticated actor could fake this evidence, but it seems challenging (I can't find any historical examples of this happening). It would probably be easier to corrupt the investigation. Or the perpetrators might just hope that there would be no investigation.
There is a 10-day waiting period to purchase guns in California[5], so Balaji would probably have started planning his suicide before his hiking trip (I doubt someone like him would own a gun for recreational purposes?).
Is the interview with the NYT going to be published?
I think it's this piece that was published before his death.
Is any of the police behavior actually out of the ordinary?
Epistemic status: highly uncertain; these are my impressions from a few minutes of searching with LLMs.
It's fairly common for victims' families to contest official suicide rulings. In cases with lots of public attention, police generally try to justify their conclusions. So we might expect the police to publicly state if there is footage of Balaji purchasing the gun shortly before his death. It could be that this will still happen with more time or public pressure.
land in space will be less valuable than land on earth until humans settle outside of earth (which I don't believe will happen in the next few decades).
Why would it take so long? Is this assuming no ASI?
Wow that's great, thanks. @L Rudolf L you should link this in this post.