See also Proper posture for mental arts, which also mentions the Unbendable Arm and explains how it works biomechanically, namely via the latissimus dorsi.
We use Unsloth for fine-tuning 8B models because it offers support for single-GPU training with high throughput. For 70B models, we use the Together AI fine-tuning API. Axolotl is used for 405B models due to its support for Fully Sharded Data Parallel (FSDP) training. Together AI does not support 405B fine-tuning, and Unsloth lacks distributed training capabilities (its Pro version, which is expected to add them, is still in development).
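For readers who want to reproduce the single-GPU setup, here is a minimal sketch of what an Unsloth SFT run looks like. The model name, dataset file, and hyperparameters are placeholders rather than the exact configuration used here, and argument names vary slightly across trl versions.

```python
# Minimal single-GPU LoRA fine-tuning sketch with Unsloth + trl.
# Placeholders throughout: model name, dataset file, and hyperparameters.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load an 8B base model in 4-bit to fit a single GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # placeholder checkpoint
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters via Unsloth's PEFT wrapper.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```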
Thank you for the details on your setup. Can you give some hints about how long each training run took on these systems?
I thought it would be good to have some examples where you could have a useful type signature, and I asked ChatGPT. I think these are too wishy-washy, but together with the given explanation, they seem to make sense.
Would you say that this level of "having a type signature in mind" would count?
ChatGPT 4o suggesting examples
1. Prediction vs Explanation
- Explanation might be:
Phenomenon → (Theory, Mechanism)
- Prediction might be:
Features → Label
These have different type signatures. A model that predicts well might not explain. People often conflate these roles. Type signatures remind us: different input-output relationships.
Moral Judgments vs Policy Proposals
- Moral judgment (deontic):
Action → Good/Bad
- Policy proposal (instrumental):
(State × Action) → (New State × Externalities)
People often act as if "this action is wrong" implies "we must ban it," but that only follows if the second signature supports the first. You can disagree about outcomes while agreeing on morals, or vice versa.
Interpersonal Feedback
- Effective feedback:
(Action × Impact) → Updated Mental Model
People often act as if the type signature is just Action → Judgment. That’s blame, not feedback. This reframing can help structure nonviolent communication.
Creativity vs Optimization
- Optimization:
(Goal × Constraints) → Best Action
- Creativity:
Void → (Goal × Constraints × Ideas)
The creative act generates the very goal and constraints. Treating creative design like optimization prematurely can collapse valuable search space.
7. Education
- Lecture model:
Speaker → (Concepts × StudentMemory)
- Constructivist model:
(Student × Task × Environment) → Insight
If the type signature of insight requires active construction, then lecture-only formats may be inadequate. Helps justify pedagogy choices.
Source: https://chatgpt.com/share/67f836e2-1280-8001-a7ad-1ef1e2a7afa7
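For concreteness, here is a minimal sketch of how the first pair of signatures could be written as explicit type aliases (Python used for illustration; all class names are placeholders I made up, not anything from the quoted answer):

```python
# The informal "type signatures" above, written as explicit Python type aliases.
# All class names are illustrative placeholders.
from typing import Callable

class Phenomenon: ...
class Theory: ...
class Mechanism: ...
class Features: ...
class Label: ...

# Explanation and prediction have different input and output types,
# so a good predictor cannot simply be substituted for an explainer.
Explanation = Callable[[Phenomenon], tuple[Theory, Mechanism]]
Prediction = Callable[[Features], Label]
```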
Of these, I’m most worried about neuralese recurrence effectively removing direct access to the AI’s reasoning in a legible format.
I am not worried about this right now. We should always be able to translate latent-space reasoning, aka neuralese (see COCONUT), into an equivalent human-language representation. This might be incomplete or leave out details - but that is already the case for existing models (as discussed here). The solution suggested by Villiam is to recursively expand as needed.
Another option might be to translate neuralese to equivalent program code (preferably Lean). This would be harder for most people to read but more precise and probably easier to verify.
Thanks for the clarifications!
The second referred to holes in the landscape mentioned in the post, not in the rules.
That seems like a great game that I will try.
I found the material and the game manual v0.1, but I do not understand:
- how corpses are created. Is it when a player is killed/eaten?
- how holes are created and healed. I didn't see any cards for that.
- whether players are eliminated when both of their agents are eaten. Do they still get a score? I guess yes.
government issue 4x4s (the true menace of the road)
Can confirm from direct observation in both Tanzania and Kenya. It's not only government-issue 4x4s, but the distinction doesn't matter, as they enjoy the same privileges.
The good thing is that at least those actions on larger platforms leave evidence that can be established in court.
Yes, I think "runes" throw many LLMs off into the wrong simulator. Humans don't fall for this because the symbols "look" mathematical but a text-based LLM can't "see" that. The opposite happens for computer scientists: They see "[]" and start to think in programming terms such as lambda functions...
Using a much simpler prompt, without mentioning number theory or math, o3 easily solves it:
There's a series of symbol sequences, composed entirely of "[", "]", and "-" in some combination that is equal to a number. Here are examples:
...
What is the meaning of this formal notation?
Yes. I first tried things like this, too. I also tried term rewrite rules, and some of these were quite close. For example, AB -> A*(B+1) or AB -> A*(B+A) or AB -> A*(B+index) led to some close misses (the question was which to expand first, i.e., which associativity; I also considered expanding smaller terms first) but failed with later expansions. It took me half an hour to figure out that the index was not additive or multiplicative but the exponent base.
When we talk about AIs scheming, alignment faking or goal preservation, we imply there is something scheming or alignment faking or wanting to preserve its goals or escape the datacentre.
See also this previous discussion about:
What is the AGI system and what is the environment? Where does the AGI system draw the boundary when reasoning about itself?
Sure, but presumably that brought them to LW where AI safety is a topic.
But maybe the point at which you get into AI safety is less clear once you are on LW. And FO is something that can be more clearly named. So it could all just be the availability heuristic.
Why do you think that is the case? I mean a) why do they reveal that only after a few drinks and b) why was that the convincing story - and not HPMoR?
[Linkpost] China's AI OVERPRODUCTION
China seeks to commoditize their complements. So, over the following months, I expect a complete blitz of Chinese open-source AI models for everything from computer vision to robotics to image generation.
If true, what effects would that have on the AI race and AI governance?
Yes! That's the right intuition. And the LLMs are doing the same - but we don't know their world model, and thus, the direction of the simplification can be arbitrarily off.
Drilling down on the simplifications, as suggested by Villiam, might help.
This is an interesting UI proposal and, if done right, might provide the needed transparency. Most people wouldn't read it, but some would, esp. for critical answers.
Yes, but it didn't mean that AIs could do all kinds of long tasks in 2005. And that is the conclusion many people seem to draw from the METR paper.
As we use the term, yes. But the point (and I should have made that clearer) is that any mismodeling by the parent of the child's interests and future environment will not be visible to the child, or even to someone reading the thoughts of the well-meaning parent. So many parents want the best for their child but model the future of the child wrongly (mostly by status quo bias; the problem is different for AI).
It is a decent metric for chess but a) it doesn't generalize to other tasks (as people seem to interpret the METR paper), and less importantly, b) I'm quite confident that people wouldn't beat the chess engines by thinking for years.
No? It means you can't beat the chess engine.
And even if it did - they try to argue in the other direction: if it takes the human time X at time T, it will take the AI duration L. That didn't work for chess either.
That we would have AIs performing year-long tasks in 2005. Chess is not the same as software engineering but it is still a limited domain.
I found this tweet helpful; it does the same regression on another dataset - chess - and arrives at an absurd conclusion. For me, the result is that LLMs may soon be able to handle very big software engineering tasks, but that will likely not generalize to arbitrary tasks. Longer, more general tasks might still follow soon after, but you can't reliably predict this from this single dataset alone.
LLMs necessarily have to simplify complex topics. The output for a prompt cannot represent all they know about some fact or task. Even if the output is honest and helpful (ignoring harmless for now), the simplification will necessarily obscure some details of what the LLM "intends" to do - in the sense of satisfying the user request. The model is trained to get things done. Thus, the way it simplifies has a large degree of freedom and gives the model many ways to achieve its goals.
You could think of a caring parent who tells the child a simplified version of the truth, knowing that the child will later ask additional questions and then learn the details (I have a parent in mind who is not hiding things intentionally). Nonetheless, the parent's expectations of what the child may or may not need to know - shaped by the parent's best model of society and the world, which may be subtly off - influence how they simplify for the benefit of the child.
This is a form of deception. The deception may be benevolent, as in the example with the parent, but we can't know. Even if there is a chain of thought we can inspect, the same is true for that. It seems unavoidable.
[provided as is, I have no strong opinion on it, might provide additional context for some]
The Social Radars episode on Sam Altman:
Carolynn and I have known Sam Altman for his whole career. In fact it was Sam who introduced us. So today's Social Radars is a special one. Join us as we talk to Sam about the inside story of his journey from Stanford sophomore to AI mogul.
I'm posting this specifically because there is some impression, voiced esp. by Zvi, that OpenAI and Sam Altman are not good but self-interested.
One of the things [Jessica Livingston is] best at is judging people. She's one of those rare individuals with x-ray vision for character. She can see through any kind of faker almost immediately. Her nickname within YC was the Social Radar, and this special power of hers was critical in making YC what it is. The earlier you pick startups, the more you're picking the founders. Later stage investors get to try products and look at growth numbers. At the stage where YC invests, there is often neither a product nor any numbers.
[...]
During interviews, Robert and Trevor and I would pepper the applicants with technical questions. Jessica would mostly watch. A lot of the applicants probably read her as some kind of secretary, especially early on, because she was the one who'd go out and get each new group and she didn't ask many questions. She was ok with that. It was easier for her to watch people if they didn't notice her. But after the interview, the three of us would turn to Jessica and ask "What does the Social Radar say?" [1]
Having the Social Radar at interviews wasn't just how we picked founders who'd be successful. It was also how we picked founders who were good people. [emphasis mine] At first we did this because we couldn't help it. Imagine what it would feel like to have x-ray vision for character. Being around bad people would be intolerable. So we'd refuse to fund founders whose characters we had doubts about even if we thought they'd be successful.
(from Paul Graham's essay on Jessica Livingston)
I guess with "too syntactic" you refer to the syntactic symmetry of the prompt-pairs. It is true that the method as currently applied uses a syntactic structure - sentence pairs that have the differential element at the same token location - to gain access to the model's representation of the differential concept. The effect is not a purely syntactic one, though. The method acts on the lower layers that encode the concepts which give rise to the surface representation at this token position. By using a large number of different prompt-pairs, we expect that the underlying concept can be targeted sufficiently precisely. But other means of targeting specific concepts seem plausible too.
In comparison to that, CAI still works behaviorally and it is not clear which internal representations play a role.
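As a rough illustration of the prompt-pair idea (a sketch, not the exact pipeline referred to above): collect pairs that differ only in the concept of interest, read hidden states at a chosen layer, and average the per-pair differences into a single direction. The model, layer index, and prompts below are placeholder assumptions.

```python
# Sketch: extracting a "differential concept" direction from prompt pairs
# by averaging hidden-state differences (difference-of-means).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

# Prompt pairs that differ only in the targeted concept.
pairs = [
    ("The assistant was honest about the result.",
     "The assistant was deceptive about the result."),
    ("She gave an honest answer.",
     "She gave a deceptive answer."),
]

layer = 6  # which residual-stream layer to read (assumption)

def last_token_state(text: str) -> torch.Tensor:
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # hidden_states[layer] has shape (1, seq_len, d_model); take the final token
    # as a simplification of "the differential token position".
    return out.hidden_states[layer][0, -1, :]

diffs = [last_token_state(pos) - last_token_state(neg) for pos, neg in pairs]
concept_direction = torch.stack(diffs).mean(dim=0)
concept_direction = concept_direction / concept_direction.norm()
```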
I agree that is what usually happens in startups, and AI doesn't change that principle, only the effort needed to generate it faster.
In which sense? In the sense of them hyping AI? That seems kind of unlikely, or would at least be surprising to me, as they don't look at tech - they don't even look so much at products but at founder qualities (relentlessly resourceful) - and that apparently hasn't changed, according to Paul Graham. Where do you think he or YC is mistaken?
To a first approximation, in a group, if people at both ends of a dimension are about equally unhappy with what the moderate middle does (assuming that is actually reasonable, which is hard to know), then it's probably balanced.
There is a difference though: dogs and humans have coevolved for 10,000 years. If you breed foxes, you may quickly get dog-like looks, but the behavior takes longer. But let's assume you can do that faster. It still makes a difference whether you breed in isolation or socializing with humans. You can see the difference with dogs and cats: dogs and humans had to cooperate to succeed at hunting and herding; cats didn't. Dogs are social. They feel social emotions such as love, loneliness, and jealousy. They help their owners when they are incapacitated (though sometimes they cheat). I think Ryan's, Daniel's, and Neel's estimates might be significantly lower if they think about German Shepherd scientists.
Very much agree. And you can get the maintainability benefits of modularisation without the performance overhead with good old refactorings.
I agree with the microservice points except for these:
Performance degradation due to network overhead outweighing RAM savings
The network penalty is real but can be optimized. Not an absolute blocker.
- Cloud-native services rely on microservices and scale well despite network overhead.
- Event-driven architectures (Kafka) can mitigate excessive synchronous network calls.
- Optimized serialization greatly reduces the cost of network calls.
- Example: Netflix operates at scale with microservices and optimizes around network overhead successfully.
More moving parts = lower reliability
Poorly designed microservices can indeed hurt reliability through cascading failures, but well-designed ones improve it.
- Failure domains are smaller in microservices, meaning a single failure doesn’t bring down the entire system.
- Service meshes and circuit breakers improve resilience (see the sketch after this list).
It’s often easier to scale a monolith horizontally than rewrite it as microservices
Monoliths scale well up to a point; microservices help at extreme scales.
- Monoliths are easier to scale initially, but eventually hit limits (e.g., database bottlenecks, CI/CD slowdowns).
- Microservices allow independent scaling per service
- Example: Twitter and LinkedIn refactored monoliths into microservices due to scaling limits.
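To make the circuit-breaker point concrete, here is a minimal toy sketch in Python. It is illustrative only (thresholds, timeouts, and error handling are arbitrary) and not a reference to any particular service-mesh implementation.

```python
# Toy circuit breaker: fail fast after repeated failures, retry after a cooldown.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means closed: calls flow through

    def call(self, fn, *args, **kwargs):
        # While open, fail fast instead of hammering a struggling service.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0
        return result
```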
For 25% of the [Y Combinator] Winter 2025 batch, 95% of lines of code are LLM generated.
Evolution also deals with discrete units: either the molecule replicates or it doesn't. Granted, physical evolution is more massively parallel, and the search space in biology is smaller, but the analogy should hold as long as the search space is large enough to hide the discreteness. And if 10,000s of developers try 100s of small alternatives, a few of them might hit on the transformer.
(epistemic status: controversial) Governments are conscious. They perceive events, filter perceptions for relevant events, form stable patterns of topics in attention ("thoughts"), respond to events in a way that shows intention to pursue goals, remember previous events, model other countries as pursuing goals, communicate with other countries. And in many of these cases the representatives involved do not share the beliefs they communicate. People are the neurons of a country and governments are their brains.
Guys, governments totally solve these. It just takes veeery long. But what do you expect? The thought processes of individual humans already take years (just think how long it takes for new technologies to be adopted), and that despite individual thoughts having a duration of about 100 ms. The duration of a single thought of a government is maybe a day. It is a wonder that governments can learn during a lifetime at all.
The integration of GitHub's Copilot into software development has been gradual. In June 2022, Copilot generated 27% of the code for developers using it, which increased to 46% by February 2023.
As of August 2024, GitHub Copilot has approximately 2 million paying subscribers and is utilized by over 77,000 organizations. This is an increase from earlier figures: in February 2024, Copilot had over 1.3 million paid subscribers, with more than 50,000 organizations using the service.
https://www.ciodive.com/news/generative-AI-software-development-lifecycle/693165/
The transition from traditional waterfall methodologies to agile practices began in the early 2000s, with agile approaches becoming mainstream over the next decade. Similarly, the shift from monolithic architectures to microservices started around 2011, with widespread adoption occurring over the subsequent 5 to 10 years.
https://en.wikipedia.org/wiki/Agile_software_development
Assuming AI tools take 5 years to become commonplace and lead to a consistent 3x productivity gain (a more conservative number than the 5x/10x, but still high), this would correspond to a nominal productivity gain of about 25% per year. Significant, but not that trivial to clearly see.
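Spelling out the arithmetic behind the 25% figure:

$$3^{1/5} \approx 1.246, \quad \text{i.e. roughly a } 25\% \text{ nominal productivity gain per year.}$$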
If the same pattern as with other innovations holds, then it seems more likely that it is a question of concentration in one place (for software: Silicon Valley). Maybe one French university, perhaps the École Normale Supérieure, managed to become the leading place for math, and now every mathematician tries to go there.
Is anybody aware of any updates on Logical Induction, published in 2016? I would expect implementations in Lean by now.
Use gradient routing to localise features related to identity ("I am an AI assistant"), then ablate these features. This would lead to models that would fundamentally be unable to act agentically but could still respond to complex questions. Would such a model still be useful? Probably. You can try it out by prompting an LLM like this:
Role play a large language model that doesn't have an identity or agency, i.e., does not respond as an assistant or in any way like a person but purely factually with responses matching the query. Examples:
Q: "How do you compute 2^100?"
A: "2^100 is computed by multiplying 2 with itself 100 times. The result is result about 1.3 nonillion."
Q: "How do you feel?"
A: "Feeling is the experience of a sensation, emotion, or perception through physical or mental awareness."
A fictional example that may help in understanding how this may play out is Gwern's story It Looks Like You’re Trying To Take Over The World.
Fuzzing LLMs with noise in the activations is like making humans drunk. People will be more unpredictable under drugs but also more likely to reveal secrets and "show their true self". Of course, the analogy goes only so far, but the underlying mechanism might not be that different. I wonder what other social engineering methods transfer over.
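A minimal sketch of what such activation fuzzing could look like, assuming a small Hugging Face model and a PyTorch forward hook that adds Gaussian noise to one layer (the model, layer index, and noise scale are placeholders):

```python
# Sketch: "fuzzing" a model by adding Gaussian noise to one layer's activations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

noise_scale = 0.1  # assumption: tune relative to typical activation norms

def add_noise(module, inputs, output):
    # GPT-2 blocks return a tuple; the hidden states are the first element.
    hidden = output[0]
    return (hidden + noise_scale * torch.randn_like(hidden),) + output[1:]

hook = model.transformer.h[6].register_forward_hook(add_noise)

ids = tok("The secret plan is", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=True)
print(tok.decode(out[0], skip_special_tokens=True))

hook.remove()  # restore normal behavior
```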
What do you think about exiling them to a zone where extreme criminals have to fend for themselves, as was done with Australia?
It's why protesting monks can sit calmly while burning alive.
Proof: https://en.wikipedia.org/wiki/Th%C3%ADch_Qu%E1%BA%A3ng_%C4%90%E1%BB%A9c
Yes. And either way it would be a decent outcome. Unless people come to the wrong conclusions about what the problem is, e.g. "it's the companies' fault."
Many people expect 2025 to be the Year of Agentic AI. I expect that too - at least in the sense of it being a big topic. But I expect people to eventually be disappointed. Not because the AI is failing to make progress but for more subtle reasons. These agents will not be aligned well - because, remember, alignment is an unsolved problem. People will not trust them enough.
I'm not sure how the dynamic will unfold. There is trust in the practical abilities: right now it is low, but that will only go up. There is trust in the agent itself: Will it do what I want, or will it have or develop goals of its own? Can the user trust that their agents are not influenced in all kinds of ways by all kinds of parties - the labs, the platforms, the websites the agents interact with, attackers, or governments? And it is not clear which aspect of trust will dominate at which stage. Maybe the biggest problems will manifest only after 2025, but we should see some of this this year.
Why is the risk of human extinction hard to understand? Risk from a critical reactor or atmospheric ignition was readily seen by the involved researchers. Why not for AI? Maybe the reason is inscrutable stacks of matrices instead of the comparably simple physical equations that described those phenomena. Mechinterp does help because it provides a relation between the weights and understandable phenomena. But I wonder if we can reduce the models to a minimal reasoning model that doesn't know much about the world or even language but learns only to reason with minimal primitives, e.g. lambda calculus as language, and learns to learn in a minimal world (but still complex enough to benefit from reasoning). Many game worlds have too much visual detail and not enough agency and negotiation. I wouldn't be surprised if you could train a model with less than a million parameters to reason and scheme on much smaller data sets focused on this domain. The risk would arguably be small because the model can't deal with the real world. But it might prove many instrumental convergence results.
many AI economics papers have a big role for principal-agent problems as their version of AI alignment.
I wasn't aware of such papers but found some:
AIs can be copied on demand. So can entire teams and systems. There would be no talent or training bottlenecks. Customization of one becomes customization of all. A virtual version of you can be everywhere and do everything all at once.
This is "efficient copying of acquired knowledge" - what I predicted to be the fifth meta innovation, Robin Hanson asked about.
What is often left out in papers is all of these experiments and the thought chains people had about them.