Darklight's Shortform
post by Darklight · 2021-02-20T15:01:33.890Z · LW · GW · 28 comments
comment by Darklight · 2024-09-21T00:34:24.644Z · LW(p) · GW(p)
So, a while back I came up with an obscure idea I called the Alpha Omega Theorem and posted it on the Less Wrong forums. Given how there's only one post about it, it shouldn't be something that LLMs would know about. So in the past, I'd ask them "What is the Alpha Omega Theorem?", and they'd always make up some nonsense about a mathematical theory that doesn't actually exist. More recently, Google Gemini and Microsoft Bing Chat would use search to find my post and use that as the basis for their explanation. However, I only have the free version of ChatGPT and Claude, so they don't have access to the Internet and would make stuff up.
A couple days ago I tried the question on ChatGPT again, and GPT-4o managed to correctly say that there isn't a widely known concept of that name in math or science, and basically said it didn't know. Claude still makes up a nonsensical math theory. I also today tried telling Google Gemini not to use search, and it also said it did not know rather than making stuff up.
I'm actually pretty surprised by this. Looks like OpenAI and Google figured out how to reduce hallucinations somehow.
Replies from: Darklight
↑ comment by Darklight · 2024-10-03T13:55:55.913Z · LW(p) · GW(p)
I hit the usage limit for GPT-4o (it seems to be just 10 prompts every 5 hours) and it switched to GPT-4o-mini. I tried asking it the Alpha Omega question and it made up some mathematical nonsense, so the model seems to matter here for some reason.
comment by Darklight · 2022-09-05T17:46:56.819Z · LW(p) · GW(p)
I recently interviewed with Epoch, and as part of a paid work trial they wanted me to write up a blog post about something interesting related to machine learning trends. This is what I came up with:
http://www.josephius.com/2022/09/05/energy-efficiency-trends-in-computation-and-long-term-implications/
comment by Darklight · 2024-03-10T19:01:44.156Z · LW(p) · GW(p)
Recently I tried out an experiment using the code from the Geometry of Truth paper to see whether simple label words like "true" and "false" could substitute for the datasets used to create truth probes. I also tried out a truth probe algorithm that classifies by the higher cosine similarity to the class mean vectors.
Initial results seemed to suggest that the label word vectors were sorta acceptable, albeit not nearly as good (around 70% accurate rather than 95%+ like with the datasets). However, testing on harder test sets showed much worse accuracy (sometimes below chance, somehow). So I can probably conclude that the label word vectors alone aren't sufficient for a good truth probe.
Interestingly, the cosine similarity approach performed almost identically to the mass mean (aka difference-in-means) approach used in the paper. Unlike the mass mean approach, though, the cosine similarity approach can be extended to a multi-class setting. Then again, logistic regression can also be extended similarly, so it may not be particularly useful either, and I'm not sure there's even a use case for a multi-class probe.
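For concreteness, the cosine similarity probe can be sketched roughly like this (a minimal reconstruction, not the actual experiment code; the activations and labels here are stand-ins for real hidden states):

```python
import numpy as np

def fit_class_means(acts, labels):
    # one mean activation vector per class (here: false = 0, true = 1)
    return np.stack([acts[labels == k].mean(axis=0) for k in (0, 1)])

def cosine_probe(acts, means):
    # classify each activation by which class mean it is more cosine-similar to
    sims = (acts @ means.T) / (
        np.linalg.norm(acts, axis=1, keepdims=True) * np.linalg.norm(means, axis=1)
    )
    return sims.argmax(axis=1)
```

The mass mean probe instead classifies by projecting onto the difference of the two means; with more than two classes, the argmax over cosine similarities generalizes directly, which is the extension mentioned above.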
Anyways, I just thought I'd write up the results here in the unlikely event someone finds this kind of negative result useful.
comment by Darklight · 2024-10-03T15:04:13.753Z · LW(p) · GW(p)
I've been looking at the numbers regarding how many GPUs it would take to train a model with as many parameters as the human brain has synapses. The human brain has roughly 100 trillion synapses, and they are sparse and very efficiently connected. A regular AI model fully connects every neuron in a given layer to every neuron in the previous layer, so it would be less efficient.
The average H100 has 80 GB of VRAM, so assuming that each parameter is 32 bits, you can fit about 20 billion parameters per GPU. So, you'd need around 5,000 GPUs to fit a single instance of a human-brain-sized model in VRAM, maybe. If you assume inefficiencies and need to have data in memory as well, you could ballpark another order of magnitude, so 50,000 might be needed.
For comparison, it's widely believed that OpenAI trained GPT4 on about 10,000 A100s that Microsoft let them use from their Azure supercomputer, most likely the one listed as third most powerful in the world by the Top500 list.
Recently though, Microsoft and Meta have both moved to acquire more GPUs that put them in the 100,000 range, and Elon Musk's X.ai recently managed to get a 100,000 H100 GPU supercomputer online in Memphis.
So, in theory at least, we are nearly at the point where a human-brain-sized model can be trained, at least in terms of memory. However, keep in mind that training such a model would take a ton of compute time. I haven't done the calculations for FLOPs yet, so I don't know if it's feasible.
Just some quick back of the envelope analysis.
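The memory arithmetic works out like this (a quick sketch; this counts weights only, with a hand-wavy 10x factor for everything else):

```python
params = 100e12        # one parameter per synapse
bytes_per_param = 4    # 32-bit parameters
vram_per_gpu = 80e9    # H100: 80 GB

gpus_for_weights = params * bytes_per_param / vram_per_gpu  # = 5,000 GPUs
with_overhead = gpus_for_weights * 10                       # ballpark 10x overhead
```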
Replies from: Vladimir_Nesov, Darklight
↑ comment by Vladimir_Nesov · 2024-10-04T02:03:53.749Z · LW(p) · GW(p)
it's widely believed that OpenAI trained GPT4 on about 10,000 A100s
What I can find is 20,000 A100s. With 10K A100s, which are 300e12 FLOP/s in BF16, you'd need 6 months (so this is still plausible) at 40% utilization to get the rumored 2e25 FLOPs. We know Llama-3-405B is 4e25 FLOPs and approximately as smart, and it's dense, so you can get away with fewer FLOPs in a MoE model to get similar capabilities, which supports the 2e25 FLOPs figure from the premise that original GPT-4 is MoE.
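That timing is easy to sanity-check (using the figures above: 10K A100s at 300e12 BF16 FLOP/s and 40% utilization):

```python
gpus = 10_000
peak_flops = 300e12    # per-GPU A100 BF16 throughput
utilization = 0.4
target = 2e25          # rumored GPT-4 training compute

seconds = target / (gpus * peak_flops * utilization)
months = seconds / (3600 * 24 * 30)  # comes out to roughly 6.4 months
```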
The average H100 has 80 GB of VRAM
H200s are 140 GB, and there are now MI300Xs with 192 GB. B200s will also have 192 GB.
assuming that each parameter is 32 bits
Training is typically in BF16, though you need enough space for gradients in addition to parameters (and with ZeRO, optimizer states). On the other hand, inference in 8 bit quantization is essentially indistinguishable from full precision.
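For a rough sense of the training footprint, the commonly cited mixed-precision breakdown (the figures from the ZeRO paper) is 16 bytes per parameter:

```python
# per-parameter memory in standard mixed-precision training with Adam
bf16_weights = 2
bf16_grads = 2
fp32_master_weights = 4
fp32_adam_momentum = 4
fp32_adam_variance = 4

bytes_per_param = (bf16_weights + bf16_grads + fp32_master_weights
                   + fp32_adam_momentum + fp32_adam_variance)  # 16 bytes
params_per_h100 = 80e9 / bytes_per_param  # ~5 billion parameters per 80 GB card
```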
Recently though, Microsoft and Meta have both moved to acquire more GPUs that put them in the 100,000 range
The word is, next year it's 500K B200s[1] for Microsoft. And something in the gigawatt range from Google as well.
He says 500K GB200s, but also that it's 1 gigawatt all told, and that they are 2-3x faster than H100s, so I believe he means 500K B200s. In various places, "GB200" seems to ambiguously refer either to a 2-GPU board with a Grace CPU, or to one of the B200s on such a board. ↩︎
↑ comment by Darklight · 2024-10-03T15:28:00.255Z · LW(p) · GW(p)
Also, even if we can train and run a model the size of the human brain, it would still be many orders of magnitude less energy efficient than an actual brain. Human brains use barely 20 watts. This hypothetical GPU brain would require enormous data centres' worth of power, and each H100 GPU uses 700 watts alone.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2024-10-04T02:09:29.335Z · LW(p) · GW(p)
Also, even if we can train and run a model the size of the human brain, it would still be many orders of magnitude less energy efficient than an actual brain. Human brains use barely 20 watts.
For inference on a GPT-4 level model, GPUs use much less than a human brain [LW(p) · GW(p)], about 1-2 watts (across all necessary GPUs), if we imagine slowing them down to human speed and split the power among the LLM instances that are being processed at the same time. Even for a 30 trillion parameter model, it might only get up to 30-60 watts [LW(p) · GW(p)] in this sense.
each H100 GPU uses 700 watts alone
Should count the rest of the datacenter as well, which gets it up to 1200-1400 watts per H100, about 2000 watts for B200s in GB200 systems. (It's hilarious how some model training papers make calculations using 700 watts for CO2 emission estimates. They feel obliged to make the calculations, but then cheat like there's no tomorrow.)
Replies from: Darklight
comment by Darklight · 2024-01-15T17:58:33.385Z · LW(p) · GW(p)
Okay, so I decided to do an experiment in Python code where I modify the Iterated Prisoner's Dilemma to include Death, Asymmetric Power, and Aggressor Reputation, and run simulations to test how different strategies do. Basically, each player can now die if their points fall to zero or below, and the payoff matrix uses their points as a variable, so that power differences affect the outcomes. Also, if a player defects first in any round of any match against a non-aggressor, they get the aggressor label, which matters for some strategies that target aggressors.
Long story short, there's a particular strategy I call Avenger, which is Grim Trigger but also retaliates against aggressors (even if the aggression was against a different player), and it ensures that the cooperative strategies (ones that never defect first against a non-aggressor) win if the game goes on for enough rounds. Without Avenger, though, there's a chance that a single Opportunist player wins instead. Opportunist defects when stronger and plays Tit-For-Tat otherwise.
I feel like this has interesting real world implications.
Interestingly, Enforcer, which is Tit-For-Tat but also opens with Defect against aggressors, is not enough to ensure the cooperative strategies always win. For some reason you need Avenger in the mix.
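To make the setup concrete, here's a heavily simplified sketch of two of the strategies and the death/aggressor mechanics (hypothetical payoff and point values, not the actual simulation code; the power-scaled payoff matrix and cross-match reputation are omitted for brevity):

```python
# base Prisoner's Dilemma payoffs; reaching zero points or below means death
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (-2, 5),
          ("D", "C"): (5, -2), ("D", "D"): (-1, -1)}

def avenger(me, opp, hist):
    # Grim Trigger, but also opens with Defect against labelled aggressors
    if opp["aggressor"] or "D" in hist["opp"]:
        return "D"
    return "C"

def opportunist(me, opp, hist):
    # Defect when stronger, otherwise play Tit-For-Tat
    if me["points"] > opp["points"]:
        return "D"
    return hist["opp"][-1] if hist["opp"] else "C"

STRATEGIES = {"avenger": avenger, "opportunist": opportunist}

def player(kind, points=10):
    return {"kind": kind, "points": points, "aggressor": False}

def match(p1, p2, rounds=20):
    h1, h2 = [], []
    for _ in range(rounds):
        if p1["points"] <= 0 or p2["points"] <= 0:
            break  # a dead player ends the match
        m1 = STRATEGIES[p1["kind"]](p1, p2, {"me": h1, "opp": h2})
        m2 = STRATEGIES[p2["kind"]](p2, p1, {"me": h2, "opp": h1})
        # defecting first against a non-aggressor earns the aggressor label
        if m1 == "D" and not p2["aggressor"] and "D" not in h2:
            p1["aggressor"] = True
        if m2 == "D" and not p1["aggressor"] and "D" not in h1:
            p2["aggressor"] = True
        r1, r2 = PAYOFF[(m1, m2)]
        p1["points"] += r1
        p2["points"] += r2
        h1.append(m1)
        h2.append(m2)
```

In this stripped-down version, two Avengers cooperate indefinitely, while a stronger Opportunist defects immediately, gets labelled an aggressor, and grinds the match down into mutual defection.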
Edit: In case anyone wants the code, it's here.
Replies from: Dagon, Darklight
↑ comment by Dagon · 2024-01-15T21:25:41.582Z · LW(p) · GW(p)
So, the aggressor tag is a way to keep memory across games, so they're not independent. I wonder what happens when you start allowing more complicated reputation (including false accusations of aggression).
I feel like any interesting real-world implications are probably fairly tenuous. I'd love to hear some and learn that I'm wrong.
Replies from: Darklight, Darklight
↑ comment by Darklight · 2024-01-22T15:25:24.395Z · LW(p) · GW(p)
So, I adjusted the aggressor system to work like alliances or defensive pacts instead of a universal memory tag. Basically, players now make allies when they both cooperate and aren't already enemies, and make enemies when defected against first, which sets all their allies to also consider the defector an enemy. This doesn't change the result much. The alliance of nice strategies still wins the vast majority of the time.
I also tried out false flag scenarios where, 50% of the time, the victim of a first defection against a non-enemy is mistaken for the attacker. This has a small effect: there is a slight increase in the probability of an Opportunist strategy winning, but most of the time the alliance of nice strategies still wins, albeit with slightly fewer survivors on average.
My guess for why this happens is that nasty strategies rarely stay in alliances very long because they usually attack a fellow member at some point, and eventually, after enough rounds, one of their false flag attempts fails and they inevitably get kicked from the alliance and retaliated against.
The real-world implication remains that your best bet for surviving in the long run, as a person or a civilization, appears to be playing a nice strategy; if you play a nasty strategy, you are much less likely to survive.
In the limit, if the nasty strategies win, there will be only one survivor, dog-eat-dog Highlander style, and your odds of being that winner are 1/N, where N is the number of players. On the other hand, if you play a nice strategy, you increase the strength of the nice alliance, and when the nice alliance wins, as it usually does, you're much more likely to be among the survivors and to have flourished together.
My simulation currently defaults to 150 players, 60 of which are nice. On average about 15 of these survive to round 200, which is a 25% survival rate. This seems bad, but the survival rate of nasty strategies is less than 1%. If I switch the model to use 50 Avengers and 50 Opportunists, on average 25 Avengers survive versus zero Opportunists, a 50% survival rate for the Avengers.
Thus, increasing the proportion of starting nice players increases the odds of nice players surviving, so there is an incentive to play nice.
↑ comment by Darklight · 2024-01-15T22:28:01.098Z · LW(p) · GW(p)
Admittedly this is a fairly simple setup without things like uncertainty and mistakes, so yes, it may not really apply to the real world. I just find it interesting that it implies strong coordinated retribution can, at least in this toy setup, be useful for shaping the environment into one where cooperation thrives, even after accounting for power differentials and the ability to kill opponents outright, which otherwise change the game enough that straight Tit-For-Tat doesn't automatically dominate.
It's possible there are some situations where this may resemble the real world. Like, if you ignore mere accusations and focus on just actual clear cut cases where you know the aggression has occurred, such as with countries and wars, it seems to resemble how alliances form and retaliation occurs when anybody in the alliance is attacked?
I personally also see it as relevant for something like hypothetical powerful alien AGIs that can see everything that happens from space, and so there could be some kind of advanced game theoretic coordination at a distance with this. Though that admittedly is highly speculative.
It would be nice, though, if there were a reason to be cooperative even toward weaker entities, as that would imply that AGI could possibly have game-theoretic reasons not to destroy us.
↑ comment by Darklight · 2024-01-28T23:19:06.479Z · LW(p) · GW(p)
Update: I made an interactive webpage where you can run the simulation and experiment with a different payoff matrix and changes to various other parameters.
comment by Darklight · 2023-12-27T19:22:00.449Z · LW(p) · GW(p)
I have some ideas and drafts for posts that I've been sitting on because I feel somewhat intimidated by the level of intellectual rigor I would need to put into the final drafts to ensure I'm not downvoted into oblivion (something a younger me experienced in the early days of Less Wrong).
Should I try to overcome this fear, or is it justified?
For instance, I have a draft of a response to Eliezer's List of Lethalities post that I've been sitting on since 2022/04/11 because I doubted it would be well received, given that it tries to be hopeful and, as a former machine learning scientist, I try to challenge a lot of LW orthodoxy about AGI in it. I have tremendous respect for Eliezer though, so I'm also uncertain whether my ideas and arguments aren't just harebrained foolishness that will be shot down rapidly once exposed to the real world and the incisive criticism of Less Wrongers.
The posts here are also now of such high quality that I feel the bar is too high for me to meet with my writing, which tends to be more "interesting train-of-thought in unformatted paragraphs" than the "point-by-point articulate with section titles and footnotes" style that people tend to employ.
Anyone have any thoughts?
Replies from: mike_hawke
↑ comment by mike_hawke · 2023-12-27T20:22:01.922Z · LW(p) · GW(p)
Personally, I find shortform to be an invaluable playground for ideas. When I get downvoted, it feels lower stakes. It's easier to ignore aloof and smugnorant comments, and easier to update on serious/helpful comments. And depending on how it goes, I sometimes just turn it into a regular post later, with a note at the top saying that it was adapted from a shortform.
If you really want to avoid smackdowns, you could also just privately share your drafts with friends first and ask for respectful corrections.
Spitballing other ideas, I guess you could phrase your claims as questions, like "have objections X, Y, or Z been discussed somewhere already? If so, can anyone link me to those discussions?" Seems like that could fail silently though, if an over-eager commenter gives you a link to low-quality discussion. But there are pros and cons for every course of action/inaction.
comment by Darklight · 2024-05-24T15:18:42.144Z · LW(p) · GW(p)
I'm wondering what people's opinions are on how urgent alignment work is. I'm a former ML scientist who previously worked at Maluuba and Huawei Canada, but I switched industries into game development, at least in part to avoid contributing to AI capabilities research. I tried earlier to interview with FAR and Generally Intelligent, but didn't get in. I've also done some cursory independent AI safety research in interpretability and game-theoretic ideas in my spare time, though nothing interesting enough to publish yet.
My wife also recently had a baby, and caring for him is a substantial time sink, especially for the next year until daycare starts. Is it worth considering things like hiring a nanny, if it'll free me up to actually do more AI safety research? I'm uncertain if I can realistically contribute to the field, but I also feel like AGI could potentially be coming very soon, and maybe I should make the effort just in case it makes some meaningful difference.
comment by Darklight · 2024-01-15T17:57:03.801Z · LW(p) · GW(p)
I was recently trying to figure out a way to calculate my P(Doom) using math. I initially tried just making a back of the envelope calculation by making a list of For and Against arguments and then dividing the number of For arguments by the total number of arguments. This led to a P(Doom) of 55%, which later got revised to 40% when I added more Against arguments. I also looked into using Bayes Theorem and actual probability calculations, but determining P(E | H) and P(E) to input into P(H | E) = P(E | H) * P(H) / P(E) is surprisingly hard and confusing.
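The mechanical part of a Bayes update is simple once P(E) is expanded with the law of total probability; the hard part, as noted, is picking the likelihoods (the numbers below are arbitrary placeholders, not actual estimates):

```python
def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    # P(H | E) = P(E | H) P(H) / P(E), with P(E) expanded over H and not-H
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# e.g. a 50% prior, with evidence twice as likely under doom than not-doom
posterior = bayes_update(0.5, 0.8, 0.4)  # -> 2/3
```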
comment by Darklight · 2021-02-20T15:01:34.314Z · LW(p) · GW(p)
So, I had a thought. The glory system [LW · GW] idea that I posted about earlier, if it leads to a successful, vibrant democratic community forum, could actually serve as a kind of dataset for value learning. If each post has a number attached to it that indicates the aggregated approval of human beings, this can serve as a rough proxy for a kind of utility or Coherent Aggregated Volition.
Individual examples would probably be quite noisy, but averaged across a large number of posts, this could function as a real-world dataset, with the post content as the input and the post's vote tally as the output label. You could then train a supervised classifier or regressor that could be used to guide a Friendly AI model, like a trained conscience.
This admittedly would not be provably Friendly, but as a vector of attack for the value learning problem, it is relatively straightforward to implement and probably more feasible in the short-run than anything else I've encountered.
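As a toy illustration of the idea (entirely made-up posts and vote tallies, with bag-of-words ridge regression standing in for whatever model would actually be used):

```python
import numpy as np

# made-up (post, vote tally) pairs standing in for real forum data
posts = [("helping others is good", 12), ("be kind and honest", 9),
         ("lying to people is fine", -6), ("betray your friends", -10),
         ("honest cooperation helps everyone", 14), ("cruelty is fine", -8)]

vocab = sorted({w for text, _ in posts for w in text.split()})

def featurize(text):
    # simple bag-of-words count vector over the corpus vocabulary
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

X = np.stack([featurize(text) for text, _ in posts])
y = np.array([score for _, score in posts], dtype=float)

# ridge regression as a toy stand-in for the learned "conscience"
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(len(vocab)), X.T @ y)

def approval(text):
    # predicted aggregate approval for a new post
    return float(featurize(text) @ w)
```

A real version would of course use a proper text model rather than word counts, but the supervised shape of the problem (post in, predicted vote tally out) is the same.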
Replies from: Darklight
comment by Darklight · 2024-10-19T21:30:02.244Z · LW(p) · GW(p)
I tried asking ChatGPT, Gemini, and Claude to come up with a formula that converts from correlation space to probability space while preserving the relationship that a correlation of 0 maps to a probability of 1/n. I came up with such a formula a while back, so I figured it shouldn't be hard. They all offered formulas, all of which turned out to be very much wrong when I actually graphed them to check.
Replies from: cubefox
↑ comment by cubefox · 2024-10-19T22:36:56.892Z · LW(p) · GW(p)
What's correlation space, as opposed to probability space?
Replies from: Darklight
↑ comment by Darklight · 2024-10-20T13:34:52.624Z · LW(p) · GW(p)
Correlation space is between -1 and 1, with 1 being the same (definitely true), -1 being the opposite (definitely false), and 0 being orthogonal (very uncertain). I had the idea that you could assume maximum uncertainty to be 0 in correlation space, and 1/n (the uniform distribution) in probability space.
Replies from: cubefox
↑ comment by cubefox · 2024-10-20T14:08:42.555Z · LW(p) · GW(p)
Not sure what you mean here, but 2p - 1 would linearly transform a probability from [0..1] to [-1..1]. You could likewise transform a correlation coefficient to [0..1] with (c + 1)/2. For p = (c + 1)/2, this would correspond to the probability of A occurring if and only if B occurs. I.e. P(A iff B) = (corr(A, B) + 1)/2 when P(A) = P(B) = 1/2.
Replies from: Darklight
↑ comment by Darklight · 2024-10-20T15:59:38.001Z · LW(p) · GW(p)
So, my main idea is that the principle of maximum entropy, aka the principle of indifference, suggests a prior of 1/n, where n is the number of possibilities or classes. The linear transform p = (c + 1)/2 leads to p = 0.5 for c = 0. What I want is for c = 0 to lead to p = 1/n rather than 0.5, so that it works in multi-class cases where n is greater than 2.
Replies from: cubefox
↑ comment by cubefox · 2024-10-20T16:45:30.853Z · LW(p) · GW(p)
What's the solution?
Replies from: Darklight
↑ comment by Darklight · 2024-10-20T16:53:31.926Z · LW(p) · GW(p)
p = (n^c * (c + 1)) / (2^c * n)
As far as I know, this is unpublished in the literature. It's a pretty obscure use case, so that's not surprising. I have doubts I'll ever get around to publishing the paper I wanted to write that uses this in an activation function to replace softmax in neural nets, so it probably doesn't matter much if I show it here.
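The formula is easy to sanity-check at the endpoints, and for n = 2 it reduces exactly to the linear map (c + 1)/2 discussed above:

```python
def corr_to_prob(c, n):
    # map a correlation c in [-1, 1] to a probability, with c = 0 -> 1/n
    return (n ** c * (c + 1)) / (2 ** c * n)
```

At c = 0 this gives the uniform prior 1/n, at c = 1 it gives 1, and at c = -1 it gives 0, for any number of classes n.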