AI systems can presumably be given at least as much access to company data as human employees at that company. So if rapidly scaling up the number and quality of human workers at a given company would be transformative, AI agents with >=human-level intelligence can also be transformative.
I think a little more explanation is needed of why there isn't already a model trained with 5-10x* more compute than GPT-4 (which would be "4.5 level," given that GPT version numbers have historically gone up by 1 for every two OOMs of compute, though I think the model literally called GPT-5 will only be roughly a 10x scale-up).
You'd need around 100,000 H100s (or maybe somewhat fewer; Llama 3.1 was ~2x GPT-4 compute and was trained on 16,000 H100s) to train a model at 10x GPT-4. That many chips have been available to the biggest hyperscalers since sometime last year. Naively it might take ~9 months from taking delivery of the chips to releasing a model (perhaps 3 months to set up the cluster, 3 months for pre-training, and 3 months for post-training, evaluations, etc.). But most likely the engineering challenges of building an unprecedentedly large cluster, and perhaps high demand for inference, have prevented labs from concentrating that much compute into one training run in time to release a model by now.
*I'm not totally sure the 5x threshold (1e26 FLOP) hasn't been breached but most people think it hasn't.
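The timeline above can be sanity-checked with a back-of-the-envelope calculation. All inputs here are rough public estimates, not official figures: GPT-4 is commonly estimated at ~2e25 training FLOP (so 10x is 2e26, consistent with the 1e26 figure for 5x), the H100's dense BF16 peak is ~989 TFLOP/s, and ~40% model FLOP utilization is a plausible assumption for a large run:

```python
# Rough estimate of pre-training wall-clock time for a 10x GPT-4 run
# on 100,000 H100s. Every constant below is an assumed estimate.

H100_PEAK_FLOPS = 989e12   # dense BF16 peak per GPU, no sparsity
MFU = 0.40                 # assumed model FLOP utilization
N_GPUS = 100_000
TARGET_FLOP = 10 * 2e25    # 10x an assumed ~2e25-FLOP GPT-4 run

effective_flops = N_GPUS * H100_PEAK_FLOPS * MFU  # cluster throughput, FLOP/s
days = TARGET_FLOP / effective_flops / 86_400
print(f"~{days:.0f} days of pre-training")  # roughly two months
```

This lands at about two months of pre-training, which fits comfortably inside the ~3-month pre-training slot in the naive 9-month schedule above.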
Llama 405B was trained on a bunch of synthetic data in post-training for coding, long-context prompts, and tool use (see section 4.3 of the paper).
AI that can rewrite CUDA is a ways off. It's possible that it won't be that far away in calendar time, but it is far away in terms of AI market growth and hype cycles. If GPT-5 does well, Nvidia will reap the gains more than AMD or Google.
The US is currently donating doses to other countries in large quantities. Domestically, it has around 54m doses distributed but not used right now. (https://covid.cdc.gov/covid-data-tracker/#vaccinations). Some but certainly not all of those are at risk of expiration. If US authorities recommended booster shots for the general population then that would easily use up the currently unused supply and reduce vaccine exports.
I did it, I did it, I did it, yay!
A compromise that I find appealing and might implement for myself is donating a fixed percentage of income above a fixed threshold, with that percentage being relatively high (well above ten percent). You could also have multiple "donation brackets," with the marginal donation rate increasing as your income increases.
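The bracket idea works exactly like marginal tax brackets. A minimal sketch, with entirely made-up illustrative thresholds and rates (not a recommendation):

```python
# Hypothetical progressive donation schedule: 0% on the first $40k of
# income, 10% on the next $60k, 25% on everything above $100k.
# All numbers are illustrative assumptions.
BRACKETS = [(40_000, 0.00), (100_000, 0.10), (float("inf"), 0.25)]

def donation(income: float) -> float:
    """Apply marginal donation rates, like marginal tax brackets."""
    total, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            total += (min(income, upper) - lower) * rate
        lower = upper
    return total

print(donation(150_000))  # 10% of $60k + 25% of $50k = $18,500
```

Because the rates are marginal, earning slightly more never leaves you with less after donating, which is the usual argument for this structure over a single flat percentage that jumps at a threshold.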
I doubt an IQ test would be useful at all. One has to be quite intelligent to be a real candidate for presidency.
He also likes arguing with Jeff Kaufman about effective altruism.
You probably shouldn't say someone "probably" has an IQ between 145 and 160 unless you have pretty good evidence.
I think it makes a big difference whether the preferred theory is gender/racial equality as opposed to fundamentalist Christianity, and whether the opposition to those perceived challenges results from emotional sensitivity as opposed to blind faith. At the very least, the blog post doesn't indicate that the author would be irrational about issues other than marginalization.
I don't see how the fact that the permissiveness principle is based on only one (or two, counting the third) of the six foundations would imply that it's not a widely held intuition.
How risk-averse are you? But even if you aren't very risk-averse, I suspect that right now bitcoins aren't a great investment even in strictly expected-value terms, given the high risk that they will lose much of their value. No one really knows what will happen, though.
Another possible critique is that the philosophical arguments for ethical egoism are (I think) at least fairly plausible. The extent to which this is a critique of EA is debatable (since people within the movement state that it's compatible with non-utilitarian ethical theories and that it appeals to people who want to donate for self-interested reasons), but it's something that merits consideration.
Ehh, I think that's pretty much what rule util means, though I'm not that familiar with the nuances of the definition so take my opinion with a grain of salt. Rule util posits that we follow those rules with the intent of promoting the good; that's why it's called rule utilitarianism.
That would be a form of deontology, yes. I'm not sure which action neo-Kantians would actually endorse in that situation, though.
I think that's accurate, though maybe not, because the programming jargon is unnecessarily obfuscating. The basic point is that following the rule is good in and of itself. You shouldn't kill people because there is a value in not killing that is independent of the outcome of that choice.
Your description of deontological ethics sounds closer to rule consequentialism, which is a different concept. Deontology means that following certain rules is good in and of itself, not that the rules lead to better decision-making (in terms of promoting some other good) in situations of uncertainty.
Survey taken. Defected since I'm neutral as to whether the money goes to Yvain or a random survey-taker, but would prefer the money going to me over either of those two.