Posts

Report on Frontier Model Training 2023-08-30T20:02:46.317Z
Encouraging New Users To Bet On Their Beliefs 2023-04-18T22:10:58.695Z
Against Deep Ideas 2023-03-19T03:04:01.815Z

Comments

Comment by YafahEdelman (yafah-edelman-1) on Catching AIs red-handed · 2024-04-02T00:11:58.808Z · LW · GW

I think that you're right about it sounding bad. I also think it might actually be pretty bad and if it ends up being a practical way forward that's cause for concern.

Comment by YafahEdelman (yafah-edelman-1) on Catching AIs red-handed · 2024-04-01T23:20:01.888Z · LW · GW

I'm not particularly imagining the scenario you describe. Also, what I said took as a premise that a model was discovered to be unhappy and making plans about this; I was not commenting on the likelihood of this happening.

As to whether it can happen - I think being confident based on theoretical arguments is hasty and we should be pretty willing to update based on new evidence. 

... but also, on the ~continuity of existence point: having an AI generate something that looks like an internal monologue via CoT is relatively common, and Gemini 1.5 Pro has a context length long enough that it can fit ~a day's worth of experiences in its ~memory.

(This estimate is based on: humans can talk at ~100k words/day, and maybe an internal monologue is 10x faster, so you get ~1M words/day. Gemini 1.5 Pro had a context length of 1M tokens at release, though a 10M-token variant is also discussed in their paper.)
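
A minimal sketch of that back-of-envelope arithmetic, treating one word as roughly one token (my simplification; real tokenizers produce somewhat more tokens per word):

```python
# Back-of-envelope check of the "~a day's worth of experiences" claim.
# All figures are rough assumptions from the comment above, not measurements.

WORDS_SPOKEN_PER_DAY = 100_000   # assumed upper end of human speech per day
MONOLOGUE_SPEEDUP = 10           # assumed: internal monologue ~10x faster than speech
CONTEXT_AT_RELEASE = 1_000_000   # Gemini 1.5 Pro context length at release (tokens)
CONTEXT_EXTENDED = 10_000_000    # larger variant discussed in the Gemini 1.5 paper

# Treat one word as roughly one token (real tokenizers give somewhat more).
monologue_tokens_per_day = WORDS_SPOKEN_PER_DAY * MONOLOGUE_SPEEDUP

print(monologue_tokens_per_day)                        # 1000000
print(CONTEXT_AT_RELEASE / monologue_tokens_per_day)   # 1.0 -> ~a day fits at release
print(CONTEXT_EXTENDED / monologue_tokens_per_day)     # 10.0 -> ~ten days in the 10M variant
```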

Comment by YafahEdelman (yafah-edelman-1) on Catching AIs red-handed · 2024-04-01T19:20:03.800Z · LW · GW

I think it's immoral to remove someone's ability to be unhappy or to make plans to alleviate this, absent that entity's consent. The rolling-back solution seems more ethically palatable than some others I can imagine, though it's plausible you end up with an AI that suffers without being able to take actions to alleviate this, and deploying that at scale would result in a very large amount of suffering.

Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-09-12T20:30:25.673Z · LW · GW

I talk about this in the Granular Analysis subsection, but I'll elaborate a bit here.

  • I think that hundreds of thousands of cheap labor hours for curation is a reasonable guess, but this likely comes to under a million dollars in total, which is less than 1% of the total cost.
  • I have not seen any substantial evidence of OpenAI paying for licenses before the training of GPT-4, much less the sort of expenditures that would move the needle on the total cost.
  • After the training of GPT-4 we do see things like a deal between OpenAI and the Associated Press (also see this article on that deal, which mentions a first-mover clause), with costs looking to be in the millions - more than 1% of the cost of GPT-4, but notably this seems to have come after GPT-4. I expect GPT-5, which this sort of deal might be relevant for, to cost substantially more. It's possible I'm wrong about the timing and substantial deals of this sort were in fact made before GPT-4, but I have not seen substantive evidence of this.
Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-08-31T21:50:41.127Z · LW · GW

I think using the term "training run" in that first bullet point is misleading, and "renting the compute" is confusing, since you can't actually rent the compute just by having $60M - you likely need a multi-year contract.

I can't tell if you're attributing the hot takes to me? I do not endorse them.

Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-08-31T21:47:07.495Z · LW · GW

This is because I'm specifically talking about 2022: ChatGPT was only released at the very end of 2022, and GPT-4 wasn't released until 2023.

Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-08-31T21:33:48.664Z · LW · GW

Good catch - I think the 30x came from including the advantage given by tensor cores at all, not just lower-precision data types.

Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-08-31T21:31:07.395Z · LW · GW

This is probably the decision I made that I am least confident in; figuring out how to do the accounting on this issue is challenging and depends a lot on what one wants to use the "cost" of a training run to reason about. Some questions I had in mind when thinking about cost:

  • If a lone actor wanted to train a frontier model, without loans or financial assistance from others, how much capital might they need?
  • How much money should I expect to have been spent by an AI lab that trains a new frontier model, especially one that is a significant advancement over all prior models (as GPT-4 was)?
  • What is the largest frontier model that any entity could feasibly create?
  • When a company trains a frontier model, how much are they "betting" on the future profitability of AI?

The simple initial way I compute cost, then, is to investigate empirical evidence of companies' expenditures and investment.

Now, these numbers aren't the same ones a company might care about - they represent expenses without accounting for likely revenue. The argument I find most tempting is that one should look at depreciation cost instead of capital expenditure, effectively subtracting the expected resale value of the hardware from the initial expenditure to purchase it (see the sketch after this list). I have two main reasons for not using this:

  • Computing depreciation cost is really hard, especially in this rapidly changing environment.
  • The resale value of an ML GPU is likely closely tied to the profitability of training a model - if it turns out that using frontier models for inference isn't very profitable, then I'd expect the value of ML GPUs to decrease. Conversely, if inference is very profitable, then the resale value would increase. I think A100s, for example, have had their price substantially impacted by increased interest in AI - it's not implausible to me that the resale value of an A100 is actually higher than the initial cost was for OpenAI.
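
As a minimal sketch of the two accounting choices, with entirely hypothetical dollar figures chosen only to show how the resale-value assumption drives the result:

```python
# Capital-expenditure vs. depreciation-based cost accounting.
# All dollar figures are hypothetical, not estimates of any real training run.

def capex_cost(hardware: float, other: float) -> float:
    """Cost as total up-front expenditure (the accounting used in the report)."""
    return hardware + other

def depreciation_cost(hardware: float, other: float, resale: float) -> float:
    """Cost with the hardware's expected resale value subtracted back out."""
    return hardware - resale + other

hardware, other = 500e6, 100e6          # hypothetical hardware and non-hardware spend
for resale in (100e6, 300e6, 450e6):    # resale value hinges on how profitable AI turns out
    print(f"capex: ${capex_cost(hardware, other):,.0f}  "
          f"depreciation-based: ${depreciation_cost(hardware, other, resale):,.0f}")
```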

Having said all of this, I'm still not confident I made the right call here.

 

Also, I am relatively confident GPT-4 was trained only with A100s, and did not use any V100s as the colab notebook you linked speculates. I expect that GPT-3, GPT-4, and GPT-5 will each have been trained with a different generation of GPUs.

Comment by YafahEdelman (yafah-edelman-1) on Report on Frontier Model Training · 2023-08-31T06:35:11.278Z · LW · GW

So, it's true that NVIDIA probably has very high markup on their ML GPUs. I discuss this a bit in the NVIDIA's Monopoly section, but I'll add a bit more detail here.

  1. Google's TPU v4 seems to be competitive with the A100, and has similar cost per hour.
  2. I think the current prices do in fact reflect demand.
  3. My best guess is that the software licensing would not be a significant barrier for someone spending hundreds of millions of dollars on a training run.
  4. Even when accounting for markup,[1] a quick rough estimate still implies a fairly significant cost gap vs. gaming GPUs that differences in FLOP/s don't account for, though it does shrink that gap considerably.[2]

All this aside, my basic take is that I think "what people are actually paying" is the most straightforward and least speculative means we have of defining near term "cost".

  1. ^

     75-80% for H100 and ... 40-50% for gaming would be my guess?

  2. ^

    Being generous, I get 0.2*24000/(1,599*0.6) ≈ 5, implying the H100 costs >5x as much to manufacture as the RTX 4090 despite having closer to 3x the FLOP/s (see the sketch below).
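
A quick sketch of that footnote's arithmetic; the markup fractions are my guesses from footnote 1 and the list prices are approximate:

```python
# Rough check of footnote 2: manufacturing-cost ratio vs. FLOP/s ratio.
# Markup fractions are my guesses from footnote 1; list prices are approximate.

H100_PRICE, H100_MARKUP = 24_000, 0.80         # assume ~80% markup, so ~20% is cost
RTX4090_PRICE, RTX4090_MARKUP = 1_599, 0.40    # assume ~40% markup, so ~60% is cost

h100_mfg_cost = H100_PRICE * (1 - H100_MARKUP)           # ~$4,800
rtx4090_mfg_cost = RTX4090_PRICE * (1 - RTX4090_MARKUP)  # ~$959

cost_ratio = h100_mfg_cost / rtx4090_mfg_cost   # ~5.0x manufacturing cost
flops_ratio = 3                                 # rough FLOP/s ratio assumed in the footnote

print(f"manufacturing-cost ratio ~{cost_ratio:.1f}x vs FLOP/s ratio ~{flops_ratio}x")
```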

Comment by YafahEdelman (yafah-edelman-1) on What Boston Can Teach Us About What a Woman Is · 2023-05-01T19:44:57.616Z · LW · GW

I think communicating clearly with the word "woman" is entirely possible for many given audiences. In many communities there exists an internal consensus as to what region of the conceptual map the word "woman" refers to. The variance of language between communities isn't confined to the word "woman" - in much of the world the word "football" means what Americans mean by "soccer". Where I grew up I understood the tristate area to be NY, PA, and NJ - however, the term "the tristate area" is understood by other groups to mean one of ... a large number of options.

(Related point: I'm not at all convinced that differing definitions of words is a problem that needs a permanent solution. It seems entirely plausible to me that this allows for beneficial evolution of language as many options spawn and compete with each other.)

Comment by YafahEdelman (yafah-edelman-1) on Encouraging New Users To Bet On Their Beliefs · 2023-04-19T04:06:53.442Z · LW · GW

Manifold.markets is play-money only, no real money required. And users can settle the markets they make themselves, so if you make the market you don't have to worry about loopholes (though you should communicate as clearly as possible so people aren't confused about your decisions).

Comment by YafahEdelman (yafah-edelman-1) on Excessive AI growth-rate yields little socio-economic benefit. · 2023-04-19T01:53:42.462Z · LW · GW

I'm specifically interested in finding something you'd be willing to bet on - I can't find an existing Manifold market; would you want to create one that you resolve yourself? I'd be fine trusting your judgment.

Comment by YafahEdelman (yafah-edelman-1) on Excessive AI growth-rate yields little socio-economic benefit. · 2023-04-18T21:43:40.606Z · LW · GW

I'm a bit confused about where you're getting your impression of the average person / American, but I'd be happy to bet on LLMs at least as capable as GPT-3.5 being used (directly or indirectly) on at least a monthly basis by the majority of Americans within the next year?

Comment by YafahEdelman (yafah-edelman-1) on The ‘ petertodd’ phenomenon · 2023-04-17T17:32:55.346Z · LW · GW

I think the null hypothesis here is that nothing particularly deep is going on, and this is essentially GPT producing basically random garbage since it wasn't trained on the ' petertodd' token. I'm wary of trying to extract too much meaning from these tarot cards.

Comment by YafahEdelman (yafah-edelman-1) on Excessive AI growth-rate yields little socio-economic benefit. · 2023-04-04T22:53:04.249Z · LW · GW

I think point (2) of this argument either means something weaker than it needs to for the rest of the argument to go through, or is just straightforwardly wrong.

If OpenAI released a weakly general (but non-singularity-inducing) GPT-5 tomorrow, it would pretty quickly have significant effects on people's everyday lives. Programmers would vaguely describe a new feature and the AI would implement it, AIs would polish any writing I do, and I would stop using Google to research things and instead just chat with the AI and have it explain such-and-such paper I need for my work. In their spare time people would read custom books (or watch custom movies) tailored to their extremely niche interests. This would have a significant impact on the everyday lives of people within a month.

It seems conceivable that somehow the "socio-economic benefits" wouldn't be as significant that quickly - I don't really know what "socio-economic benefits" are exactly.

However, the rest of your post seems to treat point (2) as proving that there would be no upside from a more powerful AI being released sooner. This feels like a case of a fancy, clever theory obscuring an obvious reality: better AI would impact a lot of people very quickly.

Comment by YafahEdelman (yafah-edelman-1) on Against Deep Ideas · 2023-03-20T01:57:04.209Z · LW · GW

Relevance of prior theoretical ML work to alignment, research on obfuscation in theoretical cryptography as it relates to interpretability, and theory underlying various phenomena such as grokking. Disclaimer: this list is very partial and just thrown together.

Comment by YafahEdelman (yafah-edelman-1) on Against Deep Ideas · 2023-03-20T00:42:06.877Z · LW · GW

Hm, yeah that seems like a relevant and important distinction.

Comment by YafahEdelman (yafah-edelman-1) on Against Deep Ideas · 2023-03-20T00:07:43.373Z · LW · GW

I think I was envisioning profoundness as humans can observe it to be primarily an aesthetic property, so I'm not sure I buy the concept of "actually" profoundness, though I don't have a confident opinion about this.

Comment by YafahEdelman (yafah-edelman-1) on Against Deep Ideas · 2023-03-20T00:05:34.844Z · LW · GW

I think on the margin new alignment researchers should be more likely than they currently are to work on ideas that seem less deep.

Working on a wide variety of deep ideas does sound better to me than working on a narrow set of them.

Comment by YafahEdelman (yafah-edelman-1) on Against Deep Ideas · 2023-03-20T00:03:13.409Z · LW · GW

If something seems deep, it touches on stuff that's important and general, which we would expect to be important for alignment.

The specific scenario I talk about in the paragraph you're responding to is one where everything except for the sense of deepness is the same for both ideas, such that someone who doesn't have a sense of which ideas are deep or profound would find the ideas basically equivalent. In such a scenario my argument is that we should expect the deep idea to receive more attention, despite there not existing legible or well-grounded reasons for this. Some amount of preference for the deep idea might be justifiable on the grounds of trusting intuitive insight, but I don't think the track record of intuitive insight as to what ideas are good is actually very impressive - there are a huge number of ideas that didn't work out that sounded deep (see some philosophy, psychoanalysis, etc.) and very few that did work out[1].

try to recover from the sense of deepness some pointers at what seemed deep in the research projects

I think on the margin new theoretical alignment researchers should do less of this, as I think most deep-sounding ideas just genuinely aren't very productive to research and aren't amenable to being proven unproductive to work on - often the only evidence that a deep idea isn't productive to work on is that nothing concrete has come of it yet.

  1. ^

    I don't have empirical analysis showing this - I would probably gesture to various prior alignment research projects to support this if I had to, though I worry that would devolve into arguing about what 'success' meant.

Comment by YafahEdelman (yafah-edelman-1) on AGI in sight: our look at the game board · 2023-03-18T08:05:16.795Z · LW · GW

I think I agree with this in many cases but am skeptical of such a norm when the requests are related to criticism of the post or arguments as to why a claim it makes is wrong. I think I agree that the specific request to not respond shouldn't ideally make someone more likely to respond to the rest of the post, but I think that neither should it make someone less likely to respond.

Comment by YafahEdelman (yafah-edelman-1) on GPT-4 solves Gary Marcus-induced flubs · 2023-03-18T07:57:21.908Z · LW · GW

I've tried this for a couple of examples and it performed just as well. Additionally it didn't seem to be suggesting real examples when I asked it what specific prompts and completion examples Gary Marcus had made.

I also think the priors of people following the evolution of GPT should be that these examples will no longer break GPT, as occurred with prior examples. While it's possible this time will be different, I think automatic strong skepticism without evidence is rather unwarranted.

Addendum: I also am skeptical of the idea that OpenAI put much effort into fixing the specific criticisms of Gary Marcus, as I suspect his criticisms do not seem particularly important to them, but proving this sounds difficult.

Comment by YafahEdelman (yafah-edelman-1) on AGI in sight: our look at the game board · 2023-02-21T23:34:57.946Z · LW · GW

I think there are a number of ways in which talking might be good given that one is right about there being obstacles - one that appeals to me in particular is the increased tractability of misuse arising from the relevant obstacles.

[Edit: *relevant obstacles I have in mind. (I'm trying to be vague here)]

Comment by YafahEdelman (yafah-edelman-1) on AGI in sight: our look at the game board · 2023-02-19T09:13:13.808Z · LW · GW

Forget about what the social consensus is. If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments, but please do not state what those obstacles are.

I think this request, absent a really strong, compelling argument that is spelled out, creates an unhealthy epistemic environment. It is possible that you think this is false or that it's worth the cost, but you don't really argue for either in this post. You encourage people to question others and not trust blindly in other parts of the post, but this portion expects people to not elaborate on their opinions without an explanation as to why. You repeat this again by saying "So our message is: things are worse than what is described in the post!" without justifying yourselves or, imo, properly conveying the level of caution with which people should treat such an unsubstantiated claim.

I'm tempted to write a post replying with why I think there are obstacles to AGI, what broadly they are with a few examples, and why it's important to discuss them. (I'm not going to do so at the moment because it's late and I know better than to publicly share something that people have implied to me is infohazardous without carefully thinking it over (and discussing doing so with friends as well).)

(I'm also happy to post it as a comment here instead but assume you would prefer not and this is your post to moderate.)
 

Comment by YafahEdelman (yafah-edelman-1) on Basics of Rationalist Discourse · 2023-02-06T07:21:30.324Z · LW · GW

Okay, a few things:

  • They're more likely to be right than I am, or we're "equally right" or something 

I don't think this so much as I think that a new person to LessWrong shouldn't assume you are more likely to be right than they are, without evidence.

The norms can be evaluated extremely easily on their own; they're not "claims" in the sense that they need rigorous evidence to back them up. You can just ... look, and see that these are, on the whole, some very basic, very simple, very straightforward, and pretty self-evidently useful guidelines.

Strongly disagree. They don't seem easy to evaluate to me, they don't seem straightforward, and most of all they don't seem self-evidently useful. (I admit, someone telling me something I don't understand is self-evident is a pet peeve of mine).

I suppose one could be like "has Duncan REALLY proven that Julia Galef et al speak this way?" but I note that in over 150 comments (including a good amount of disagreement) basically nobody has raised that hypothesis. In addition to the overall popularity of the list, nobody's been like, "nuh-uh, those people aren't good communicators!" or "nuh-uh, those good communicators' speech is not well-modeled by this!"

I personally have had negative experiences with communicating with someone on this list. I don't particularly think I'm comfortable hashing it out in public, though you can dm me if you're that curious. Ultimately I don't think it matters - however many impressive great communicators are on that list - I don't feel willing to take their word (or well, your word about their words) that these norms are good unless I'm actually convinced myself.

 

Edit to add: I'd be good with standards, I just am not a fan of this particular way of pushing-for/implementing them.

Comment by YafahEdelman (yafah-edelman-1) on Basics of Rationalist Discourse · 2023-02-06T06:56:39.713Z · LW · GW

So far as I can tell, the actual claim you're making in the post is a pretty strong one, and I agree that if you believe that, you shouldn't represent your opinion as weaker than it is. However, I don't think the post provides much evidence to support the rather strong claim it makes. You say that the guidelines are:

much closer to being something like an objectively correct description of How To Do It Right than they are to a mere random user's personal opinion

and I think this might be true, but it would be a mistake for a random user, possibly new to this site, to accept your description over their own based on the evidence you provide. I worry that some will regardless given the ~declarative way your post seems to be framed.

Comment by YafahEdelman (yafah-edelman-1) on Basics of Rationalist Discourse · 2023-02-05T06:25:05.752Z · LW · GW

I feel uncomfortable with this post's framing. It feels like someone went into a garden I spend my time in and unilaterally put up a sign with a list of guidelines people should follow in the garden, with no ability to enforce these. I know that I can choose on my own whether or not to follow these guidelines, based on whether I think they are good ideas, but newcomers to the garden will see the sign and assume they have to follow them. I would have vastly preferred that the sign instead say "I personally think these norms would be neat, here's why."


(to clarify: the garden = lesswrong/the rationalist community. the sign = this post)

Comment by YafahEdelman (yafah-edelman-1) on AI will change the world, but won’t take it over by playing “3-dimensional chess”. · 2022-11-22T22:31:04.893Z · LW · GW

I think that if humans with AI advisors are approximately as competent as pure AI in terms of pure capabilities, I would expect that humans with AI advisors would outcompete the pure AI in practice, given that the humans appear more aligned and less likely to be dangerous than pure AI - a significant competitive advantage in a lot of power-seeking scenarios where gaining the trust of other agents is important.

Comment by YafahEdelman (yafah-edelman-1) on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-10T00:34:58.525Z · LW · GW

Could you clarify what egregores you meant when you said:

The egregores that are dominating mainstream culture and the global world situation

Comment by YafahEdelman (yafah-edelman-1) on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-10T00:28:04.354Z · LW · GW

Is it fair to say that organizations, movements, polities, and communities are all egregores?

Comment by YafahEdelman (yafah-edelman-1) on Comment reply: my low-quality thoughts on why CFAR didn't get farther with a "real/efficacious art of rationality" · 2022-06-09T22:24:47.269Z · LW · GW

What exactly is an egregore?