Posts

Changes in College Admissions 2024-04-24T13:50:03.487Z
On Llama-3 and Dwarkesh Patel’s Podcast with Zuckerberg 2024-04-22T13:10:02.645Z
AI #60: Oh the Humanity 2024-04-18T14:10:02.281Z
Childhood and Education Roundup #5 2024-04-17T13:00:03.015Z
Monthly Roundup #17: April 2024 2024-04-15T12:10:03.126Z
AI #59: Model Updates 2024-04-11T14:20:06.339Z
RTFB: On the New Proposed CAIP AI Bill 2024-04-10T18:30:08.410Z
Medical Roundup #2 2024-04-09T13:40:05.908Z
On the 2nd CWT with Jonathan Haidt 2024-04-05T17:30:05.223Z
AI #58: Stargate AGI 2024-04-04T13:10:06.342Z
Fertility Roundup #3 2024-04-02T14:50:04.554Z
Notes on Dwarkesh Patel’s Podcast with Sholto Douglas and Trenton Bricken 2024-04-01T19:10:12.193Z
AI #57: All the AI News That’s Fit to Print 2024-03-28T11:40:05.435Z
Economics Roundup #1 2024-03-26T14:00:06.332Z
On Lex Fridman’s Second Podcast with Altman 2024-03-25T12:20:08.780Z
AI #56: Blackwell That Ends Well 2024-03-21T12:10:05.412Z
On the Gladstone Report 2024-03-20T19:50:05.186Z
Monthly Roundup #16: March 2024 2024-03-19T13:10:05.529Z
On Devin 2024-03-18T13:20:04.779Z
AI #55: Keep Clauding Along 2024-03-14T15:40:09.335Z
On the Latest TikTok Bill 2024-03-13T18:50:05.398Z
OpenAI: The Board Expands 2024-03-12T14:00:04.110Z
What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members? 2024-03-11T14:55:05.128Z
AI #54: Clauding Along 2024-03-07T16:00:05.066Z
On Claude 3.0 2024-03-06T18:50:04.766Z
Read the Roon 2024-03-05T13:50:04.967Z
Housing Roundup #7 2024-03-04T15:00:08.192Z
Notes on Dwarkesh Patel’s Podcast with Demis Hassabis 2024-03-01T16:30:08.687Z
AI #53: One More Leap 2024-02-29T16:10:04.049Z
The Gemini Incident Continues 2024-02-27T16:00:05.648Z
AI #52: Oops 2024-02-22T21:50:07.393Z
The Gemini Incident 2024-02-22T21:00:04.594Z
Sora What 2024-02-22T18:10:05.397Z
The One and a Half Gemini 2024-02-22T13:10:04.725Z
A Tale of Two Restaurant Types 2024-02-21T13:50:05.133Z
AI #51: Altman’s Ambition 2024-02-20T19:50:07.439Z
The Third Gemini 2024-02-20T19:50:05.195Z
Monthly Roundup #15: February 2024 2024-02-20T13:10:05.286Z
More on the Apple Vision Pro 2024-02-13T17:40:05.388Z
On the Proposed California SB 1047 2024-02-12T16:40:04.854Z
One True Love 2024-02-09T15:10:05.298Z
AI #50: The Most Dangerous Thing 2024-02-08T14:30:13.168Z
On the Debate Between Jezos and Leahy 2024-02-06T14:40:05.487Z
On Dwarkesh’s 3rd Podcast With Tyler Cowen 2024-02-02T19:30:05.974Z
AI #49: Bioweapon Testing Begins 2024-02-01T15:30:04.690Z
Childhood and Education Roundup #4 2024-01-30T13:50:06.033Z
AI #48: The Talk of Davos 2024-01-25T16:20:26.625Z
Monthly Roundup #14: January 2024 2024-01-24T12:50:09.231Z
AI #48: Exponentials in Geometry 2024-01-18T14:20:07.869Z
On Anthropic’s Sleeper Agents Paper 2024-01-17T16:10:05.145Z

Comments

Comment by Zvi on On Llama-3 and Dwarkesh Patel’s Podcast with Zuckerberg · 2024-04-23T19:28:17.722Z · LW · GW

It is better than nothing, I suppose, but if they are keeping the safeties and restrictions on, then it will not teach you whether it is fine to open it up.

Comment by Zvi on RTFB: On the New Proposed CAIP AI Bill · 2024-04-11T20:26:38.499Z · LW · GW

My guess is that different people do it differently, and I am super weird.

For me a lot of the trick is consciously asking if I am providing good incentives, and remembering to consider what the alternative world looks like. 

Comment by Zvi on RTFB: On the New Proposed CAIP AI Bill · 2024-04-11T14:15:31.276Z · LW · GW

I don't see this response as harsh at all? I see it as engaging in detail with the substance, noting the bill is highly thoughtful overall, offering a bunch of explicit encouragement, defending a bunch of their specific choices, and saying I am very happy they offered this bill. It seems good and constructive to note where I think they are asking for too much? While noting that the right amount of 'any given person reacting thinks you went too far in some places' is definitely not zero.

Comment by Zvi on On the Gladstone Report · 2024-03-23T15:45:37.644Z · LW · GW

Excellent. On the thresholds, got it, sad that I didn't realize this, and that others didn't either from what I saw.

I appreciate the 'long post is long' problem, but if you don't want this to happen, I do think you need the warnings to be in all the places someone might see the 10^X numbers in isolation. It probably happens anyway, on the grounds of 'yes, that was technically not a proposal, but of course it will be treated like one.' And there's some truth in that, and in the idea that you want to use examples that are what you would actually pick right now if you had to pick what to actually do (or propose).

I do think the numbers I suggest are about as low as one could realistically get until we get much stronger evidence of impending big problems.

Comment by Zvi on [deleted post] 2024-03-22T12:23:42.236Z
Comment by Zvi on [deleted post] 2024-03-22T12:23:28.770Z

Secrecy is the exception. Mostly no one cares about your startup idea or will remember your hazardous brainstorm, no one is going to cause you trouble, and so on, and honesty is almost always the best policy.  

That doesn't mean always tell everyone everything, but you need to know what you are worried about if you are letting this block you. 

On infohazards, I think people were far too worried for far too long. The actual dangerous idea turned out to be that AGI was a dangerous idea, not any specific thing. There are exceptions, but you need a very good reason, and an even better reason if it is an individual you are talking with.

Trust in terms of 'they won't steal from me' or 'they will do what they promise' is another question with no easy answers.

If you are planning something radical enough to actually get people's attention (e.g. breaking laws, using violence, fraud of various kinds, etc) then you would want to be a lot more careful who you tell, but also - don't do that?

Comment by Zvi on Monthly Roundup #14: January 2024 · 2024-01-30T23:37:44.565Z · LW · GW

Sounds like a lot of it is that your scale is stingier than mine. And it makes sense that the recommendations come apart at the extreme high end, especially for older films. The 'for the time' here is telling.

Comment by Zvi on Monthly Roundup #14: January 2024 · 2024-01-30T00:22:54.794Z · LW · GW

On my scale, if I went 1 for 7 on finding 4.0+ films in a year, then yeah I'd find that a disappointing year. 

In other news, I tried out Scaruffi. I figured I'd watch the top pick. Number one was Citizen Kane, which I'd already watched (5.0, so that was a good sign), so I went with the next pick I hadn't seen, which was Repulsion. And... yeah, that was not a good selection method. Critics and I do NOT see eye to eye.

I also scanned their ratings of various other films, which generally seemed reasonable for films I'd seen, although with a very clear 'look at me I am a movie critic' bias, including one towards older films. I don't know how to correct for that properly. 

Comment by Zvi on Monthly Roundup #14: January 2024 · 2024-01-26T15:25:58.094Z · LW · GW

Real estate can definitely be a special case, because (1) you are also doing consumption, (2) it is non-recourse and you never get a margin call, which provides a lot of protection, and (3) the USG is massively subsidizing you doing that...

Comment by Zvi on Monthly Roundup #14: January 2024 · 2024-01-26T15:24:16.785Z · LW · GW

There are lead times to a lot of these actions, the costs of doing so are often fixed, and there is no reason to expect the rule changes not to happen. I buy that it is efficient to do so early.

'Greed' I consider a non sequitur here; the manager will maximize profit.

Comment by Zvi on Monthly Roundup #14: January 2024 · 2024-01-26T15:19:39.858Z · LW · GW

I'm curious how many films you saw - having only one above 3.5 on that scale seems highly disappointing. 

Comment by Zvi on AI #48: Exponentials in Geometry · 2024-01-23T12:50:42.685Z · LW · GW

Argument from incredulity? 

Comment by Zvi on On Anthropic’s Sleeper Agents Paper · 2024-01-17T20:30:13.175Z · LW · GW

Thanks for the notes!

As I understand that last point, you're saying that it's not a good point because it is false (hence my 'if it turns out to be true'). Weird that I've heard the claim from multiple places in these discussions. I assumed there was some sort of 'order matters in terms of pre-training vs. fine-tuning obviously, but there's a phase shift in what you're doing between them.' I also did wonder about the whole 'you can remove Llama-2's fine-tuning in 100 steps' thing, since if that is true then presumably order must matter within fine-tuning.

Anyone think there's any reason to think Pope isn't simply technically wrong here (including Pope)? 

Comment by Zvi on Medical Roundup #1 · 2024-01-17T15:55:58.540Z · LW · GW

Yep, whoops, fixing.

Comment by Zvi on Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training · 2024-01-16T14:46:38.269Z · LW · GW

That seems rather loaded in the other direction. How about “The evidence suggests that if current ML systems were going to deceive us in scenarios that do not appear in our training sets, we wouldn’t be able to detect this or change them not to unless we found the conditions where it would happen.”? 

Comment by Zvi on Announcing Balsa Research · 2024-01-15T00:41:34.439Z · LW · GW

Did you see (https://thezvi.substack.com/p/balsa-update-and-general-thank-you)? That's the closest thing available at the moment.

Comment by Zvi on Criticism of EA Criticism Contest · 2024-01-09T14:21:23.847Z · LW · GW

This post was, in the end, largely a failed experiment. It did win a lesser prize, and in a sense that proved its point, and I had fun doing it, but I do not think it successfully changed minds, and I don't think it has lasting value, although someone gave it a +9 so it presumably worked for them. The core idea - that EA in particular wants 'criticism' but it wants it in narrow friendly ways and it discourages actual substantive challenges to its core stuff - does seem important. But also this is LW, not EA Forum. If I had to do it over again, I wouldn't bother writing this.

Comment by Zvi on Announcing Balsa Research · 2024-01-09T14:16:53.152Z · LW · GW

I am flattered that someone nominated this but I don't know why. I still believe in the project, but this doesn't match at all what I'd look to in this kind of review? The vision has changed and narrowed substantially. So this is a historical artifact of sorts, I suppose, but I don't see why it would belong.

Comment by Zvi on Jailbreaking ChatGPT on Release Day · 2024-01-09T14:15:02.296Z · LW · GW

I think this post did good work in its moment, but it doesn't have that much lasting relevance and I can't see why someone would revisit it at this point. It shouldn't be going into any timeless best-of lists.

Comment by Zvi on On Bounded Distrust · 2024-01-09T14:13:29.891Z · LW · GW

I continue to frequently refer back to my functional understanding of bounded distrust. I now try to link to 'How To Bounded Distrust' instead because it's more compact, but this is, I think, the better full treatment for those who have the time. I'm sad this isn't seeing more support, presumably because it isn't centrally LW-focused enough? But to me this is a core rationalist skill not discussed enough, among its other features.

Comment by Zvi on On OpenAI’s Preparedness Framework · 2024-01-03T15:24:19.955Z · LW · GW

I do not monitor the EA Forum unless something triggers me to do so, which is rare, so I don't know which threads/issues this refers to. 

Comment by Zvi on AI #43: Functional Discoveries · 2023-12-24T19:22:30.222Z · LW · GW

Yes, I mean the software (I am not going to bother fixing it).

Comment by Zvi on AI #43: Functional Discoveries · 2023-12-21T20:52:17.410Z · LW · GW

I'm skeptical, but I love a good hypothesis, so: 

Comment by Zvi on OpenAI: Preparedness framework · 2023-12-20T14:34:16.583Z · LW · GW

I wrote up a ~4k-word take on the document that I'll likely post tomorrow - if you'd like to read the draft today you can PM/DM/email me. 

(Basic take: Mixed bag, definitely highly incomplete, some definite problems, but better than I would have expected and a positive update)

Comment by Zvi on Balsa Update and General Thank You · 2023-12-16T20:36:54.074Z · LW · GW

It's based on the general preference to be in place X instead of Y. If you could get equally attractive jobs in lousy places Y, that would take away that factor. There would still be many other reasons, but it would help.

Comment by Zvi on Balsa Update and General Thank You · 2023-12-14T13:02:22.155Z · LW · GW

They seem mutually compatible to me, same way you need food and water and oxygen. Economically, the median person needs (1) a decent job and (2) affordable housing and other necessary expenses, without any one thing that is so bad it binds and eats everything. Right now housing does that, and we also have big issues with education and health care, whereas food and clothing used to be problems and no longer are.

Comment by Zvi on The Best of Don’t Worry About the Vase · 2023-12-14T12:59:49.041Z · LW · GW

You're welcome. That's a reasonable point (I think that the LW mod team assembled the sequence here for me, and made different choices on what to include). I think they belong but also that one often has to make cuts.

Comment by Zvi on The Best of Don’t Worry About the Vase · 2023-12-13T22:46:12.969Z · LW · GW

Indeed, and the sequence is there, called Slack and the Sabbath.

I think I've given longtime readers enough hints for them to mostly know what Moloch's Army is, but unfortunately I doubt I'll be in the headspace to be able to write that one any time soon.

Comment by Zvi on AI #41: Bring in the Other Gemini · 2023-12-08T22:41:18.150Z · LW · GW

From my perspective here's what happened: I spent hours trying to parse his arguments. I then wrote an effort post, responding to something that seemed very wrong to me, that took me many hours, that was longer than the OP, and attempted to explore the questions and my model in detail. 

He wrote a detailed reply, which I thanked him for, ignoring the tone issues in question here and focusing on the details and disagreements. I spent hours processing it and replied in detail to each of his explanations in the reply, including asking many detailed questions, identifying potential cruxes, making it clear where I thought he was right about my mistakes, and so on. I read all the comments carefully, by everyone.

By this point this was, for me, an extraordinary commitment of time, and the whole thing was stressful. He left it at that. Which is fine, but I don't know how else I was supposed to 'follow up' at that point? I don't know what else someone seeking to understand is supposed to do.

I agree Nate's post was a mistake, and said so in OP here - either take the time to engage or don't engage. That was bad. But in general no, I do not think that the thing I am observing from Pope/Belrose is typical of LW/AF/rationalist/MIRI/etc behaviors to anything like the same degree that they consistently do it.

Nor do I get the sense that they are open to argument. Looking over Pope's reply to me, I basically don't see him changing his mind about anything, agreeing a good point was made, addressing my arguments or thoughts on their merits rather than correcting my interpretation of his arguments, asking me questions, suggesting cruxes and so on. Where he notes disagreement he says he's baffled anyone could think such a thing and doesn't seem curious why I might think it.

If people want to make a higher bid for me to engage more after that, I am open to hearing it. Otherwise, I don't see how to usefully do so in reasonable time in a way that would have value.

Comment by Zvi on AI #41: Bring in the Other Gemini · 2023-12-07T21:07:55.058Z · LW · GW

I agree on the margin I fall into the trap of doing more of this than I should. I do curate my Twitter feed to try and make this a better form of reaction than it would otherwise be, but I should raise the bar for that relative to my other bars. 

Always good to get reminders on this.

However, as you allude to, you're in the spot where you're already checking many of the same sources on Twitter, whereas one of the points of these posts for a lot of readers is so they don't have to do that. I'd definitely do it radically differently if I thought most readers of mine were going to be checking Twitter a lot anyway. 

Comment by Zvi on On ‘Responsible Scaling Policies’ (RSPs) · 2023-12-06T01:12:30.108Z · LW · GW

Ah, thanks for clearing that up. That definitely wasn't made clear to me.

Comment by Zvi on AI #39: The Week of OpenAI · 2023-11-25T13:38:40.316Z · LW · GW

Ah, he didn't realize he was getting signal-boosted, and he edited it after he got a bunch of inquiries. Under the old wording, I didn't think they had no alignment teams, but I read it as 'a new alignment team.' It makes sense under Google's general structure to have multiples; in fact it would be weird if you didn't.

Comment by Zvi on Possible OpenAI's Q* breakthrough and DeepMind's AlphaGo-type systems plus LLMs · 2023-11-24T01:34:12.039Z · LW · GW

How far does this go? Does this mean if I e.g. had stupid questions or musings about Q learning, I shouldn't talk about that in public in case I accidentally hit upon something or provoked someone else to say something?

Comment by Zvi on OpenAI: The Battle of the Board · 2023-11-22T19:12:51.713Z · LW · GW

My presumption is that doing this while leaving Altman in place as CEO risks Altman engaging in hostile action, and it represents a vote of no confidence in any case. It isn't a stable option. But I'd have gamed it out?

Comment by Zvi on OpenAI: Facts from a Weekend · 2023-11-22T18:14:12.545Z · LW · GW

It would be sheer insanity to have a rule that you can't vote on your own removal, I would think, or else a tied board will definitely shrink right away.

Comment by Zvi on OpenAI: Facts from a Weekend · 2023-11-20T16:27:54.988Z · LW · GW

Now the claim is that it's up to 650/770.

Comment by Zvi on OpenAI: Facts from a Weekend · 2023-11-20T16:22:59.257Z · LW · GW

Thanks.

Comment by Zvi on OpenAI: Facts from a Weekend · 2023-11-20T16:22:47.995Z · LW · GW

Yeah, should have put that in the main, forgot. Added now.

Comment by Zvi on OpenAI: Facts from a Weekend · 2023-11-20T16:19:12.125Z · LW · GW

Initially I saw it from Kara Swisher (~1mm views), then I saw it from a BB employee. I presume it is genuine.

Comment by Zvi on Bostrom Goes Unheard · 2023-11-14T12:05:59.798Z · LW · GW

I definitely do not think this is on the level of the EO or Summit. 

Comment by Zvi on Bostrom Goes Unheard · 2023-11-13T14:13:05.566Z · LW · GW

Vote via reactions agree or disagree (or unsure etc) to the following proposition: This post should also go on my Substack.

EDIT: Note that this is currently +5 agreement, but no one actually used a reaction (the icons available at the bottom right corner). Please use the reactions instead, this is much more useful than the +/-.

Comment by Zvi on Zvi's Manifold Markets House Rules · 2023-11-13T13:56:47.197Z · LW · GW

For my own markets, it is not retroactive if I didn't say it at the time (which I did for many markets). In that case, I would resist doing so exactly because I think this is a low-probability but possible event, and I continue to find it interesting. If it was 3% (which is where the Superconductor market is trading) I would be tempted to early resolve, but 5% isn't there yet.

To be clear, I WOULD likely resolve the Superconductor market now under this rule; I think it is trading at the interest rate.

For the bet, at 93% with active arguing on both sides and real trading, definitely not. Even if this were 95%, I wouldn't resolve, because it is based on a 150-to-1 baseline bet and there is a clear contingent arguing the other way. So if I made a UFO market like this I would say 'This cannot resolve early to YES, period.' 

For the election case, if I saw the desks collectively resolving I would resolve, but if something is going to be 99.99% a day later and it's 99% now, might as well wait. If it's going to be two months, do it now.

Comment by Zvi on AI #36: In the Background · 2023-11-04T13:41:07.616Z · LW · GW

Definitely good to keep this in mind, but to me some of this stuff seems obviously super impressive even if you do not know the technical details. The idea that generating complex, rich pictures on demand that mostly match the requested details is not impressive doesn't parse for me.

Comment by Zvi on AI #36: In the Background · 2023-11-04T13:38:49.722Z · LW · GW

Yep, as the edit says I don't think we disagree on the first point - there are versions that are oppressive, but also versions that are not that still have large positive effects. 

On the second point, I believe this is because it is much harder to introduce safeguards than to remove them: removal is a blunt target, whereas good safeguards have to be detailed to avoid false positives (which Llama-2 did not do a good job avoiding, but they did try). This is the key asymmetry here; the amount Meta (or anyone else) spends on tuning does not help.

Comment by Zvi on On the Executive Order · 2023-11-01T20:41:19.320Z · LW · GW

I think part 2 that details the reactions will provide important color here - if this had impacted those other than the major labs right away, I believe the reaction would have been quite bad, and that setting it substantially lower would have been a strategic error and also a very hard sell to the Biden Administration. But perhaps I am wrong about that. They do reserve the ability to change the threshold in the future. 

Comment by Zvi on Book Review: Going Infinite · 2023-10-31T18:34:17.679Z · LW · GW

This seems to be misunderstanding several points I was attempting to make so I'll clear those up here. Apologies if I gave the wrong idea.

  1. On longtermism I was responding to Lewis' critique, saying that you do not need full longtermism to care about the issues longtermists care about, that there were also (medium term?) highly valuable issues at stake that would already be sufficient to care about such matters. It was not intended as an assertion that longtermism is false, nor do I believe that. 
  2. I am asserting there that I believe that things other than subjective experience of pleasure/suffering matter, and that I think the opposite position is nuts both philosophically and in terms of it causing terrible outcomes. I don't think this requires belief in personhood mattering per se, although I would indeed say that it matters. And when people say 'I have read the philosophical literature on this and that's why nothing you think matters matters, why haven't you done your homework'... well, honestly, that's basically why I almost never talk philosophy online and most other people don't either, and I think that sucks a lot. But if you want to know what's behind that on a philosophical level? I mean, I've written quite a lot of words both here and in various places. But I agree that this was not intended to turn someone who had read 10 philosophy books and bought Benthamite Utilitarianism into switching.
  3. On Alameda, I was saying this from the perspective of Jane Street Capital. Sorry if that was unclear. As in, Lewis said JS looked at EAs suspiciously for not being greedy. Whereas I said no, that's false, EAs got looked at suspiciously because they left in the way they did. Nor is this claiming they were not doing it for the common good - it is saying that from the perspective of JSC, them saying it was 'for the common good' doesn't change anything, even if true. My guess, as is implied elsewhere, is that the EAs did believe this consciously. As for whether they 'should have been' loyal to JSC, my answer is they shouldn't have stayed out of loyalty, but they should have left in a more cooperative fashion.

Comment by Zvi on Book Review: Going Infinite · 2023-10-26T18:26:35.874Z · LW · GW

I would be ecstatic to learn that only 2% of Y Combinator companies that ever hit $100mm were engaged in serious fraud, and presume the true number is far higher.

And yes, YC does do that and Matt Levine frequently talks about the optimal amount of fraud (from the perspective of a VC) being not zero. For them, this is a feature, not a bug, up to a (very high) point.

I would hope we would feel differently, and also EA/rationality has had (checks notes) zero companies/people bigger than FTX/SBF unless you count any of Anthropic, OpenAI and DeepMind. In which case, well, other issues, and perhaps other types of fraud. 

Comment by Zvi on AI #35: Responsible Scaling Policies · 2023-10-26T17:48:50.719Z · LW · GW

If it wasn't Guzey I would have dismissed the whole thing as trolling or gaslighting, and I wouldn't have covered it beyond one line and a link. He's definitely very confused somewhere.

Comment by Zvi on Book Review: Going Infinite · 2023-10-25T22:44:46.300Z · LW · GW

Pretty big if true. If EV is actively censoring attempts to reflect upon what happened, then that is important information to pin down.

I would hope that if someone tried to do that to me, I would resign.