Why do you focus on this particular guy?
Because I saw a few posts discussing his trades, vs none for anyone else's, which in turn is presumably because he moved the market by ten percentage points or so. I'm not arguing that this "should" make him so salient, but given that he was salient I stand by my sense of failure.
https://www.cleanairkits.com/products/luggables is basically one side of a Corsi-Rosenthal box, takes up very little floor space if placed by a wall, and is quiet, affordable, and effective.
SQLite is ludicrously well tested; similar bugs in other databases just don't get found and fixed.
I don't remember anyone proposing "maybe this trader has an edge", even though incentivising such people to trade is the mechanism by which prediction markets work. Certainly I didn't, and in retrospect it feels like a failure not to have had 'the multi-million dollar trader might be smart money' as a hypothesis at all.
(4) is infeasible, because voting systems are designed so that nobody can identify which voter cast which vote - including that voter. This property is called "coercion resistance", which should immediately suggest why it is important!
I further object that any scheme to "win" an election by invalidating votes (or preventing them, etc) is straightforwardly unethical and a betrayal of the principles of democracy. Don't give the impression that this is acceptable behavior, or even funny to joke about.
let's not kill the messenger, lest we run out of messengers.
Unfortunately we're a fair way into this process, not because of downvotes[1] but rather because the comments are often dominated by uncharitable interpretations that I can't productively engage with.[2] I've had researchers and policy people tell me that reading the discussion convinced them that engaging when their work was discussed on LessWrong wasn't worth the trouble.
I'm still here, sad that I can't recommend it to many others, and wondering whether I'll regret this comment too.
I also feel there's a double standard, but don't think it matters much. Span-level reacts would make it a lot easier to tell what people disagree with though. ↩︎
Confidentiality makes any public writing far more effortful than you might expect. Comments which assume ill-faith are deeply unpleasant to engage with, and very rarely have any actionable takeaways. I've written and deleted a lot of other stuff here, and can't find an object-level description that I think is worth posting, but there are plenty of further reasons. ↩︎
I'd find the agree/disagree dimension much more useful if we split out "x people agree, y disagree" - as the EA Forum does - rather than showing the sum of weighted votes (and total number on hover).
I'd also encourage people to use the other reactions more heavily, including on substrings of a comment, but there's value in the anonymous dis/agree counts too.
(2) ✅ ... The first is from Chief of Staff at Anthropic.
The byline of that piece is "Avital Balwit lives in San Francisco and works as Chief of Staff to the CEO at Anthropic. This piece was written entirely in her personal capacity and does not reflect the views of Anthropic."
I do not think this is an appropriate citation for the claim. In any case, the claim that "They publicly state that it is not a matter of 'if' such artificial superintelligence might exist, but 'when'" simply seems to be untrue; both cited sources are peppered with phrases like 'possibility', 'I expect', 'could arrive', and so on.
If grading I'd give full credit for (2) on the basis of "documents like these" referring to Anthropic's constitution + system prompt and OpenAI's model spec, and more generous partials for the others. I have no desire to litigate details here though, so I'll leave it at that.
Proceeding with training or limited deployments of a "potentially existentially catastrophic" system would clearly violate our RSP, at minimum the commitment to define and publish ASL-4-appropriate safeguards and conduct evaluations confirming that they are not yet necessary. This footnote is referring to models which pose much lower levels of risk.
And it seems unremarkable to me for a US company to 'single out' a relevant US government entity as the recipient of a voluntary non-public disclosure of a non-existential risk.
More importantly, the average price per plate is not just a function of costs, it's a function of the value that people receive.
No, willingness to pay is (ideally) a function of value, but under reasonable competition the price should approach the cost of providing the meal. "It's weird" that a city with many restaurants and consumers, easily available information, low transaction costs, lowish barriers to entry, minimal externalities or returns to atypical scale, and good factor mobility (at least for labor, capital, and materials) should still have substantially elevated prices. My best guess is that barriers to entry aren't that low, but mostly that profit-seekers prefer industries with fewer of these conditions!
A more ambitious procedural approach would involve strong third-party auditing.
I'm not aware of any third party who could currently perform such an audit - e.g. METR disclaims that here. We committed to soliciting external expert feedback on capabilities and safeguards reports (RSP §7), and fund new third-party evaluators to grow the ecosystem. Right now though, third-party audit feels to me like a fabricated option rather than lack of ambition.
Thanks Daniel (and Dean) - I'm always glad to hear about people exploring common ground, and the specific proposals sound good to me too.
I think Anthropic already does most of these, as of our RSP update this morning! While I personally support regulation to make such actions universal and binding, I'm glad that we have voluntary commitments in the meantime:
- Disclosure of in-development capabilities - in section 7.2 (Transparency and External Input) of our updated RSP, we commit to public disclosures for deployed models, and to notify a relevant U.S. Government entity if any model requires stronger protections than the ASL-2 Standard. I think this is a reasonable balance for a unilateral commitment.
- Disclosure of training goal / model spec - as you note, Anthropic publishes both the constitution we train with and our system prompts. I'd be interested in also exploring model-spec-style aspirational documents too.
- Public discussion of safety cases and potential risks - there's some discussion in our Core Views essay and RSP; our capability reports and plans for safeguards and future evaluations are published here starting today (with some redactions for e.g. misuse risks).
- Whistleblower protections - RSP section 7.1.5 lays out our noncompliance reporting policy, and 7.1.6 a commitment not to use non-disparagement agreements which could impede or discourage publicly raising safety concerns.
This sounds to me like the classic rationalist failure mode of doing stuff which is unusually popular among rationalists, rather than studying what experts or top performers are doing and then adopting the techniques, conceptual models, and ways of working that actually lead to good results.
Or in other words, the primary thing when thinking about how to optimize a business is not being rationalist; it is to succeed in business (according to your chosen definition).
Happily there's considerable scholarship on business, and CommonCog has done a fantastic job organizing and explaining the good parts. I highly recommend reading and discussing and reflecting on the whole site - it's a better education in business than any MBA program I know of.
I further suggest that if using these defined terms, instead of including a table of definitions somewhere you include the actual probability range or point estimate in parentheses after the term. This avoids any need to explain the conventions, and makes it clear at the point of use that the author had a precise quantitative definition in mind.
For example: it's likely (75%) that flipping a pair of fair coins will get less than two heads, and extremely unlikely (0-5%) that most readers of AI safety papers are familiar with the quantitative convention proposed above - although they may (>20%) be familiar with the general concept. Note that the inline convention allows for other descriptions if they make the sentence more natural!
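To make the proposed convention concrete, here is a minimal sketch of a helper that appends the range at each use. The term-to-range mapping below is illustrative only - it is not an established standard, and the specific cutoffs are my own placeholders:

```python
# Illustrative mapping from calibrated terms to probability ranges.
# These particular cutoffs are placeholders, not a proposed standard.
TERMS = {
    "extremely unlikely": (0.0, 0.05),
    "unlikely": (0.05, 0.25),
    "may": (0.20, 1.0),
    "likely": (0.70, 0.80),
}

def annotate(term: str) -> str:
    """Render a term with its quantitative range inline, e.g. 'likely (70%-80%)'."""
    lo, hi = TERMS[term]
    return f"{term} ({lo:.0%}-{hi:.0%})"

print(annotate("likely"))  # likely (70%-80%)
```

The point of putting the numbers inline rather than in a glossary is that each sentence then carries its own quantitative meaning, with no need for the reader to have seen the convention before.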
For what it's worth, I endorse Anthropic's confidentiality policies, and am confident that everyone involved in setting them sees the increased difficulty of public communication as a cost rather than a benefit. Unfortunately, the unilateralist's curse and entangled truths mean that confidential-by-default is the only viable policy.
This feels pretty nitpick-y, but whether or not I'd be interested in taking a bet will depend on the odds - in many cases I might take either side, given a reasonably wide spread. Maybe append "at p >= 0.5" to the descriptions to clarify?
The shorthand trading syntax "$size @ $sell_percent / $buy_percent" is especially nice because it expresses the spread you'd accept to take either side of the bet, e.g. "25 @ 85/15 on rain tomorrow" to offer a bet of $25, selling if you think probability of rain is >85%, buying if you think it's <15%. Seems hard to build this into a reaction though!
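For anyone who does want to build tooling around it, a tiny parser sketch - the grammar and field names here are my own guesses at the informal shorthand, not a specification:

```python
import re

def parse_offer(offer: str) -> dict:
    """Parse the informal shorthand 'size @ sell/buy on <claim>'.

    Illustrative only: the syntax isn't standardized, and these
    field names are my own.
    """
    m = re.match(r"(\d+)\s*@\s*(\d+)\s*/\s*(\d+)\s+on\s+(.+)", offer)
    if not m:
        raise ValueError(f"not a recognized offer: {offer!r}")
    size, sell, buy, claim = m.groups()
    return {
        "stake_dollars": int(size),
        "sell_above": int(sell) / 100,  # sell if your probability exceeds this
        "buy_below": int(buy) / 100,    # buy if your probability is below this
        "claim": claim,
    }

offer = parse_offer("25 @ 85/15 on rain tomorrow")
```

The nice property is explicit in the parsed form: the gap between `sell_above` and `buy_below` is exactly the spread within which you'd decline both sides.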
Locked in! Whichever way this goes, I expect to feel pretty good about both the process and the outcome :-)
Nice! I look forward to seeing how this resolves.
Ah, by 'size' I meant the stakes, not the number of locks - did you want to bet the maximum $1k against my $10k, or some smaller proportional amount?
Article IV of the Certificate of Incorporation lists the number of shares of each class of stock, and as that's organized by funding round I expect that you could get a fair way by cross-referencing against public reporting.
Someone suggested that I point out that this is misleading. The board is not independent: it's two executives, one investor, and one other guy.
As of November this year, the board will consist of the CEO, one investor representative, and three members appointed by the LTBT. I think it's reasonable to describe that as independent, even if the CEO alone would not be, and to be thinking about the from-November state in this document.
If you're interested in this area, I suggest looking at existing scholarship such as Manon Revel's.
I think we're agreed then, if you want to confirm the size? Then we wait for 2027!
See e.g. Table 1 of https://nickbostrom.com/information-hazards.pdf
That works for me - thanks very much for helping out!
I don't recall any interpretability experiments with TinyStories offhand, but I'd be surprised if there aren't any.
I don't think that a thing you can only manufacture once is a practically usable lock; having multiple is also practically useful to facilitate picking attempts and in case of damage - imagine that a few hours into an open pick-this-lock challenge, someone bent a part such that the key no longer opens the lock. I'd suggest resolving neutral in this case as we only saw a partial attempt.
Other conditions:
- I think it's important that the design could have at least a thousand distinct keys which are non-pickable. It's fine if the theoretical keyspace is larger so long as the verified-secure keyspace is large enough to be useful, and distinct keys/locks need not be manufactured so long as they're clearly possible.
- I expect the design to be available in advance to people attempting to pick the lock, just as the design principles and detailed schematics of current mechanical locks are widely known - security through obscurity would not demonstrate that the design is better, only that as-yet-secret designs are harder to exploit.
I nominate @raemon as our arbiter, if both he and you are willing, and the majority vote or nominee of the Lightcone team if Raemon is unavailable for some reason (and @habryka approves that).
I think both Zach and I care about labs doing good things on safety, communicating that clearly, and helping people understand both what labs are doing and the range of views on what they should be doing. I shared Zach's doc with some colleagues, but won’t try for a point-by-point response. Two high-level responses:
First, at a meta level, you say:
- [Probably Anthropic (like all labs) should encourage staff members to talk about their views (on AI progress and risk and what Anthropic is doing and what Anthropic should do) with people outside Anthropic, as long as they (1) clarify that they're not speaking for Anthropic and (2) don't share secrets.]
I do feel welcome to talk about my views on this basis, and often do so with friends and family, at public events, and sometimes even in writing on the internet (hi!). However, it takes way more effort than you might think to avoid inaccurate or misleading statements while also maintaining confidentiality. Public writing tends to be higher-stakes due to the much larger audience and durability, so I routinely run comments past several colleagues before posting, and often redraft in response (including these comments and this very point!).
My guess is that most don’t do this much in public or on the internet, because it’s absolutely exhausting, and if you say something misremembered or misinterpreted you’re treated as a liar, it’ll be taken out of context either way, and you probably can’t make corrections. I keep doing it anyway because I occasionally find useful perspectives or insights this way, and think it’s important to share mine. That said, there’s a loud minority which makes the AI-safety-adjacent community by far the most hostile and least charitable environment I spend any time in, and I fully understand why many of my colleagues might not want to.
Imagine, if you will, trying to hold a long conversation about AI risk - but you can’t reveal any information about, or learned from, or even just informative about LessWrong. Every claim needs an independent public source, as do any jargon or concepts that would give an informed listener information about the site, etc.; you have to find different analogies and check that citations are public and for all that you get pretty regular hostility anyway because of… well, there are plenty of misunderstandings and caricatures to go around.
I run intro-to-AGI-safety courses for Anthropic employees (based on AGI-SF), and we draw a clear distinction between public and confidential resources specifically so that people can go talk to family and friends and anyone else they wish about the public information we cover.
Second, and more concretely, many of these asks are unimplementable for various reasons, and often gesture in a direction without giving reasons to think that there’s a better tradeoff available than we’re already making. Some quick examples:
- Both AI Control and safety cases are research areas less than a year old; we’re investigating them and e.g. hiring safety-case specialists, but best-practices we could implement don’t exist yet. Similarly, there simply aren’t any auditors or audit standards for AI safety yet (see e.g. METR’s statement); we’re working to make this possible but the thing you’re asking for just doesn’t exist yet. Some implementation questions that “let auditors audit our models” glosses over:
  - If you have dozens of organizations asking to be auditors, and none of them are officially auditors yet, what criteria do you use to decide who you collaborate with?
  - What kind of pre-deployment model access would you provide? If it’s helpful-only or other nonpublic access, do they meet our security bar to avoid leaking privileged API keys? (We’ve already seen unauthorized sharing or compromise lead to serious abuse.)
  - How do you decide who gets to say what about the testing? What if they have very different priorities than you and think that a different level of risk or a different kind of harm is unacceptable?
- I strongly support Anthropic’s nondisclosure of information about pretraining. I have never seen a compelling argument that publishing this kind of information is, on net, beneficial for safety.
- There are many cases where I’d be happy if Anthropic shared more about what we’re doing and what we’re thinking about. Some of the things you’re asking about I think we’ve already said, e.g. for [7] LTBT changes would require an RSP update, and for [17] our RSP requires us to “enforce an acceptable use policy [against …] using the model to generate content that could cause severe risks to the continued existence of humankind”.
So, saying “do more X” just isn’t that useful; we’ve generally thought about it and concluded that the current amount of X is our best available tradeoff at the moment. For many more of the other asks above, I just disagree with implicit or explicit claims about the facts in question. Even for the communication issues where I’d celebrate us sharing more—and for some I expect we will—doing so is yet another demand on heavily loaded people and teams, and it can take longer than we’d like to find the time.
I agree with you that this feels like a 'compact crux' for many parts of the agenda. I'd like to take your bet, let me reflect if there's any additional operationalizations or conditioning.
quick proposals:
- I win at the end of 2026, if there has not been a formally-verified design for a mechanical lock, OR the design does not verify that it cannot be mechanically picked, OR fewer than three consistent physical instances have been manufactured. (e.g. a total of three that includes prototypes or other designs doesn't count)
- You win if at the end of 2027, there have been credible and failed expert attempts to pick such a lock (e.g. an open challenge at Defcon). I win if there is a successful attempt.
- Bet resolves neutral, and we each donate half our stakes to a mutually agreed charity, if it's unclear whether production actually happened, or there were no credible attempts to pick a verified lock.
- Any disputes resolved by the best judgement of an agreed-in-advance arbiter; I'd be happy with the LessWrong team if you and they also agree.
Hmm, usually when I go looking it's because I remember reading a particular post, but there's always some chance of getting tab-sniped into reading just a few more pages...
tags: I use them semi-regularly to find related posts when I want to refer to previous discussions of a topic. They work well for that, and I've occasionally added tags when the post I was looking for wasn't tagged yet.
I think it's worth distinguishing between what I'll call 'intrinsic preference discounting', and 'uncertain-value discounting'. In the former case, you inherently care less about what happens in the (far?) future; in the latter case you are impartial but rationally discount future value based on your uncertainty about whether it'll actually happen - perhaps there'll be a supernova or something before anyone actually enjoys the utils! Economists often observe the latter, or some mixture, and attribute it to the former.
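A toy calculation (my own framing, not drawn from any particular economics source) shows why the two are so easily confounded in the data: a constant per-period survival probability produces exactly the same exponential discount curve as a pure time preference, so observed behavior alone can't separate them.

```python
def present_value(utils: float, years: int, survival_p: float) -> float:
    """Expected value today of `utils` enjoyed in `years` years, if the
    world survives each year independently with probability `survival_p`.

    Mathematically identical in form to intrinsic exponential discounting
    with discount factor `survival_p` - which is why observed discount
    rates can't distinguish the two on their own.
    """
    return utils * survival_p ** years

# 100 utils in 30 years, with a 99%-per-year chance that anyone is
# still around to enjoy them:
print(round(present_value(100, 30, 0.99), 1))  # 74.0
```

An impartial agent facing a 1%/year extinction risk and an agent who intrinsically discounts the future at 1%/year make identical choices here; only their responses to changed survival odds would differ.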
I am a big fan of verification in principle, but don't think it's feasible in such broad contexts in practice. Most of these proposals are far too vague for me to have an opinion on, but I'd be happy to make bets with anyone interested that
- (3) won't happen (with nontrivial bounds) due to the difficulty of modeling nondeterministic GPUs, varying floating-point formats, etc.
- (8) won't be attempted, or will fail at some combination of design, manufacture, or just-being-pickable. This is a great proposal and a beautifully compact crux for the overall approach.
- with some mutually agreed operationalization against the success of other listed ideas with a physical component
for up to 10k of my dollars at up to 10:1 odds (i.e. min your $1k), placed by end of 2024, to resolve by end of 2026 or as agreed. (I'm somewhat dubious about very long-term bets for implementation reasons.) I'm also happy to discuss conditioning on a registered attempt.
For what it's worth, I was offered and chose to decline co-authorship of Towards Guaranteed Safe AI. Despite my respect for the authors and strong support for this research agenda (as part of a diverse portfolio of safety research), I disagree with many of the central claims of the paper, and see it as uncritically enthusiastic about formal verification and overly dismissive of other approaches.
I do not believe that any of the proposed approaches actually guarantee safety. Safety is a system property,[1] which includes the surrounding social and technical context; and I see serious conceptual and in-practice-fatal challenges in defining both a world-model and a safety spec. (exercise: what could these look like for a biology lab? Try to block unsafe outcomes without blocking Kevin Esvelt's research!)
I do think it's important to reach appropriately high safety assurances before developing or deploying future AI systems which would be capable of causing a catastrophe. However, I believe that the path there is to extend and complement current techniques, including empirical and experimental approaches alongside formal verification - whatever actually works in practice.
- My standard recommendation on this is ch. 1-3 of Engineering a Safer World; I expect her recent book is excellent but haven't yet read enough to recommend it. ↩︎
Mark Zuckerberg has concluded that it is better to open source powerful AI models for the white hats than to try to keep them secret from the black hats.
Of course, Llama models aren't actually open source - here's my summary, here's LeCun testifying under oath (which he regularly contradicts on twitter). I think this substantially undercuts Zuckerberg's argument and don't believe that it's a sincere position, just tactically useful right now.
I'm just guessing from affect, yep, though I still think that "large project and many years of effort" typically describes considerably smaller challenges than my expectation for producing a complete autofac.
On the steel-vs-vitamins question, I'm thinking about "effort" - loose proxy measurements would be the sale value or production headcount rather than kilograms of output. Precisely because steel is easier to transform, it's much less valuable to do so and thus I expect the billions-of-autofacs to be far less economically valuable than a quick estimate might show. Unless of course they start edging in on vitamin production, but then that's the hard rest-of-the-industrial-economy problem...
To be clear, I think the autofac concept - with external "vitamins" for electronics etc - is in fact technically feasible right now and if teleoperated has been for decades. It's not economically competitive, but that's a totally different target.
I think you're substantially underestimating the difficulty here, and the proportion of effort which goes into the "starter pack" (aka vitamins) relative to steelworking.
If you're interested in taking this further, I'd suggest:
- getting involved in the RepRap project
- initially focusing on just one of producing parts, assembling parts, or controlling machinery. If your system works when teleoperated, or can assemble but not produce a copy of itself, etc., that's already a substantial breakthrough.
- reading up on NASA's studies on self-replicating machinery, e.g. Freitas 1981 or this paper from 1982 or later work like Chirikjian 2004.
Well, it seems like a no-brainer to store money you intend to spend after age 60 in such an account; for other purposes it does seem less universally useful. I'd also check the treatment of capital gains, and whether it's included in various assets tests; both can be situationally useful and included in some analogues elsewhere.
Might be worth putting a short notice at the top of each post saying that, with a link to this post or whatever other resource you'd now recommend? (inspired by the 'Attention - this is a historical document' on e.g. this PEP)
I don't think any of these amount to a claim that "to reach ASI, we simply need to develop rules for all the domains we care about". Yes, AlphaGo Zero reached superhuman levels on the narrow task of playing Go, and that's a nice demonstration that synthetic data could be useful, but it's not about ASI and there's no claim that this would be either necessary or sufficient.
(not going to speculate on object-level details though)
I think that personal incentives is an unhelpful way to try and think about or predict board behavior (for Anthropic and in general), but you can find the current members of our board listed here.
Is there an actual way to criticize Dario and/or Daniela in a way that will realistically be given a fair hearing by someone who, if appropriate, could take some kind of action?
For whom to criticize him/her/them about what? What kind of action are you imagining? For anything I can imagine actually coming up, I'd be personally comfortable raising it directly with either or both of them in person or in writing, and believe they'd give it a fair hearing as well as appropriate follow-up. There are also standard company mechanisms that many people might be more comfortable using (talk to your manager or someone responsible for that area; ask a maybe-anonymous question in various fora; etc). Ultimately executives are accountable to the board, which will be majority appointed by the long-term benefit trust from late this year.
Makes sense - if I felt I had to use an anonymous mechanism, I can see how contacting Daniela about Dario might be uncomfortable. (Although to be clear I actually think that'd be fine, and I'd also have to think that Sam McCandlish as responsible scaling officer wouldn't handle it)
If I was doing this today I guess I'd email another board member; and I'll suggest that we add that as an escalation option.
OK, let's imagine I had a concern about RSP noncompliance, and felt that I needed to use this mechanism.
(in reality I'd just post in whichever slack channel seemed most appropriate; this happens occasionally for "just wanted to check..." style concerns and I'm very confident we'd welcome graver reports too. Usually that'd be a public channel; for some compartmentalized stuff it might be a private channel and I'd DM the team lead if I didn't have access. I think we have good norms and culture around explicitly raising safety concerns and taking them seriously.)
As I understand it, I'd:
- Remember that we have such a mechanism and bet that there's a shortcut link. Fail to remember the shortlink name (reports? violations?) and search the list of "rsp-" links; ah, it's rsp-noncompliance. (just did this, and added a few aliases)
- That lands me on the policy PDF, which explains in two pages the intended scope of the policy, who's covered, the procedure, etc. and contains a link to the third-party anonymous reporting platform. That link is publicly accessible, so I could e.g. make a report from a non-work device or even after leaving the company.
- I write a report on that platform describing my concerns[1], optionally uploading documents etc. and get a random password so I can log in later to give updates, send and receive messages, etc.
- The report by default goes to our Responsible Scaling Officer, currently Sam McCandlish. If I'm concerned about the RSO or don't trust them to handle it, I can instead escalate to the Board of Directors (current DRI Daniela Amodei)
- Investigation and resolution obviously depends on the details of the noncompliance concern.
There are other (pretty standard) escalation pathways for concerns about things that aren't RSP noncompliance. There's not much we can do about the "only one person could have made this report" problem beyond the included strong commitments to non-retaliation, but if anyone has suggestions I'd love to hear them.
I clicked through just now to the point of cursor-in-textbox, but not submitting a nuisance report. ↩︎
I am a current Anthropic employee, and I am not under any such agreement, nor has any such agreement ever been offered to me.
If asked to sign a self-concealing NDA or non-disparagement agreement, I would refuse.
He talked to Gladstone AI founders a few weeks ago; AGI risks were mentioned but not in much depth.
see also: the tyranny of the marginal user
Incorporating as a Public Benefit Corporation already frees directors' hands; Delaware Title 8, §365 requires them to "balance the pecuniary interests of the stockholders, the best interests of those materially affected by the corporation’s conduct, and the specific public benefit(s) identified in its certificate of incorporation".
Without wishing to discourage these efforts, I disagree on a few points here:
Still, the biggest opportunities are often the ones with the lowest probability of success, and startups are the best structures to capitalize on them.
If I'm looking for the best expected value around, that's still monotonic in the probability of success! There are good reasons to think that most organizations are risk-averse (relative to the neutrality of linear $=utils) and startups can be a good way to get around this.
Nonetheless, I remain concerned about regressional Goodhart; and that many founders naively take on the risk appetite of funders who manage a portfolio, without the corresponding diversification (if all your eggs are in one basket, watch that basket very closely). See also Inadequate Equilibria and maybe Fooled by Randomness.
Meanwhile, strongly agreed that AI safety driven startups should be B corps, especially if they're raising money.
Technical quibble; "B Corp" is a voluntary private certification; PBC is a corporate form which imposes legal obligations on directors. I think many of the B Corp criteria are praiseworthy, but this is neither necessary nor sufficient as an alternative to PBC status - and getting certified is probably a poor use of time and attention for a startup when the founders' time and attention are at such a premium.
My personal opinion is that starting a company can be great, but I've also seen several fail due to the gaps between their personal goals, a work-it-out-later business plan, and the duties that you/your board owes to your investors.
IMO any purpose-driven company should be founded as a Public Benefit Corporation, to make it clear in advance and in law that you'll also consider the purpose and the interests of people materially affected by the company alongside investor returns. (cf § 365. Duties of directors)
Enforcement of mitigations when it's someone else who removes them won't be seen as relevant, since in this religion a contributor is fundamentally not responsible for how the things they release will be used by others.
This may be true of people who talk a lot about open source, but among actual maintainers the attitude is pretty different. If some user causes harm with an overall positive tool, that's on the user; but if the contributor has built something consistently or overall harmful that is indeed on them. Maintainers tend to avoid working on projects which are mostly useful for surveillance, weapons, etc. for pretty much this reason.
Source: my personal experience as a maintainer and PSF Fellow, and the multiple Python core developers I just checked with at the PyCon sprints.