I don't know what legible/transferable evidence would be. I've audited a lot of courses at a lot of different universities. Anecdote, sorry.
One thing I like about this is making the actual difficulty deltas between colleges more felt/legible/concrete (by anyone who takes the exams). What I might do in your system at my IQ level (which is pretty high outside of EA but pretty mediocre inside EA) is knock out a degree at an easy university to get warmed up then study for years for a degree at a hard school[1].
In real life, I can download or audit courses from whatever university I want, but I don't know what the grading curve is, so when 5/6 exercises are too hard I don't know if that's because I'm dumb or if 1/6 is B+ level performance. This is a way that the current system underserves a credential-indifferent autodidact. It's really hard to know how difficult a course is supposed to be when you're isolated from the local conditions that make up the grading curve!
Another thing I like about your system is tutoring markets separated from assessment companies. Why is it that we bundle gatekeeping/assessment with preparation? Unbundling might help maintain objective standards and get rid of problems that look like "the professor feels too much affection for the student to fail them".
This is all completely separate from the reason your proposal is a hard social problem / a complete nonstarter, which is that I don't think the system is "broken" right now. There's an idea you might pick up if you read the smarter leftists, which is that credentialism, especially at elite levels, preserves privilege and status as a first-class use case. This is not completely false today, not least because the further you go back in time in western universities, the truer it is.
my prior, 15 years ago, looked like "stanford has a boating scholarship, so obviously selectivity is a wealth/status thing and not reflective of scholarship or rigor", so the fact that I now believe "more selective colleges have harder coursework" means I've seen a lot of evidence. It pains me, believe me, but reality doesn't care :) ↩︎
I get pretty intense visceral outrage at overreaches in immigration enforcement, just seems the height of depravity. I've looked for a lot of different routes to mental coolness over the last decade (since Trump started his speeches); they mostly amount to staying busy and distracted. Just seems like a really cost-ineffective kind of activism to get involved in. Bankrolling lawyers for random people isn't really in my action space, and if it were, I'd have opportunity cost to consider.
seems like there's more prior literature than I thought https://en.wikipedia.org/wiki/Role-based_access_control
are SOTA configuration languages sufficient for AI proliferation?
My main aim is to work on "hardening the box", i.e. eliminating software bugs so containment schemes don't fail for preventable reasons. But in the famous 4o system card example, the one that looks a little like docker exfiltration, the situation arose from user error (wild guess: in compose.yaml or the shell script invoking docker run).
On a Linux machine. Here's an example Nix file:
users.users =
  let
    authorized-key-files = [
      "${keyspath}/id_server_ed25519.pub"
      "${keyspath}/id_qd_ed25519.pub"
    ];
  in
  {
    # Agent account: not a normal user, docker group only.
    unpermissioneduser = {
      isNormalUser = false;
      extraGroups = [ "docker" ];
      description = "AgentID=claude-0x0000";
    };
    # Human delegator account with broader group membership and SSH access.
    coreuser = {
      isNormalUser = true;
      extraGroups = [
        "wheel"
        "networkmanager"
        "docker"
        "video"
      ];
      home = "/home/coreuser";
      description = "Core User (delegator of unpermissioneduser)";
      shell = pkgs.fish;
      openssh.authorizedKeys.keyFiles = authorized-key-files;
    };
    root = {
      openssh.authorizedKeys.keyFiles = authorized-key-files;
      shell = pkgs.fish;
    };
  };
You can see that unpermissioneduser has fewer abilities than coreuser. So you can imagine I just say that unpermissioneduser is an agent and coreuser is the human delegator.
Nix is simply a fully declarative way to do standard Linux permissioning (a feature not in the snippet is allocating chmod/chown information for particular users to particular parts of the filesystem). There are no conceptual leaps from the status quo.
agents and delegation
Is Linux all that great when you want to keep track of who's a delegatee and who's a delegator? Do we need a more graph-flavored version of Linux userspace/permissions? I'm talking about once we're reasoning about proliferating agents and their permissions on various machines. Linux groups do not support inheritance, but a user can be a member of many groups. So you could in principle MVP a graph-based permissions DSL (perhaps in Nix) on top of the existing Linux user/group ontology (80% confident), but it could be hairier than making a new ontology. idk.
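To make that concrete, here's a minimal sketch (all names and the "compile down to groups" scheme are hypothetical, not an existing tool) of a graph-flavored delegation layer that bottoms out in plain users and groups:

from dataclasses import dataclass, field

@dataclass
class Principal:
    name: str
    groups: set[str] = field(default_factory=set)      # plain Linux groups requested
    delegators: set[str] = field(default_factory=set)  # who vouches for this principal

@dataclass
class PermissionGraph:
    principals: dict[str, Principal] = field(default_factory=dict)

    def add(self, name, groups, delegator=None):
        delegators = {delegator} if delegator else set()
        self.principals[name] = Principal(name, set(groups), delegators)

    def effective_groups(self, name):
        # A delegatee's effective groups are capped by its delegators' effective
        # groups -- the inheritance that raw Linux groups don't give you.
        p = self.principals[name]
        if not p.delegators:
            return p.groups
        ceiling = set().union(*(self.effective_groups(d) for d in p.delegators))
        return p.groups & ceiling

graph = PermissionGraph()
graph.add("coreuser", {"wheel", "networkmanager", "docker", "video"})
graph.add("agent-claude-0x0000", {"docker"}, delegator="coreuser")
graph.add("subagent-claude-0x0001", {"docker", "video"}, delegator="agent-claude-0x0000")

# The sub-agent asked for video, but its delegator never had it, so the
# compiled-down view only grants {'docker'}.
print(graph.effective_groups("subagent-claude-0x0001"))

Compiling something like this down to users.users and users.groups is the MVP I mean above; whether that stays pleasant once the delegation graph gets deep is the part I'm less sure about.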
Examples of promising risk-targeted applications
This section reeks of the guaranteed safe AI agendas, a lot of agreement. For example, using formal methods to harden any box we try to put the AI in is a kind of defensive acceleration that doesn't work (too expensive) until certain pre-ASI stages of development. I'm working on formal verification agents along these lines right now.
@Tyra Burgess and I wrote down a royalty-aware payout function yesterday:
For a type $T$, let $\mathrm{Ant}(T)$ be the "left closure under implication", or the admissible antecedents: i.e., the set of all the antecedents $A$ in the public ledger such that $A \to T$. $\mathrm{price}(A)$ is the price that a proposition $A$ was listed for (admitting summing over duplicates). Suppose players $p_1, \dots, p_n$ have previously proven $A_1 \to T, \dots, A_n \to T$, and $\mathrm{Ant}(T)$ is none other than the set of all $A_i$ from $i = 1$ to $n$.
We would like to fix an $\varepsilon$ (which could be fairly big) and say that the royalty-aware payout given $\varepsilon$, upon an introduction of $T$ to the database at $\mathrm{price}(T)$, is such that a suitably normalized, $\varepsilon$-discounted share of $\mathrm{price}(T)$ is paid out to each player $p_i$.
This seems vaguely like it has some desirable properties, like the decay of a royalty with length in implications separating it from the currently outpaying type. You might even be able to reconcile it with cartesian-closedness / currying, where $A \to (B \to C)$ behaves equivalently to $(A \times B) \to C$ under the payout function.
I think to be more theoretically classy, royalties would arise from recursive structure, but it may work well enough without recursion. It'd be fun to advance all the way to coherence and incentive-compatible proofs, but I certainly don't see myself doing that.
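To make the shape concrete, here's a toy sketch that assumes the royalty decays geometrically in epsilon with implication-distance and that shares are normalized against the incoming price; treat the exact weighting as my illustrative assumption rather than the precise function we wrote down.

# Toy royalty-aware payout: each prior prover of an admissible antecedent gets
# a share of price(T) that decays geometrically with the implication-distance
# of their antecedent from the newly introduced type T. The weighting and
# normalization here are illustrative assumptions.
def royalty_payouts(price_T, epsilon, distances):
    # distances: player -> implication-distance of their antecedent from T
    weights = {player: epsilon ** d for player, d in distances.items()}
    total = sum(weights.values())
    if total == 0:
        return {player: 0.0 for player in distances}
    return {player: price_T * w / total for player, w in weights.items()}

# Example: T pays out 100; p1 proved A1 -> T directly (distance 1),
# p2 proved A2 -> A1 further up the chain (distance 2).
print(royalty_payouts(100.0, 0.5, {"p1": 1, "p2": 2}))  # p1 ~66.7, p2 ~33.3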
I want a name for the following principle:
the world-spec gap hurts you more than the spec-component gap
I wrote it out much like this a couple years ago and Zac recently said the same thing.
I'd love to be able to just say "the <one to three syllables> principle", yaknow?
I'm working on making sure we get high quality critical systems software out of early AGI. Hardened infrastructure buys us a lot in the slightly crazy story of "self-exfiltrated model attacks the power grid", but buys us even more in less crazy stories about all the software modules adjacent to AGI having vulnerabilities rapidly patched at crunchtime.
<standup_comedian>
What's the deal with evals </standup_comedian>
epistemic status: tell me I'm wrong.
Funders seem particularly enchanted with evals, which seem to be defined as "benchmarks, but probably for scaffolded systems, and with scoring that is harder than scoring most of what we call benchmarks".
I can conjure a theory of change. It's like, 1. if measurement is bad then we're working with vibes, so we'd like to make measurement good. 2. if measurement is good then we can demonstrate to audiences (especially policymakers) that warning shots are substantial signals and not base it on vibes. (question: what am I missing?)
This is an at least coherent reason why dangerous capability evals pay into governance strats in such a way that maybe philanthropic pressure is correct. It relies on cruxes that I don't share, like that a principled science of measurement would outperform vibes in a meme war in the first place, but it at least has a crux that works as a fulcrum.
Everything worth doing is at least a little dual use, I'm not attacking anybody. But it's a faustian game where, like benchmarks, evals pump up races cuz everyone loves it when number go up. The primal urge to see number go up infects every chart with an x and y axis, in other words, evals come with steep capabilities externalities because they spray the labs with more charts that number hasn't gone up on yet, daring and challenging the lab to step up their game. So the theory of change in which, in spite of this dynamic, an eval is differentially defensive just has to meet a really high standard.
A further problem: the theory of change where we can have really high quality / inarguable signals as warning shots instead of vibes as warning shots doesn't even apply to most of the evals I'm hearing about from the nonprofit and independent sector. I'm hearing about evals that make me go, "huh, I wonder what's differentially defensive about that?" and I don't get good answers. Moreover, an ancient wisdom says "never ask a philanthropist for something capitalism gives you for free". The case for an individual eval's unlikeliness to be created by default lab incentives needs to be especially strong, cuz when it isn't strong one is literally doing the lab's work for them.
The more I learn about measurement, the less seriously I take it
I'm impressed with models that accomplish tasks in zero or one shot with minimal prompting skill. I'm not sure what galaxy brained scaffolds and galaxy brained prompts demonstrate. There's so much optimization in the measurement space.
I shipped a benchmark recently, but it's secretly a synthetic data play so regardless of how hard people try in order to score on it, we get synthetic data out of it which leads to finetune jobs which leads to domain specific models that can do such tasks hopefully with minimal prompting effort and no scaffolding.
$PERSON at $LAB once showed me an internal document saying that there are bad benchmarks - dangerous capability benchmarks - that are used negatively, so unlike positive benchmarks where the model isn't shipped to prod if it performs under a certain amount, these benchmarks could block a model from going to prod that performs over a certain amount. I asked, "you create this benchmark like it's a bad thing, and it's a bad thing at your shop, but how do you know it won't be used in a sign-flipped way at another shop?" and he said "well we just call it EvilBench and no one will want to score high on EvilBench".
It sounded like a ridiculous answer, but is maybe actually true in the case of labs. It is extremely not true in the open weight case, obviously huggingface user Yolo4206969 would love to score high on EvilBench.
I'm surprised to hear you say that, since you write
Upfront, I want to clarify: I don’t believe or wish to claim that GSAI is a full or general panacea to AI risk.
I kinda think anything which is not a panacea is swiss cheese, that those are the only two options.
It's a matter of what sort of portfolio can lay down slices of swiss cheese at what rate and with what uncorrelation. And I think in this way GSAI is antifragile to next year's language models, which is why I can agree mostly with Zac's talk and still work on GSAI (I don't think he talks about my cruxes).
Specifically, I think the guarantees of each module and the guarantees of each pipe (connecting the modules) isolate/restrict the error to the world-model gap or the world-spec gap, and I think the engineering problems of getting those guarantees are straightforward / not conceptual problems. Furthermore, I think the conceptual problems with reducing the world-spec gap below some threshold presented by Safeguarded's TA1 are easier than the conceptual problems in alignment/safety/control.
I gave a lightning talk with my particular characterization, and included "swiss cheese" i.e. that gsai sources some layers of swiss cheese without trying to be one magic bullet. But if people agree with this, then really guaranteed-safe ai is a misnomer, cuz guarantee doesn't evoke swiss cheese at all
For anecdata: I'd be really jazzed about 3 or 4; 5 might be a little crazy, but I'm somewhat open to that or more.
yeah last week was grim for a lot of people with r1's implications for proliferation and the stargate fanfare after inauguration. Had a palpable sensation of it pivoting from midgame to endgame, but I would doubt that sensation is reliable or calibrated.
Tmux allows you to set up multiple panes in your terminal that keep running in the background. Therefore, if you disconnect from a remote machine, scripts running in tmux will not be killed. We tend to run experiments across many tmux panes (especially overnight).
Does no one use suffix & disown, which sends a command to a background process that doesn't depend on the ssh process, or prefix nohup, which does the same thing? You have to make sure any logging that goes to stdout goes to a log file instead (and in this way tmux or screen are better).
Your remark about uv: forgot to mention that it's effectively a poetry replacement, too
Like htop: btm and btop are a little newer, nicer to look at
Also for JSON: jq. cat file.json | jq for pretty-printing JSON to the terminal.
Feels like a MATS-like program in India is a big opportunity. When I went to EAG in Singapore a while ago there were so many people underserved by the existing community building and mentorship organizations cuz of visa issues.
The story I roughly understand is that this was within Epoch's mandate in the first place because they wanted to forecast on benchmarks but didn't think existing benchmarks were compelling or good enough, so they had to take matters into their own hands. Is that roughly consensus, or true? Why is FrontierMath a safety project? I haven't seen adequate discussion on this.
Can't relate. Don't particularly care for her content (tho audibly laughed at a couple examples that you hated), but I have no aversion to it. I do have aversion to the way you appealed to datedness as if that matters. I generally can't relate to people who find cringiness in the way you describe significantly problematic, really.
People like authenticity, humility, and irony now, both in the content and in its presentation.
I could literally care less, omg--- but I'm unusually averse to irony. Authenticity is great, humility is great most of the time, why is irony even in the mix?
Tho I'm weakly with you that engagement farming leaves a bad taste in my mouth.
Update: new funding call from ARIA calls out the Safeguarded/Gatekeeper stack in a video game directly
Creating (largely) self-contained prototypes/minimal-viable-products of a Safeguarded AI workflow, similar to this example but pushing for incrementally more advanced environments (e.g. Atari games).
I tried a little myself too. Hope I didn't misremember.
Very anecdotally, I've talked to some extremely smart people who I would guess are very good at making progress on hard problems, but just didn't think too hard about what solutions help.
A few of the dopest people I know, who I'd love to have on the team, fall roughly into the category of "engaged a little with lesswrong, grok the core pset better than most 'highly involved' people, but are working on something irrelevant and not even trying cuz they think it seems too hard". They have some thoughtful p(doom), but assume they're powerless.
Richard Ngo tweeted recently that it was a mistake to design the AGI safety fundamentals curriculum to be broadly accessible, that if he could do it over again there'd be punishing problem sets that alienate most people.
The upvotes and agree votes on this comment updated my perception of the rough consensus about MATS and streetlighting. I previously would have expected fewer people to evaluate MATS that way.
As someone who, isolated and unfunded, went on months-long excursions into the hard version of the pset multiple times and burned out each time, I felt extremely validated when you verbally told me a fragment of this post around a fire pit at illiad. The incentives section of this post is very grim, but very true. I know naive patches to the funding ecosystem would also be bad (easy for grifters, etc), but I feel very much like I and we were failed by funders. I could've been stronger etc, I could've been in Berkeley during my attempts instead of Philly, but "why not just be heroically poor and work between shifts at a waged job" is... idk man, maybe fine with the right infrastructure, but I didn't have that infrastructure. (Again, I don't know a good way to have fixed the funding ecosystem, so funders reading this shouldn't feel too attacked).
(Epistemic status: have given up on the hard pset, but I think I have a path to adding some layers of Swiss cheese)
(I'm guessing) "super mario" might refer to a simulation of the Safeguarded AI / Gatekeeper stack in a videogame. It looks like they're skipping videogames and going straight to cyberphysical systems (1, 2).
Ok. I'll wear a solid black shirt instead of my bright blue shirt.
talk to friends as a half measure
When it comes to your internal track record, it is often said that finding what you wrote at time t-k beats trying to remember what you thought at t-k. However, the activation energy to keep such a journal is kinda a hurdle (which is why products like https://fatebook.io are so good!).
I find that a nice midpoint between the full and correct internal track record practices (rigorous journaling) and completely winging it (leaving yourself open to mistakes and self delusion) is talking to friends, because I think my memory of conversations that are had out loud with other people is more detailed and honest than my memory of things I've thought / used to think, especially when it's a stressful and treacherous topic.[1]
I may be more socially attuned than average around here(?) so this may not work for people less socially attuned than me ↩︎
I was at an ARIA meeting with a bunch of category theorists working on safeguarded AI and many of them didn't know what the work had to do with AI.
epistemic status: short version of post because I never got around to doing the proper effort post I wanted to make.
A sketch I'm thinking of: asking people to consume information (a question, in this case) is asking them to do you a favor, so you should do your best to ease this burden. However, don't be paralyzed either: budget some leeway to be less than maximally considerate in this way when you really need to.
what's the best essay on asking for advice?
One going over etiquette and the social contract; perhaps, if it's software-specific, it talks about minimal reproducers, and whatever else the author thinks is involved.
Rumors are that 2025 lighthaven is jam packed. If this is the case, and you need money, rudimentary economics suggests only the obvious: raise prices. I know many clients are mission aligned, and there's a reasonable ideological reason to run the joint at or below cost, but I think it's aligned with that spirit if profits from the campus fund the website.
I also want to say in print what I said in person a year ago: you can ask me to do chores on campus to save money, it'd be within my hufflepuff budget. There are good reasons not to go totally "by and for the community" DIY like, say, many community libraries or soup kitchens, but nudging a little in that direction seems right.
EDIT: I did a mostly symbolic $200 right now, may or may not do more as I do some more calculations and find out my salary at my new job
ThingOfThings said that Story of Louis Pasteur is a very EA movie, but I think it also counts for rationality. Huge fan.
Guaranteed Safe AI paper club meets again this thursday
Event for the paper club: https://calendar.app.google/2a11YNXUFwzHbT3TA
blurb about the paper in last month's newsletter:
... If you’re wondering why you just read all that, here’s the juice: often in GSAI position papers there’ll be some reference to expectations that capture “harm” or “safety”. Preexpectations and postexpectations with respect to particular pairs of programs could be a great way to cash this out, cuz we could look at programs as interventions and simulate RCTs (labeling one program control and one treatment) in our world modeling stack. When it comes to harm and safety, Prop and bool are definitely not rich enough.
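As a toy illustration of that framing (my own, not the paper's): treat two programs as control and treatment, sample world states from the world model, and compare expectations of a real-valued harm score rather than a bool/Prop verdict.

import random

def harm(world_state):
    # Hypothetical real-valued harm measure of a resulting world state.
    return max(0.0, world_state)

def control(x):
    return x * 0.9

def treatment(x):
    return x * 0.5 + 0.1

def preexpectation(program, harm_fn, n=100_000, seed=0):
    # Monte Carlo estimate of E[harm(program(X))] with X drawn from a stand-in world model.
    rng = random.Random(seed)
    return sum(harm_fn(program(rng.gauss(0.0, 1.0))) for _ in range(n)) / n

print("control  :", preexpectation(control, harm))
print("treatment:", preexpectation(treatment, harm))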
my dude, top level post. This does not read like a shortform.
Yoshua Bengio is giving a talk online tomorrow https://lu.ma/4ylbvs75
by virtue of their technical chops, also care about their career capital.
I didn't understand this-- "their technical chops impose opportunity cost as they're able to build very safe successful careers if they toe the line" would make sense, or "they care about career capital independent of their technical chops" would make sense. But here, the relation between technical chops and caring about career capital doesn't come through clearly.
did anyone draw up an estimate of how much the proportion of code written by LLMs will increase? or even what the proportion is today
I was thinking the same thing this morning! My main thought was, "this is a trap. ain't no way I'm pressing a big red button especially not so near to petrov day"
GSAI paper club is tomorrow (gcal ticket), summary (by me) and discussion of this paper
Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.
Yes, but this is one thing that I have felt mutating as I read the sequences and continued to hang out with you lot (roughly 8 years ago, with some off and on).
By all means. Happy for that
discussion of the bet in Aug 2024 Progress in GSAI newsletter
Note in August 2024 GSAI newsletter
august 2024 guaranteed safe ai newsletter
in case I forgot last month, here's a link to July
A wager you say
One proof of concept for the GSAI stack would be a well-understood mechanical engineering domain automated to the next level and certified to boot. How about locks? Needs a model of basic physics, terms in some logic for all the parts and how they compose, and some test harnesses that simulate an adversary. Can you design and manufacture a provably unpickable lock?
Zac Hatfield-Dodds (of hypothesis/pytest and Anthropic, was offered and declined authorship on the GSAI position paper) challenged Ben Goldhaber to a bet after Ben coauthored a post with Steve Omohundro. It seems to resolve in 2026 or 2027, the comment thread should get cleared up once Ben gets back from Burning Man. The arbiter is Raemon from LessWrong.
Zac says you can’t get a provably unpickable lock on this timeline. Zac gave (up to) 10:1 odds, so recall that the bet can be a positive expected value for Ben even if he thinks the event is most likely not going to happen.
For funsies, let’s map out one path of what has to happen for Zac to pay Ben $10k. This is not the canonical path, but it is a path:
- Physics to the relevant granularity (question: can human lockpicks leverage sub-Newtonian issues?) is conceptually placed into type theory or some calculus. I tried a Riemann integral in Coq once (way once), so it occurs to me that you need to decide if you want just the functional models (perhaps without computation / with proof irrelevance) in your proof stack or if you want the actual numerical analysis support in there as well.
- Good tooling, library support, etc. around that conceptual work (call it mechlib) to provide mechanical engineering primitives
- A lock designing toolkit, depending on mechlib, is developed
- Someone (e.g. a large language model) is really good at programming in the lock designing toolkit. They come up with a spec L.
- You state the problem “forall t : trajectories through our physics simulation, if L(t) == open(L) then t == key(L)” (a rough sketch of this statement appears after this list)
- Then you get to write a nasty gazillion line Lean proof
- Manufacture a lock (did I mention that the design toolkit has links to actual manufacturing stacks?)
- Bring a bunch to DefCon 2027 and send another to the lockpicking lawyer
- Everyone fails. Except Ben and the army of postdocs that $9,999 can buy.
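For flavor, here is a rough Lean-style sketch of the step-5 statement; every declaration here (Trajectory, opens, isKeyFor, and the mechlib physics they would sit on) is a hypothetical placeholder, not an existing library.

-- Rough sketch of the step-5 spec; all declarations are placeholders standing
-- in for a real mechlib physics formalization.
axiom Lock : Type
axiom Trajectory : Type                    -- trajectories through the physics simulation
axiom opens : Trajectory → Lock → Prop     -- "trajectory t ends with L open"
axiom isKeyFor : Trajectory → Lock → Prop  -- "t is (equivalent to) using L's key"

/-- A lock is unpickable when every trajectory that opens it is a key-use. -/
def Unpickable (L : Lock) : Prop :=
  ∀ t : Trajectory, opens t L → isKeyFor t L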
Looks like after the magnificent research engineering in steps 1 and 2, the rest is just showing off and justifying those two steps. Of course, in a world where we have steps 1 and 2 we have a great deal of transformative applications of formal modeling and verification just in reach, and we’ll need a PoC like locks to practice and concretize the workflow.
Cryptography applications tend to have a curse of requiring a lot of work after the security context, permission set, and other requirements are frozen in stone, which means that when the requirements change you have to start over and throw out a bunch of work (epistemic status: why do you think so many defi projects have more whitepapers than users?). The provably unpickable lock has 2 to 10x that problem: get the granularity wrong in step one, and most of your mechlib implementation won't be salvageable. As the language model iterates on the spec L in step 5, the other language model has to iterate on the proof in step 6, because the new spec will break most of the proof.
Sorry I don’t know any mechanical engineering, Ben, otherwise I’d take some cracks at it. The idea of a logic such that its denotation is a bunch of mechanical engineering primitives seems interesting enough that my “if it was easy to do in less than a year someone would’ve, therefore there must be a moat” heuristic is tingling. Perhaps oddly, the quantum semantics folks (or with HoTT!) seem to have been productive, but I don’t know how much of that is translatable to mechanical engineering.
Reinforcement learning from proof assistant feedback, and yet more monte carlo tree search
The steps are pretraining, supervised finetuning, RLPAF (reinforcement learning from proof assistant feedback), and MCTS (monte carlo tree search). RLPAF is not very rich: it’s a zero reward for any bug at all and a one for a happy typechecker. Glad they got that far with just that.
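In other words (my paraphrase, with typechecks as a hypothetical stub for a call into the proof assistant), the reward is just:

# Sparse RLPAF-style reward as described above: 1 for a proof the proof
# assistant accepts, 0 for anything with any bug at all.
def rlpaf_reward(proof_source, typechecks):
    return 1.0 if typechecks(proof_source) else 0.0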
You can use the model at deepseek.com.
Harmonic ships their migration of miniF2F to Lean 4, gets 90% on it, is hiring
From their “one month in” newsletter. “Aristotle”, which has a mysterious methodology since I’ve only seen their marketing copy rather than an arxiv paper, gets 90% on the Lean 4 miniF2F when prompted with natural language proofs. It doesn’t look to me like the deepseek or LEGO papers do that? I could be wrong. It’s impressive just to autoformalize natural language proofs; I guess I’m still wrapping my head around how much harder it is (for an LLM) to also come up with the proof itself.
Jobs: research engineer and software engineer
Atlas ships their big google doc alluded to in the last newsletter
Worth a read! The GSAI stack is large and varied, and this maps out the different sub-sub-disciplines. From the executive summary:
You could start whole organizations for every row in this table, and I wouldn’t be a part of any org that targets more than a few at once for fear of being unfocused. See the doc for more navigation (see what I did there? Navigating like with an atlas, perhaps? Get it?) of the field’s opportunities.[1]
Efficient shield synthesis via state-space transformation
Shielding is an area of reactive systems and reinforcement learning that marks states as unsafe and synthesizes a kind of guarding layer between the agent and the environment that prevents unsafe actions from being executed in the environment. So in the rejection sampling flavored version, it literally intercepts the unsafe action and tells the agent “we’re not running that, try another action”. One of the limitations in this literature is computational cost, shields are, like environments, state machines plus some frills, and there may simply be too many states. This is the limitation that this paper focuses on.
Besides cost, demanding a lot of domain knowledge is another limitation of shields, so this is an especially welcome development.
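A minimal sketch of the rejection-sampling flavour, with agent, is_unsafe, and the surrounding loop as hypothetical stand-ins rather than the paper's API:

# The shield sits between agent and environment: it intercepts proposed
# actions, rejects unsafe ones, and asks the agent to try again.
def shielded_action(state, agent, is_unsafe, max_retries=10):
    for _ in range(max_retries):
        action = agent.propose(state)
        if not is_unsafe(state, action):
            return action      # safe: pass it through to the environment
        agent.reject(action)   # "we're not running that, try another action"
    return None                # fall back to a safe default / abort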
Funding opportunities
ARIA jumped right to technical area three (TA3), prototyping the gatekeeper. Deadline October 2nd. Seems geared toward cyber-physical systems folks. In the document:
Note that verified software systems is an area which is highly suitable for a simplified gatekeeper workflow, in which the world-model is implicit in the specification logic. However, in the context of ARIA’s mission to “change the perception of what’s possible or valuable,” we consider that this application pathway is already perceived to be possible and valuable by the AI community. As such, this programme focuses on building capabilities to construct guaranteed-safe AI systems in cyber-physical domains. That being said, if you are an organisation which specialises in verified software, we would love to hear from you outside of this solicitation about the cyber-physical challenges that are just at the edge of the possible for your current techniques.
This is really cool stuff, I hope they find brave and adventurous teams. I had thought gatekeeper prototypes would be in minecraft or mujoco (and asked a funder if they’d support me in doing that), so it’s wild to see them going for actual cyberphysical systems so quickly.
Paper club
Add to your calendar. On September 19th we will read a paper about assume-guarantee contracts with learned components. I’m liable to have made a summary slide deck to kick us off, but if I don’t, we’ll quietly read together for the first 20-30 minutes then discuss. We’ll use the google meet room in the gcal event by default.
Andrew Dickson’s excellent post
See Limitations on Formal Verification for AI Safety over on LessWrong. I have a lot of agreements, and my disagreements are more a matter of what deserves emphasis than the fundamentals. Overall, I think the Tegmark/Omohundro paper failed to convey a swisscheesey worldview, and sounded too much like “why not just capture alignment properties in ‘specs’ and prove the software ‘correct’?” (i.e. the vibe I was responding to in my very pithy post). However, I think my main reason I’m not using Dickson’s post as a reason to just pivot all my worldview and resulting research is captured in one of Steve’s comments:
I'm focused on making sure our infrastructure is safe against AI attacks.
Like, a very strong version I almost endorse is “GSAI isn’t about AI at all, it’s about systems coded by extremely powerful developers (which happen to be AIs)”, and ensuring safety, security, and reliability capabilities scale at similar speeds with other kinds of capabilities.
It looks like one can satisfy Dickson just by assuring him that GSAI is a part of a swiss cheese stack, and that no one is messianically promoting One Weird Trick To Solve Alignment. Of course, I do hope that no one is messianically promoting One Weird Trick…
One problem off the top of my head regarding the InterFramework section: Coq and Lean seem the most conceptually straightforward since they have the same underlying calculus, but even there just a little impredicativity or coinduction could lead to extreme headaches. Now you can have a model at some point in the future that steamrolls over these headaches, but then you have a social problem of the broader Lean community not wanting to upstream those changes; various forks diverging fundamentally seems problematic to me, and would lead to a lot of duplicated work and missed opportunities for collaboration. I plan to prompt Opus 3.5 with “replicate flocq in lean4” as soon as I get access to the model, but how much more prompting effort will it be to ensure compliance with preexisting abstractions and design patterns, so that it can not only serve my purposes but be accepted by the community? At least there’s no coinduction in flocq, though some of the proofs may rely on set impredicativity for all I know (I haven’t looked at it in a while). ↩︎
I do think it's important to reach appropriately high safety assurances before developing or deploying future AI systems which would be capable of causing a catastrophe. However, I believe that the path there is to extend and complement current techniques, including empirical and experimental approaches alongside formal verification - whatever actually works in practice.
for what it's worth, I do see GSAI as largely a swiss cheesey worldview. Though I can see how you might read some of the authors involved to be implying otherwise! I should knock out a post on this.
not just get it to sycophantically agree with you
I struggle with this, and need to attend a prompting bootcamp.
I'm getting back into composing and arranging. Send me rat poems to set to music!
sure -- I agree; that's why I said "something adjacent to", because it had enough overlap in properties. I think my comment completely stands with a different word choice; I'm just not sure what word choice would do a better job.