Posts

Most Minds are Irrational 2024-12-10T09:36:33.144Z
Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn 2024-12-09T08:24:26.594Z
Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive) 2024-11-26T08:00:04.810Z
Proveably Safe Self Driving Cars [Modulo Assumptions] 2024-09-15T13:58:19.472Z
Are LLMs on the Path to AGI? 2024-08-30T03:14:04.710Z
Scaling Laws and Likely Limits to AI 2024-08-18T17:19:46.597Z
Misnaming and Other Issues with OpenAI's “Human Level” Superintelligence Hierarchy 2024-07-15T05:50:17.770Z
Biorisk is an Unhelpful Analogy for AI Risk 2024-05-06T06:20:28.899Z
A Dozen Ways to Get More Dakka 2024-04-08T04:45:19.427Z
"Open Source AI" isn't Open Source 2024-02-15T08:59:59.034Z
Technologies and Terminology: AI isn't Software, it's... Deepware? 2024-02-13T13:37:10.364Z
Safe Stasis Fallacy 2024-02-05T10:54:44.061Z
AI Is Not Software 2024-01-02T07:58:04.992Z
Public Call for Interest in Mathematical Alignment 2023-11-22T13:22:09.558Z
What is autonomy, and how does it lead to greater risk from AI? 2023-08-01T07:58:06.366Z
A Defense of Work on Mathematical AI Safety 2023-07-06T14:15:21.074Z
"Safety Culture for AI" is important, but isn't going to be easy 2023-06-26T12:52:47.368Z
"LLMs Don't Have a Coherent Model of the World" - What it Means, Why it Matters 2023-06-01T07:46:37.075Z
Systems that cannot be unsafe cannot be safe 2023-05-02T08:53:35.115Z
Beyond a better world 2022-12-14T10:18:26.810Z
Far-UVC Light Update: No, LEDs are not around the corner (tweetstorm) 2022-11-02T12:57:23.445Z
Announcing AISIC 2022 - the AI Safety Israel Conference, October 19-20 2022-09-21T19:32:35.581Z
Rehovot, Israel – ACX Meetups Everywhere 2022 2022-08-25T18:01:16.106Z
AI Governance across Slow/Fast Takeoff and Easy/Hard Alignment spectra 2022-04-03T07:45:57.592Z
Arguments about Highly Reliable Agent Designs as a Useful Path to Artificial Intelligence Safety 2022-01-27T13:13:11.011Z
Elicitation for Modeling Transformative AI Risks 2021-12-16T15:24:04.926Z
Modelling Transformative AI Risks (MTAIR) Project: Introduction 2021-08-16T07:12:22.277Z
Maybe Antivirals aren’t a Useful Priority for Pandemics? 2021-06-20T10:04:08.425Z
A Cruciverbalist’s Introduction to Bayesian reasoning 2021-04-04T08:50:07.729Z
Systematizing Epistemics: Principles for Resolving Forecasts 2021-03-29T20:46:06.923Z
Resolutions to the Challenge of Resolving Forecasts 2021-03-11T19:08:16.290Z
The Upper Limit of Value 2021-01-27T14:13:09.510Z
Multitudinous outside views 2020-08-18T06:21:47.566Z
Update more slowly! 2020-07-13T07:10:50.164Z
A Personal (Interim) COVID-19 Postmortem 2020-06-25T18:10:40.885Z
Market-shaping approaches to accelerate COVID-19 response: a role for option-based guarantees? 2020-04-27T22:43:26.034Z
Potential High-Leverage and Inexpensive Mitigations (which are still feasible) for Pandemics 2020-03-09T06:59:19.610Z
Ineffective Response to COVID-19 and Risk Compensation 2020-03-08T09:21:55.888Z
Link: Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk? 2019-12-26T20:14:52.509Z
Updating a Complex Mental Model - An Applied Election Odds Example 2019-11-28T09:29:56.753Z
Theater Tickets, Sleeping Pills, and the Idiosyncrasies of Delegated Risk Management 2019-10-30T10:33:16.240Z
Divergence on Evidence Due to Differing Priors - A Political Case Study 2019-09-16T11:01:11.341Z
Hackable Rewards as a Safety Valve? 2019-09-10T10:33:40.238Z
What Programming Language Characteristics Would Allow Provably Safe AI? 2019-08-28T10:46:32.643Z
Mesa-Optimizers and Over-optimization Failure (Optimizing and Goodhart Effects, Clarifying Thoughts - Part 4) 2019-08-12T08:07:01.769Z
Applying Overoptimization to Selection vs. Control (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 3) 2019-07-28T09:32:25.878Z
What does Optimization Mean, Again? (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 2) 2019-07-28T09:30:29.792Z
Re-introducing Selection vs Control for Optimization (Optimizing and Goodhart Effects - Clarifying Thoughts, Part 1) 2019-07-02T15:36:51.071Z
Schelling Fences versus Marginal Thinking 2019-05-22T10:22:32.213Z
Values Weren't Complex, Once. 2018-11-25T09:17:02.207Z

Comments

Comment by Davidmanheim on Trying to translate when people talk past each other · 2024-12-18T07:23:15.261Z · LW · GW

I don't think it was betrayal; I think it was skipping verbal steps, which left intent unclear.

If A had said "I promised to do X, is it OK now if I do Y instead?", there would presumably have been no confusion. Instead, they announced their plan before doing Y, leaving the permission request implicit. The point that "she needed A to acknowledge that he’d unilaterally changed an agreement" was critical to B, but I suspect A thought that stating the new plan did that implicitly.

Comment by Davidmanheim on MIRI's June 2024 Newsletter · 2024-12-14T19:56:26.729Z · LW · GW

Strongly agree that there needs to be an institutional home. My biggest problem is that there is still no such new home!

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-12T08:44:18.935Z · LW · GW

You should also read the relevant sequence about dissolving the problem of free will: https://www.lesswrong.com/s/p3TndjYbdYaiWwm9x

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-12T08:42:29.488Z · LW · GW

You believe that something inert cannot be doing computation. I agree. But you seem to think it's coherent that a system with no action - a post-hoc mapping of states - can be.

The place where comprehension of Chinese exists in the "Chinese room" is the creation of the mapping - the mapping itself is a static object, and the person in the room, by assumption, is doing no cognitive work, just looking up entries. "But wait!" we can object, "this means that the Chinese room doesn't understand Chinese!" And I think that's the point of confusion - repeating answers that someone else tells you isn't the same as understanding. The fact that the "someone else" wrote down the answers changes nothing. The question is where and when the computation occurred.

In our scenarios, there are a couple different computations - but the creation of the mapping unfairly sneaks in the conclusion that the execution of the computation, which is required to build the mapping, isn't what creates consciousness!

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-12T05:55:13.337Z · LW · GW

Good point. The problem I have with that is that in every listed example, the mapping either requires the execution of the conscious mind and a readout of its output and process in order to build it, or it stipulates that it is well enough understood that it can be mapped to an arbitrary process, thereby implicitly also requiring that it was run elsewhere.

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-11T16:36:48.560Z · LW · GW

That seems like a reasonable idea. It seems not at all related to what any of the philosophers proposed.

For their proposals, it seems like the computational process is more like the following (a toy code sketch follows the list):
1. Extract a specific string of 1s and 0s from the sandstorm's initial position, and another from its final position, each with the same length as the full description of the mind.
2. Calculate the bitwise sum of the initial mind state and the initial sand position.
3. Calculate the bitwise sum of the final mind state and the final sand position.
4. Take the output of step 2 and replace it with the output of step 3.
5. Declare that the sandstorm is doing something isomorphic to what the mind did. Ignore the fact that the internal process is completely unrelated, that all of the computation was done inside the mind, and that you're just copying answers.
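To make the triviality concrete, here is a minimal Python sketch of my reading of the steps above (treating the "bitwise sum" as XOR; the names and the placeholder "mind dynamics" are purely illustrative, not anything the philosophers specify):

```python
import random

def run_mind(initial_mind_state: int) -> int:
    """Stand-in for the actual computation we care about - note it runs here, not in the sand."""
    return initial_mind_state * 3 + 1  # placeholder dynamics

initial_mind = 0b1011
final_mind = run_mind(initial_mind)        # all the real work happens on this line

initial_sand = random.getrandbits(64)      # arbitrary bits read off the "initial position"
final_sand = random.getrandbits(64)        # arbitrary bits read off the "final position"

# Post-hoc "mapping": XOR keys chosen so each sand state decodes to the right mind state.
key_in = initial_sand ^ initial_mind
key_out = final_sand ^ final_mind

# Step 5: declare that the sandstorm "computed" the mind's transition.
assert initial_sand ^ key_in == initial_mind
assert final_sand ^ key_out == final_mind
# Constructing key_out required already knowing final_mind - the mapping just copies answers.
```

The only line doing interesting work is the call to run_mind; the sand bits and the XOR keys are free to be anything at all.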

Comment by Davidmanheim on Most Minds are Irrational · 2024-12-11T11:30:39.867Z · LW · GW

I agree that's a more interesting question, and computational complexity theorists have done work on it which I don't fully understand, but it also doesn't seem as relevant for AI safety questions.

Comment by Davidmanheim on Most Minds are Irrational · 2024-12-10T13:05:38.149Z · LW · GW

Regarding chess agents, Vanessa pointed out that while only perfect play is optimal, informally we would consider agents to have an objective that is better served by slightly better play; for example, an agent rated 2500 Elo is better than one rated 1800, which is better than one rated 1000, and so on. That means that lots of "chess minds" which are non-optimal are still somewhat rational at their goal.

I think that it's very likely that even according to this looser definition, almost all chess moves, and therefore almost all "possible" chess bots, fail to do much to accomplish the goal. 
We could check this informally by evaluating what proportion of the possible moves in normal games would be classified as blunders, using a method such as the one used here to evaluate what proportion of actual moves made by players are blunders. Figure 1 there implies that in positions with many legal moves, a larger proportion are blunders - but this is looking at the empirical blunder rate by those good enough to be playing ranked chess. Another method would be to look at a bot that actually implements "pick a random legal move" - namely Brutus RND. It has an Elo of 255 when ranked against other amateur chess bots, and wins only occasionally against some of the worst bots; it seems hard to figure out from that what proportion of moves are good, but it's evidently a fairly small proportion.
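If it helps, here is roughly the kind of informal check I have in mind - a sketch, not anything definitive. It assumes the python-chess library and a local Stockfish binary, and the 200-centipawn blunder threshold is an arbitrary choice of mine:

```python
import random
import chess
import chess.engine

BLUNDER_CP = 200  # arbitrary threshold: a move losing >= 200 centipawns vs. the engine's best

def blunder_fraction(board: chess.Board, engine: chess.engine.SimpleEngine,
                     depth: int = 12) -> float:
    """Fraction of legal moves in this position that the engine rates as blunders."""
    mover = board.turn
    limit = chess.engine.Limit(depth=depth)
    best_cp = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=100_000)
    moves = list(board.legal_moves)
    blunders = 0
    for move in moves:
        board.push(move)
        cp = engine.analyse(board, limit)["score"].pov(mover).score(mate_score=100_000)
        board.pop()
        if best_cp - cp >= BLUNDER_CP:
            blunders += 1
    return blunders / len(moves) if moves else 0.0

if __name__ == "__main__":
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        board = chess.Board()
        for _ in range(10):  # sample a few positions along a random playout
            if board.is_game_over():
                break
            print(round(blunder_fraction(board, engine), 2))
            board.push(random.choice(list(board.legal_moves)))
```

Sampling a few hundred positions this way would give a crude estimate of how rare non-blundering moves are, which is all the informal check requires.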

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T23:14:36.300Z · LW · GW

We earlier mentioned that it is required that the finite mapping be precomputed. If it is for arbitrary Turing machines, including those that don't halt, we need infinite time, so the claim that we can map to arbitrary Turing machines fails. If we restrict it to those which halt, we need to check that before providing the map, which requires solving the halting problem to provide the map.

Edit to add: I'm confused why this is getting "disagree" votes - can someone explain why or how this is an incorrect logical step, or

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T21:40:56.055Z · LW · GW

OK, so this is helpful, but if I understood you correctly, I think it's assuming too much about the setup. For #1, in the examples we're discussing, the states of the object aren't predictably changing in complex ways - the object will just change "states" in ways that can be predicted to follow a specific path, which can then be mapped to some set of states. The states are arbitrary, and per the argument don't vary in some way that does any work - and so, as I argued, they can be mapped to some set of consecutive integers. But this means that the actions of the physical object are predetermined in the mapping.

And the difference between that situation and the CNS is that we know the neural circuitry is doing work - the exact features are complex and only partly understood, but the result is clearly capable of doing computation in the sense of Turing machines.

Comment by Davidmanheim on Language Models are a Potentially Safe Path to Human-Level AGI · 2024-12-09T16:58:41.306Z · LW · GW

I think this was a valuable post, albeit ending up somewhat incorrect about whether LLMs would be agentic - not because they developed the capacity on their own, but because people intentionally built and are building structure around LLMs to enable agency. That said, the underlying point stands - it is very possible that LLMs could be a safe foundation for non-agentic AI, and many research groups are pursuing that today.

Comment by Davidmanheim on Five Worlds of AI (by Scott Aaronson and Boaz Barak) · 2024-12-09T16:55:26.651Z · LW · GW

The blogpost this points to was an important contribution at the time, more clearly laying out extreme cases for the future.  (The replies there were also particularly valuable.)

Comment by Davidmanheim on "Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities) · 2024-12-09T16:45:32.863Z · LW · GW

I think this post makes an important and still neglected claim that people should write their work more clearly and get it published in academia, instead of embracing the norms of the narrower community they interact with. There has been significant movement in this direction in the past 2 years, and I think this post marks a critical change in what the community suggests and values in terms of output.

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T16:29:45.446Z · LW · GW

"the actual thinking-action that the mapping interprets"


I don't think this is conceptually correct. Looking at the chess playing waterfall that Aaronson discusses, the mapping itself is doing all of the computation. The fact that the mapping ran in the past doesn't change the fact that it's the location of the computation, any more than the fact that it takes milliseconds for my nerve impulses to reach my fingers means that my fingers are doing the thinking in writing this essay. (Though given the typos you found, it would be convenient to blame them.)

they assume ad arguendo that you can instantiate the computations we're interested in (consciousness) in a headful of meat, and then try to show that if this is the case, many other finite collections of matter ought to be able to do the job just as well.

Yes, they assume that whatever runs the algorithm is experiencing running the algorithm from the inside. And yes, many specific finite systems can do so - namely, GPUs and CPUs, as well as the wetware in our head. But without the claim that arbitrary items can do these computations, it seems that the arguendo is saying nothing different than the conclusion - right?

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T16:23:23.790Z · LW · GW

Looks like I messed up cutting and pasting - thanks!

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T14:33:36.233Z · LW · GW

Thanks - fixed!

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T14:31:39.493Z · LW · GW

Yeah, perhaps refuting is too strong, given that the central claim is that we can't know what is and is not doing computation - which I think is wrong, but requires a more nuanced discussion. However, the narrow claims they made inter alia were strong enough to refute, specifically by showing that their claims are equivalent to saying the integers are doing arbitrary computation - when making the claim itself requires the computation to take place elsewhere!

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-09T13:52:43.772Z · LW · GW

Seems worth noting that the claim made by most of the philosophers being cited here is (1) - that even rocks are doing the same computation as minds.

Comment by Davidmanheim on Refuting Searle’s wall, Putnam’s rock, and Johnson’s popcorn · 2024-12-09T13:49:05.415Z · LW · GW

I agree that this wasn't intended as an introduction to the topic. For that, I will once again recommend Scott Aaronson's excellent mini-book explaining computational complexity to philosophers.

I agree that the post isn't a definition of what computation is - but I don't need to be able to define fire to point out something that definitely isn't on fire! So I don't really understand your claim. I agree that interpreting computation is objectively hard, but it's not at all hard to see that the integers are less complex, and are doing less complex computation, than, say, an exponential-time Turing machine - and given the specific arguments being made, neither is a wall or a bag of popcorn. Which, as I just responded to the linked comment, was how I understood the position being taken by Searle, Putnam, and Johnson. (And even this ignores that one implication of the difference in complexity is that the wall / bag of popcorn / whatever is not mappable to arbitrary computations, since the number of steps required for a computation may not be finite!)

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-09T08:24:56.033Z · LW · GW

I've written my point more clearly here: https://www.lesswrong.com/posts/zxLbepy29tPg8qMnw/refuting-searle-s-wall-putnam-s-rock-and-johnson-s-popcorn

Comment by Davidmanheim on Detection of Asymptomatically Spreading Pathogens · 2024-12-06T11:06:33.182Z · LW · GW

I think 'we estimate... to be'

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-05T04:33:34.784Z · LW · GW

Your/Aaronson's claim is that only the fully connected, sensibly interacting calculation matters.

Not at all. I'm not making any claim about what matters or counts here, just pointing out a confusion in the claims that were made here and by many philosophers who discussed the topic.

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-04T17:11:19.965Z · LW · GW

You disagree with Aaronson that the location of the complexity is in the interpreter, or you disagree that it matters?

In the first case, I'll defer to him as the expert. But in the second, the complexity is an internal property of the system! (And it's a property in a sense stronger than almost anything we talk about in philosophy; it's not just a property of the world around us, because as Gödel and others showed, complexity is a necessary fact about the nature of mathematics!)

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-04T17:07:42.109Z · LW · GW

Yeah, something like that. See my response to Euan in the other reply to my post.

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-04T17:06:50.193Z · LW · GW

Yes, and no, it does not boil down to Chalmers's argument (as Aaronson makes clear in the paragraph before the one you quote, where he cites the Chalmers argument!). The argument from complexity is about the nature and complexity of systems capable of playing chess - which is why I think you need to carefully read the entire piece and think about what it says.

But as a small rejoinder, if we're talking about playing a single game, the entire argument is ridiculous; I can write the entire "algorithm" in a kilobyte of specific instructions. So it's not that an algorithm must be capable of playing multiple counterfactual games to qualify, or that counterfactuals are required for moral weight - it's that the argument hinges on a misunderstanding of how complex different classes of system need to be to do the things they do.

PS. Apologies that the original response comes off as combative - I really think this discussion is important, and wanted to engage to correct an important point, but have very little time to do so at the moment!

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-04T07:22:28.904Z · LW · GW

As with OP, I strongly recommend Aaronson, who explains why waterfalls aren't doing computation in ways that refute the rock example you discuss: https://www.scottaaronson.com/papers/philos.pdf

Comment by Davidmanheim on Do simulacra dream of digital sheep? · 2024-12-04T07:19:44.741Z · LW · GW

You seem to fundamentally misunderstand computation, in ways similar to Searle. I can't engage deeply, but recommend Scott Aaronson's primer on computational complexity: https://www.scottaaronson.com/papers/philos.pdf

Comment by Davidmanheim on Is the mind a program? · 2024-12-04T07:17:52.493Z · LW · GW

You seem deeply confused about computation, in ways similar to Searle et al. I cannot engage deeply on this at present, but recommend Aaronson's primer on the topic: https://www.scottaaronson.com/papers/philos.pdf

Comment by Davidmanheim on Hierarchical Agency: A Missing Piece in AI Alignment · 2024-12-02T12:16:48.553Z · LW · GW

Norms can accomplish this as well - I wrote about this a couple weeks ago.

Comment by Davidmanheim on Hierarchical Agency: A Missing Piece in AI Alignment · 2024-12-02T12:01:41.751Z · LW · GW

Are you familiar with Davidad's program working on compositional world modeling? (The linked notes are from before the program was launched, there is ongoing work on the topic.)

The reason I ask is that embedded agents, and agents in multi-agent settings, would need compositional world models that include models of themselves and other agents, which implies that hierarchical agency is included in what they would need to solve.

It also relates closely to work Vanessa is doing (as an "ARIA Creator") on learning-theoretic AI, related to what she has called "Frugal Compositional Languages" - see also this work by @alcatal - though I understand neither is yet addressing multi-agent world models, nor explicitly modeling the agents themselves in a compositional / embedded-agent way, though those are presumably desiderata.

Comment by Davidmanheim on Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive) · 2024-11-28T21:43:02.681Z · LW · GW

That is an interesting question, but I unfortunately do not know enough to even figure out how to answer it.

Comment by Davidmanheim on Mitigating Geomagnetic Storm and EMP Risks to the Electrical Grid (Shallow Dive) · 2024-11-27T06:51:59.400Z · LW · GW

Good points. Yes, storage definitely helps, and microgrids are generally able to have some storage, if only to smooth out variation in power generation for local use. But solar storms can last days, even if a large long-lasting event is very, very unlikely. And it's definitely true that if large facilities have storage, shutdowns will have reduced impact - but I understand that the transformers are used for power transmission, so having local storage at the large generators won't change the need to shut down the transformers used for sending that power to consumers.

Comment by Davidmanheim on (Salt) Water Gargling as an Antiviral · 2024-11-27T06:02:23.353Z · LW · GW

Do I understand correctly that the blue-green graph has a y-axis that goes above 100% median reduction, with error bars in that range? (This would happen if they estimated a proportion as a standard variable - not great practice, but I want to check that it is what happened.)

Comment by Davidmanheim on Occupational Licensing Roundup #1 · 2024-10-31T06:01:42.891Z · LW · GW

Question for a lawyer: how is non-reciprocity not an interstate trade issue that federal courts can strike down?

Comment by Davidmanheim on Dialogue introduction to Singular Learning Theory · 2024-10-06T13:37:10.449Z · LW · GW

In addition to the point that current models are already strongly superhuman in most ways, I think that if you buy the idea that we'll be able to do automated alignment of ASI, you'll still need some reliable approach to "manual" alignment of current systems. We're already far past the point where we can robustly verify LLMs' claims or reasoning outside of narrow domains like programming and math.

But on point two, I strongly agree that agent foundations and Davidad's agendas are also worth pursuing. (And in a sane world, we should have tens or hundreds of millions of dollars in funding for each every year.) Instead, it looks like we have Davidad's ARIA funding, Jaan Tallinn and LTFF funding some agent foundations and SLT work, and that's basically it. And MIRI abandoned agent foundations, while Openphil isn't, it seems, putting money or effort into them.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-22T13:52:57.339Z · LW · GW

I partly disagree; steganography is only useful when it's possible for the outside / receiving system to detect and interpret the hidden messages, so if the messages are of a type that outside systems would identify, they can and should be detectable by the gating system as well. 

That said, I'd be very interested in looking at formal guarantees that the outputs are minimally complex in some computationally tractable sense, or something similar - it definitely seems like something that @davidad would want to consider.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-22T13:43:11.194Z · LW · GW

I really like that idea, and the clarity it provides, and have renamed the post to reflect it! (Sorry this was so slow - I'm travelling.)

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-22T13:40:38.440Z · LW · GW

That seems fair!

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-18T10:46:49.128Z · LW · GW

I agree that in the most general possible framing, with no restrictions on output, you cannot guard against all possible side-channels. But that's not true for proposals like safeguarded AI, where a proof must accompany the output, and it's not obviously true if the LLM is gated by a system that rejects unintelligible or not-clearly-safe outputs.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-18T10:43:15.977Z · LW · GW

On the absolute safety, I very much like the way you put it, and will likely use that framing in the future, so thanks!

On impossibility results, there are some, and I definitely think that this is a good question, but I also agree this isn't quite the right place to ask. I'd suggest talking to some of the agent foundations people for suggestions.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-16T07:48:04.320Z · LW · GW

I think these are all really great things that we could formalize and build guarantees around. I think some of them are already ruled out by the responsibility-sensitive safety guarantees, but others certainly are not. On the other hand, I don't think that use of cars to do things that violate laws completely unrelated to vehicle behavior is in scope; similar to what I mentioned to Oliver, if what is needed in order for a system to be safe is that nothing bad can be done, you're heading in the direction of a claim that the only safe AI is a universal dictator that has sufficient power to control all outcomes.

But in cases where provable safety guarantees are in place, and the issues relate to car behavior - such as cars causing damage, blocking roads, or being redirected away from the intended destination - I think hardware guarantees on the system, combined with software guarantees, combined with verifying that only trusted code is being run, could be used to ignition-lock cars which have been subverted.

And I think that in the remainder of cases, where cars are being used for dangerous or illegal purposes, we need to trade off freedom and safety. I certainly don't want AI systems which can conspire to break the law - and in most cases, I expect that this is something LLMs can already detect - but I also don't want a car which will not run if it determines that a passenger is guilty of some unrelated crime like theft. But for things like "deliver explosives or disperse pathogens," I think vehicle safety is the wrong path to preventing dangerous behavior; it seems far more reasonable to have separate systems that detect terrorism, and separate types of guarantees to ensure LLMs don't enable that type of behavior.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-16T07:32:45.697Z · LW · GW

Yes, after saying it was about what they need "to do not to cause accidents" and that "any accidents which could occur will be attributable to other cars actions," which I then added caveats to regarding pedestrians, I said "will only have accidents" when I should have said "will only cause accidents." I have fixed that with another edit. But I think you're confused about what I'm trying to show.
 

Principally, I think you are wrong about what needs to be shown here for safety in the sense I outlined, or are trying to say that the sense I outlined doesn't lead to something which I never claimed it does. If what is needed in order for a system to be safe is that no damage will be caused in situations which involve the system, you're heading in the direction of a claim that the only safe AI is a universal dictator that has sufficient power to control all outcomes. My claim, on the other hand, is that in sociotechnological systems, the way that safety is achieved is by creating guarantees that each actor - human or AI - behaves according to rules that minimize foreseeable dangers. That would include safeguards for stupid, malicious, or dangerous human actions, much like human systems have laws about dangerous actions. However, in a domain like driving, in the same way that it's impossible for human drivers to both get where they are going and never hit pedestrians who act erratically and jump out from behind obstacles into the road with an oncoming car, a safe autonomous vehicle wouldn't be expected to solve every possible case of human misbehavior - just to drive responsibly.

More specifically, you make the claim that "as far as I can tell it would totally be compatible with a car driving extremely recklessly in a pedestrian environment due to making assumptions about pedestrian behavior that are not accurate." The paper, on the other hand, says "For example, in a typical residential street, a pedestrian has the priority over the vehicles, and it follows that vehicles must yield and be cautious with respect to pedestrians," and formalizes this with statements like "a vehicle must be in a kinematic state such that if it will apply a proper response (acceleration for ρ seconds and when braking) it will remain outside of a ball of radius 50cm around the pedestrian." 

I also think that it formalizes reasonable behavior for pedestrians, but I agree that it won't cover every case - pedestrians oblivious to cars that are driving in otherwise safe ways, who rapidly change their path to jump in front of cars, can sometimes be hit by those cars - but I think fault is pretty clear here. (And the paper is clear that even in those cases, the car would need to both drive safely in residential areas, and attempt to brake or avoid the pedestrian in order to avoid crashes even in cases with irresponsible and erratic humans!)

But again, as I said initially, this isn't solving the general case of AI safety, it's solving a much narrower problem. And if you wanted to make the case that this isn't enough for similar scenarios that we care about, I will strongly agree that for more capable systems, the set of situations they would need to avoid is correspondingly larger, and the set of necessary guarantees is far stronger. But as I said at the beginning, I'm not making that argument - just the much simpler one that provability can work in physical systems, and can be applied in sociotechnological systems in ways that make sense.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-16T05:25:10.814Z · LW · GW

I agree that "safety in an open world cannot be proved," at least as a general claim, but disagree that this impinges on the narrow challenge of designing cars that do not cause accidents - a misunderstanding which I tried to be clear about, but which I evidently failed to make sufficiently clear, as Oliver's misunderstanding illustrates. That said, I strongly agree that better methods for representing grain of truth problems, and considering hypotheses outside those which are in the model is critical. It's a key reason I'm supporting work on infra-Bayesian approaches, which are designed explicitly to handle this class of problem. Again, it's not necessary for the very narrow challenge I think I addressed above, but I certainly agree that it's critical.

 

Second, I'm a huge proponent of complex system engineering approaches, and have discussed this in previous unrelated work. I certainly agree that these issues are critical, and should receive more attention - but I think it's counterproductive to try to embed difficult problems inside of addressable ones. To offer an analogy, creating provably safe code that isn't vulnerable to any known technical exploit still will not prevent social engineering attacks, but we can still accomplish the narrow goal.

If, instead of writing code that can't be fuzzed for vulnerabilities, doesn't contain buffer overflow or null-pointer vulnerabilities, and can't be exploited via transient execution CPU vulnerabilities, and isn't vulnerable to rowhammer attacks, you say that we need to address social engineering before trying to make the code provably safe, and should address social engineering with provable properties, you're sabotaging progress in a tractable area in order to apply a paradigm ill-suited to the new problem you're concerned with.

That's why, in this piece, I started by saying I wasn't proving anything general, and "I am making far narrower claims than the general ones which have been debated." I agree that the larger points are critical. But for now, I wanted to make a simpler point.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-16T05:10:09.242Z · LW · GW

To start at the end, you claim I "straightforwardly made an inaccurate unqualified statement," but replaced  my statement about "what a car needs to do not to cause accidents" with "no accidents will take place." And I certainly agree that there is an "extremely difficult and crucial step of translating a form toy world like RSS into real world outcomes," but the toy model that the paper is dealing with is therefore one of rule-following entities, both pedestrians and cars. That's why it's not going to require accounting for "what if pedestrians do something illegal and unexpected."

Of course, I agree that this drastically limits the proof, or as I said initially, "relying on assumptions about other car behavior is a limit to provable safety," but you seem to insist that because the proof doesn't do something I never claimed it did, it's glossing over something.

That said, I agree that I did not discuss pedestrians, but as you sort-of admit, the paper does - it treats stationary pedestrians not at crosswalks, and not on sidewalks, as largely unpredictable entities that may enter the road. For example, it notes that "even if pedestrians do not have priority, if they entered the road at a safe distance, cars must brake and let them pass." But again, you're glossing over the critical assumption for the entire section, which is responsibility for accidents. And this is particularly critical; the claim is not that pedestrians and other cars cannot cause accidents, but that the safe car will not do so.

 

Given all of that, to get back to the beginning, your initial position was that "RSS seems miles away from anything that one could describe as a formalization of how to avoid an accident." Do you agree that it's close to "a formalization of how to avoid causing an accident"?

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-15T20:03:42.388Z · LW · GW

Have you reviewed the paper? (It is the first link under "The RSS Concept" in the page which was linked to before, though perhaps I should have linked to it directly.) It seems to lay out the proof, and discusses pedestrians, and deals with most of the objections you're raising, including obstructions and driving off of marked roads. I admit I have not worked through the proof in detail, but I have read through it, and my understanding is that it was accepted, and a large literature has been built that extends it.

And the objections about slippery roads and braking are the set of things I noted under "traditional engineering analysis and failure rates." I agree that guarantees are non-trivial, but they also aren't outside of what is done already in safety analysis, and there is explicit work in the literature on the issue, both from the verification and validation side, and from the side of perceiving and sensing weather conditions.

Comment by Davidmanheim on Proveably Safe Self Driving Cars [Modulo Assumptions] · 2024-09-15T17:52:21.289Z · LW · GW

I agree that it's the most challenging part, and there are unsolved problems, but I don't share your intuition that it's in some way unsolvable, so I suspect we're thinking of very different types of things.

For RSS specifically, Rule 5 is obviously the most challenging, but it's also not in general required for the not-being-at-fault guarantee, and Rule 4 is largely about ensuring that the relationship between sensor uncertainty in low-visibility areas and the other rules - respecting distance and not hitting things - is enforced. Other than that, right-of-way rules are very simple, provided the car correctly detects that the situation is one where they apply; changing lanes is based on a very simple formula for distance; and assuming the car isn't changing lanes, following the rules during driving essentially only requires restricting speed, which seems like something you can check very easily.
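To make "a very simple formula for distance" concrete: as I recall it from the RSS paper, the minimum safe longitudinal following distance - with response time $\rho$, rear-car speed $v_r$, front-car speed $v_f$, and assumed worst-case acceleration and braking bounds - is roughly

$$ d_{\min} = \left[\, v_r \rho + \tfrac{1}{2}\, a_{\max,\text{accel}}\, \rho^2 + \frac{(v_r + \rho\, a_{\max,\text{accel}})^2}{2\, a_{\min,\text{brake}}} - \frac{v_f^2}{2\, a_{\max,\text{brake}}} \,\right]_+ $$

where $[x]_+ = \max(x, 0)$. Enforcing this at each timestep is essentially the kind of speed-and-gap restriction I mean, and if I recall correctly the lane-change rule applies the same distance logic with respect to vehicles in the target lane.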

Comment by Davidmanheim on Limitations on Formal Verification for AI Safety · 2024-09-11T09:14:08.689Z · LW · GW

As you sort of refer to, it's also the case that the 7.5 hour run time can be paid once, and the resulting guarantee then remains true of the system. It's a one-time cost!

So even if we have 100 different things we need to prove for a higher-level system, and even if each takes a year of engineering and mathematics research time plus a day or a month of compute time to get a proof, we can do them in parallel, and this isn't much of a bottleneck if this approach is pursued seriously. (Parallelization is straightforward if we can, for example, take the guarantee provided by one proof as an assumption in others, instead of trying to build a single massive proof.) And each such system built allows for provability guarantees for systems built with that component, if we can build composable proof systems, or can separate the necessary proofs cleanly.

Comment by Davidmanheim on Limitations on Formal Verification for AI Safety · 2024-09-09T06:09:23.506Z · LW · GW

Yes - I didn't say it was hard without AI, I said it was hard. Using the best tech in the world, humanity doesn't *even ideally* have ways to get AI to design safe useful vaccines in less than months, since we need to do actual trials.

Comment by Davidmanheim on How I got 4.2M YouTube views without making a single video · 2024-09-08T12:45:16.890Z · LW · GW

I know someone who has done lots of reporting on lab leaks, if that helps?

Also, there are some "standard" EA-adjacent journalists who you could contact / someone could introduce you to, if it's relevant to that as well.

Comment by Davidmanheim on Limitations on Formal Verification for AI Safety · 2024-09-08T11:37:44.917Z · LW · GW

Vaccine design is hard, and requires lots of work. Seems strange to assert that someone could just do it on the basis of a theoretical design. Viral design, though, is even harder, and to be clear, we've never seen anyone build one from first principles; the most we've seen is modification of extant viruses in minor ways where extant vaccines for the original virus are likely to work at least reasonably well.