Posts

D&D.Sci Hypersphere Analysis Part 4: Fine-tuning and Wrapup 2024-01-18T03:06:39.344Z
D&D.Sci Hypersphere Analysis Part 3: Beat it with Linear Algebra 2024-01-16T22:44:52.424Z
D&D.Sci Hypersphere Analysis Part 2: Nonlinear Effects & Interactions 2024-01-14T19:59:37.911Z
D&D.Sci Hypersphere Analysis Part 1: Datafields & Preliminary Analysis 2024-01-13T20:16:39.480Z
Deception Chess: Game #1 2023-11-03T21:13:55.777Z
Find Hot French Food Near Me: A Follow-up 2023-09-06T12:32:02.844Z
D&D.Sci 5E: Return of the League of Defenders Evaluation & Ruleset 2023-06-09T15:25:21.948Z
D&D.Sci 5E: Return of the League of Defenders 2023-05-26T20:39:18.879Z
[S] D&D.Sci: All the D8a. Allllllll of it. Evaluation and Ruleset 2023-02-27T23:15:39.094Z
[S] D&D.Sci: All the D8a. Allllllll of it. 2023-02-10T21:14:59.192Z
Ambiguity in Prediction Market Resolution is Harmful 2022-09-26T16:22:48.809Z
Dwarves & D.Sci: Data Fortress Evaluation & Ruleset 2022-08-16T00:15:33.305Z
Dwarves & D.Sci: Data Fortress 2022-08-06T18:24:21.499Z
Ars D&D.Sci: Mysteries of Mana Evaluation & Ruleset 2022-07-19T02:06:02.577Z
Ars D&D.sci: Mysteries of Mana 2022-07-09T12:19:36.510Z
D&D.Sci Divination: Nine Black Doves Evaluation & Ruleset 2022-05-17T00:34:25.019Z
D&D.Sci Divination: Nine Black Doves 2022-05-06T23:02:01.266Z
Duels & D.Sci March 2022: Evaluation and Ruleset 2022-04-05T00:21:28.170Z
Interacting with a Boxed AI 2022-04-01T22:42:30.114Z
Two Forms of Moral Judgment 2022-04-01T22:13:30.129Z
Duels & D.Sci March 2022: It's time for D-d-d-d-d-d-d-d-d-d-d-d-d-d-data! 2022-03-25T16:55:48.486Z
Seek Mistakes in the Space Between Math and Reality 2022-03-01T05:58:15.419Z
D&D.SCP: Anomalous Acquisitions Evaluation & Ruleset 2022-02-22T18:19:22.408Z
D&D.SCP: Anomalous Acquisitions 2022-02-12T16:03:01.758Z
D&D.Sci Holiday Special: How the Grinch Pessimized Christmas Evaluation & Ruleset 2022-01-11T01:29:59.816Z
D&D.Sci Holiday Special: How the Grinch Pessimized Christmas 2021-12-31T16:23:41.223Z
Two Stupid AI Alignment Ideas 2021-11-16T16:13:20.134Z
D&D.Sci Dungeoncrawling: The Crown of Command Evaluation & Ruleset 2021-11-16T00:29:12.193Z
D&D.Sci Dungeoncrawling: The Crown of Command 2021-11-07T18:39:22.475Z
D&D.Sci 4th Edition: League of Defenders of the Storm Evaluation & Ruleset 2021-10-05T17:30:50.049Z
D&D.Sci 4th Edition: League of Defenders of the Storm 2021-09-28T23:19:43.916Z
D&D.Sci Pathfinder: Return of the Gray Swan Evaluation & Ruleset 2021-09-09T14:03:56.859Z
D&D.Sci Pathfinder: Return of the Gray Swan 2021-09-01T17:43:38.128Z
How poor is US vaccine response by comparison to other countries? 2021-02-17T02:57:11.116Z
Limits of Current US Prediction Markets (PredictIt Case Study) 2020-07-14T07:24:23.421Z

Comments

Comment by aphyer on A D&D.Sci Dodecalogue · 2024-04-12T15:25:22.421Z · LW · GW
Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset] · 2024-04-10T22:07:23.260Z · LW · GW

Will that extra credit be worth...uh...at least 1.98 gp?

Comment by aphyer on On the 2nd CWT with Jonathan Haidt · 2024-04-07T01:15:29.923Z · LW · GW

I can't help but read this post as something like this:

  1. Current government mandates around children are very harmful to children.
  2. Enforcement of current cultural norms around children is very harmful to children.
  3. ???
  4. We need to add on and enforce these three new government mandates around children and these two new cultural norms around children.

There is one section arguing that schools are prisons that children hate and are miserable in.  And then there is another section advocating for the schools to crack down harshly on children using their phones in school.  I find this somewhat depressing.

Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-04-05T19:16:39.612Z · LW · GW

Haven't found anything particularly good, but I've probably gone as far as I'll go.  I've done some analysis trying to predict how much variance we expect from each turtle so that I know how much to overestimate, and for the non-special turtles I'm predicting:

  • Abigail: 23.0lb
  • Bertrand: 19.0lb
  • Chartreuse: 26.2lb
  • Donatello Dontanien: 21.1lb
  • Espera: 17.3lb
  • (Flint is already estimated, as a gray turtle, at 7.3lb)
  • Gunther: 30.0lb
  • (Harold is already estimated, as a six-segmented clone, at 20.4lb)
  • Irene: 23.7lb
  • Jacqueline: 20.0lb

I'm rounding these to 0.1lb even though I'm allowed to go more granular, because if the Tyrant weighs at the same precision we do, he will also be rounding to 0.1lb, which means we gain nothing from extra precision (estimating 7.25lb gives a payoff exactly halfway between estimating 7.3lb and 7.2lb).

I'll put these estimates in the parent comment for ease of GM extraction.

The one interesting thing I've turned up is that Abnormalities appear to carry a very large amount of variance: each abnormality adds ~1lb to a turtle's average weight, but slightly over 1lb to the standard deviation of its weight.  I suspect that abnormalities are adding weight in a highly random way: my weight estimates for Espera, Irene and Jacqueline (0-abnormality turtles) are relatively low as a result, because my confidence in them was higher, while my estimate for Gunther (6 abnormalities?) has a lot more safety margin built in.
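
(For anyone following along at home, the check here is just a groupby - a minimal sketch, assuming the data sits in a hypothetical turtles.csv with hypothetical column names:)

```python
import pandas as pd

df = pd.read_csv("turtles.csv")  # hypothetical filename and column names

# Mean weight climbs ~1lb per abnormality, but the spread climbs at least as fast.
stats = df.groupby("abnormalities")["weight"].agg(["mean", "std", "count"])
print(stats["mean"].diff())  # step in average weight per extra abnormality
print(stats["std"].diff())   # step in weight stdev per extra abnormality
```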

Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-04-03T17:53:20.845Z · LW · GW

A simple linear regression analysis on the remaining turtles (everything that isn't a Fanged Gray Turtle or a Six-Segmented Harold Clone) gives the following formula:

  • 10.56lb base weight if green...
  • +2.02lb if grayish-green,
  • +5.47lb if greenish-gray,
  • +0.359lb/Wrinkle
  • +0.142lb/Scar
  • +0.598lb/Segment
  • +1.000lb/Abnormality

This does a reasonable job of prediction, but has a residual with a fairly-large ~2lb standard deviation.  Our standard-deviation math suggests that this means we should give the Tyrant answers overestimating each turtle by 2.4-2.5lb, and should expect to lose on average ~35gp/turtle to error.
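
(For concreteness, a minimal sketch of the fit itself - the filename, the column names, and the species labels are hypothetical stand-ins for however you've organized the data:)

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("turtles.csv")  # hypothetical filename and column names
# Drop the two special populations identified earlier:
other = df[~df["species"].isin(["fanged_gray", "harold_clone"])]

model = smf.ols(
    "weight ~ C(color) + wrinkles + scars + segments + abnormalities",
    data=other,
).fit()
print(model.params)               # roughly the coefficients listed above
resid_sd = model.resid.std()      # ~2lb residual standard deviation
print(resid_sd, 1.22 * resid_sd)  # overestimate by ~1.22 stdevs = ~2.4-2.5lb
```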

That seems like something we might be able to improve on, but I'm not sure how.  I haven't been able to find any useful interactions yet.  There does seem to be an obvious explanation for all the traits except Abnormalities: they're driven by some hidden Age variable - old turtles start getting grayish, are wrinklier, have grown more shell segments and accumulated more scars, and are larger.  However, I'm not sure how actionable this is for us.

The one thing it does look like I can do is adjust the amount of overestimation I do: it does seem that our estimate is less accurate as turtles get older and larger, and so rather than overestimating by 2.44lb for every turtle I should overestimate the larger ones by more and the smaller by less.  That's not going to be a very large improvement, though.  I feel like there ought to be something else to do, but haven't found anything yet.

Comment by aphyer on Religion = Cult + Culture · 2024-04-02T17:33:56.567Z · LW · GW

What is QC?

Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-04-02T15:32:04.529Z · LW · GW

The Fanged Gray Turtle seems relatively simple, so we look at that first.

The weight of a Fanged Gray Turtle seems well-approximated by (0.425 + 0.4568*#segments) lb.

This leaves behind a residual that looks roughly like a normal distribution with stdev ~0.357lb.  I'm not able to find any interaction of this residual with any other properties of the turtles - scars, mutations, etc. all seem unpredictive for the Fanged Gray Turtle.

Some quick math reveals that the Tyrant's asymmetric payoff distribution encourages us to overestimate a turtle's weight by ~1.22 standard deviations.  Therefore, we're going to bump up all our weight estimates by 0.435lb in order to flatter His Tyranny.  

(We could bump them up a bit further if we thought that reducing the odds of him having an unflattering portrait of us was worth trading off money for.  However, I actually think we can plausibly use that to extract more money: whatever itinerant artist he kidnaps to do that portrait, we can demand that they give us part of their commission in exchange for us being helpful and sitting for the portrait!  Kaching!)

There's only one Fanged Gray Turtle among the Tyrant's pets: Flint, with 14 Shell Segments.  Our best guess of Flint's true weight is 6.8lb, but we're going to overestimate this to 7.3lb in order to optimize our payoff.
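
(A minimal sketch of how those numbers combine, for anyone checking my arithmetic:)

```python
base, per_segment = 0.425, 0.4568  # fitted coefficients from above
resid_sd = 0.357                   # stdev of the regression residual
k = 1.22                           # optimal overestimate, in stdevs

segments = 14                            # Flint's shell segments
best_guess = base + per_segment * segments    # ~6.82lb
answer = round(best_guess + k * resid_sd, 1)  # ~7.3lb
print(best_guess, answer)
```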

 

And two (low-priority) questions for the GM:

  1. Are we unusually careful and competent at weighing turtles in a way that the Tyrant is not likely to be?  If he is careless about weighing his turtles, and introduces additional error, that increased variance makes us want to slightly increase how far we overestimate.
  2. What level of granularity are we able to give the Tyrant in our weight estimates?  I think that an estimate of 7.25lb for Flint is slightly higher-payoff than 7.3lb in expectation, but don't know if that's something I'm allowed to give.
Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-04-01T17:18:14.944Z · LW · GW

When we look at the distributions of variables individually, there's a startling number (5-6k out of 30k) of green turtles with 6 shell segments (the lowest number, never seen otherwise), zero wrinkles, and zero abnormalities, that weigh exactly 20.4lb.  

They do have varying numbers of scars, though, which makes me incline more towards 'some very particular turtle subspecies' and less towards 'one very friendly turtle that figured out that it can get extra attention by wiping off the mark you put on it and coming by again'.

Harold from the King's pets matches this pattern (and thus presumably is one of these strange clone turtles).

Removing those and looking at the rest of the universe:

  • The remaining green turtles now resemble the grayish-green and greenish-gray turtles, making me draw the following three species:
    • Fanged Gray Turtles.
    • Six-Segmented Harold Clones.
    • All Other Turtles.
  • Most variables are now reasonably smoothly-distributed:
    • Scars and Wrinkles look Poisson-like.
    • Abnormalities peak at 0 and fall off: that might also be a Poisson distribution, just with a lower mean, or might be something else.  (A quick mean-vs-variance check is sketched after this list.)
    • Weights are bimodal (with one peak around 5-6lb for the Fanged Gray Turtles, and one wider peak around 15-25lb for All Other Turtles).
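
That quick check, sketched with a hypothetical filename and hypothetical column names: a Poisson distribution has variance equal to its mean, so the variance/mean ratio should sit near 1.

```python
import pandas as pd

df = pd.read_csv("turtles.csv")  # hypothetical filename and column names
for col in ["scars", "wrinkles", "abnormalities"]:
    s = df[col]
    # For a true Poisson, the variance/mean ratio should be close to 1.
    print(col, round(s.mean(), 2), round(s.var(), 2), round(s.var() / s.mean(), 2))
```
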
Comment by aphyer on D&D.Sci: The Mad Tyrant's Pet Turtles · 2024-04-01T15:44:02.207Z · LW · GW

EDITED TO ADD FINAL ANSWER:

  • Abigail: 23.0lb
  • Bertrand: 19.0lb
  • Chartreuse: 26.2lb
  • Donatello Dontanien: 21.1lb
  • Espera: 17.3lb
  • Flint: 7.3lb
  • Gunther: 30.0lb
  • Harold: 20.4lb
  • Irene: 23.7lb
  • Jacqueline: 20.0lb

Getting started with my favorite first step of calculating a bunch of correlations (the one-liner for this step is sketched at the end of this comment):

  • It turns out that all fanged turtles are gray, and all gray turtles are fanged.
  • This suggests some kind of speciation by color.
  • When we break down by color:
    • Grayish-green and greenish-gray turtles show near-identical patterns - I assume those are the same species and you've just categorized them a couple different ways:
      • Weight is positively correlated with wrinkles, scars, shell segments and abnormalities.
      • The first three of these are positively correlated with one another, and probably reflect some hidden 'age' variable.  Number of abnormalities is not correlated with the others, and seems to do its own thing.  (Turtles grow larger with age, and also weigh more per extra mutant tentacle they have grown?)
      • Nostril size has no effect.
    • Gray turtles work differently:
      • They show the same pattern of wrinkles, scars and shell segments being positively correlated.
      • However, weight in this case seems to be almost entirely determined by # of shell segments.
      • Perhaps these turtles grow at a more predictable rate, with one shell segment per year that adds a regular amount of weight?
    • And green turtles also work differently:
      • The correlations between wrinkles, scars and shell segments have broken down.
      • Additionally, those variables have only small correlations with weight.
      • The most predictive variable towards weight is the # of abnormalities.
      • Perhaps these are strange mutant ninja turtles of some kind that are perpetually teenage and don't have a regular growth lifecycle?
  • It looks like these three species of turtle behave differently enough that I'm probably going to end up modelling the three of them all separately (except maybe the what-I'm-assuming-is-age effect that shows up on both gray and mixed-color turtles).
  • My planned next step is to try three independent simple regressions and see how predictive they are for each of those three types of turtle.
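
The correlation step itself is just a groupby-and-corr, sketched here with a hypothetical filename and hypothetical column names:

```python
import pandas as pd

df = pd.read_csv("turtles.csv")  # hypothetical filename and column names
cols = ["weight", "wrinkles", "scars", "segments", "abnormalities", "nostril_size"]
for color, group in df.groupby("color"):
    print(color)
    print(group[cols].corr().round(2))  # one correlation matrix per color
```
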
Comment by aphyer on On Lex Fridman’s Second Podcast with Altman · 2024-03-26T15:01:52.588Z · LW · GW

The argument Zvi is making.

Comment by aphyer on On Lex Fridman’s Second Podcast with Altman · 2024-03-26T12:52:26.403Z · LW · GW

They open with the Battle of the Board. Altman starts with how he felt rather than any details, and drops this nugget: “And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety.” If he truly believed that, why did he not go down a different road? If Altman had come out strongly for a transition to Murati and searching for a new outside CEO, that presumably would have been fine for AI safety. So this then is a confession that he was willing to put that into play to keep power.

 

I don't have a great verbalization of why, but want to register that I find this sort of attempted argument kind of horrifying.

Comment by aphyer on Using axis lines for good or evil · 2024-03-19T15:25:13.621Z · LW · GW

I'm surprised to hear you say that.  I would consider it perfectly reasonable to use a line graph without a zero-based y-axis to plot gravity against altitude: the underlying reality is in fact a line (well, a curve I guess)!  Gravitational force goes down with altitude in a known way!  But the effects of altitude on gravity are very small for altitudes we can easily measure, and extending the graph all the way down to zero will make it impossible to see them.

Comment by aphyer on Using axis lines for good or evil · 2024-03-19T13:39:21.721Z · LW · GW

If I measure gravitational force against altitude, and end up with points like the following:

  • 0 ft above sea level, force is 9.8000 m/s²
  • 1000 ft above sea level, force is 9.7992 m/s²
  • 2000 ft above sea level, force is 9.7986 m/s²
  • 3000 ft above sea level, force is 9.7980 m/s²

would it be egregious for me to plot this graph without a zero-based y-axis?  Do I need to plot it with a y-axis going down to zero?

Certainly there are cases where it's misleading to not extend a graph like this down to zero.  But there are also cases where it's entirely reasonable to not extend it down to zero.
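
For concreteness, here's the plot I have in mind - the only liberty taken is the zoomed y-axis; swap in plt.ylim(0, 10) and the trend vanishes:

```python
import matplotlib.pyplot as plt

altitude_ft = [0, 1000, 2000, 3000]
g = [9.8000, 9.7992, 9.7986, 9.7980]  # the measurements above

plt.plot(altitude_ft, g, marker="o")
plt.xlabel("Altitude (ft above sea level)")
plt.ylabel("Gravitational acceleration (m/s²)")
plt.ylim(9.7975, 9.8005)  # zoomed, non-zero-based y-axis
plt.show()
```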

Comment by aphyer on Toward a Broader Conception of Adverse Selection · 2024-03-15T13:48:59.274Z · LW · GW

I did not understand #8 at all.  I am confident that this is not because I don't understand the general point.  Does anyone have an explanation of #8?

Comment by aphyer on 'Empiricism!' as Anti-Epistemology · 2024-03-14T13:59:28.157Z · LW · GW

<trolling>

The S&P 500 has returned an average of ~8%/year for the past 30 years.  As you say, we have on many occasions observed people lying, cheating, and scamming.  But we have only rarely observed lucrative good ideas!  Why, even banks, which claim much more safety and offer much lower returns than the stock market, have frequently gone bust!

It follows inevitably, therefore, that there is a very high chance that the S&P 500, and the stock market in general, is a scam, and will steal all your money.

It follows further that the only safe investment approach is to put all your money into something that you retain personal custody of.  Like gold bars buried in your backyard!  Or Bitcoin!  

</trolling>

Comment by aphyer on If you weren't such an idiot... · 2024-03-02T20:32:54.567Z · LW · GW

'Bike' is sometimes used as shorthand for 'motorcycle', in which case the 'absurdly dangerous' claim stands. I agree that riding a pedal-powered cycle without a helmet is somewhat dangerous, and unnecessarily so, but not 'absurdly dangerous'.

Comment by aphyer on Less Wrong automated systems are inadvertently Censoring me · 2024-02-21T18:48:18.977Z · LW · GW

On the object level of your particular case, I don't see how you've ended up rate-limited.  The post of yours that I think you're talking about is currently at +214 karma, which makes it quite strange that your related comments are being rate-limited - I don't understand how that algorithm works, but this seems very odd.  Is it counting downvotes but not upvotes, so that +300 and -100 works out to rate-limiting?  That would be bizarre.

In the general case, however, I'm very much on board with rate-limiting people who are heavily net downvoted, and I think that referring to this as 'censorship' is misleading.  When I block a spam caller, or decide not to invite someone who constantly starts loud angry political arguments to a dinner party, it seems very strange to say that I am 'censoring' them.  I agree that this can lead to feedback loops that punish unpopular opinions, but that seems like a smaller cost than communities having to listen to every spammer/jerk who wants to rant at them.

Comment by aphyer on FTX expects to return all customer money; clawbacks may go away · 2024-02-14T15:43:39.116Z · LW · GW

The 'full repayment' part is only sort of true, in a similar way to what happened with Mt. Gox, due to bankruptcy claims being USD-denominated.

Suppose that:

  • You owe customers 100 Bitcoin and $1M.
  • You have only half of that, 50 Bitcoins and $500k. 
  • The current price of Bitcoin is $20k.

You are clearly insolvent.  You will enter bankruptcy, and the bankruptcy estate will say 'you have $3M in liabilities', since you owe $1M in cash and $2M in bitcoin.

Suppose that the price of bitcoin then recovers to $50k.  You now have $3M in assets, since you have $500k in cash and $2.5M in bitcoin!  You can 'fully repay' everyone!  Hooray!

Of course, anyone who held a Bitcoin with you is getting back much less than a bitcoin in value, but since the bankruptcy court is evaluating your claims as USD liabilities you don't need to care about that.
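
In code form, since the sleight-of-hand is easy to miss:

```python
# Toy numbers from the example above.
btc_owed, usd_owed = 100, 1_000_000
btc_held, usd_held = 50, 500_000

petition_price = 20_000  # BTC price when the bankruptcy begins
# Claims get frozen in USD at petition time:
liabilities = usd_owed + btc_owed * petition_price  # $3.0M

recovery_price = 50_000  # BTC price later
assets = usd_held + btc_held * recovery_price       # $3.0M: "full repayment"!

# A customer owed 1 BTC holds a $20k claim, paid "in full" as $20k...
# ...which now buys only 0.4 BTC.
print(liabilities, assets, petition_price / recovery_price)
```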

This 'full repayment' is plausibly still important from a legal or a PR perspective, but e.g. this part:

there is typically a legal fight over whether a company was insolvent at the time of the investment or that the investment led to insolvency. If every FTX creditor stands to get 100 cents on the dollar, the clawback cases that don’t involve fraud wouldn’t serve much of a financial purpose and may be more difficult to argue, some lawyers say

is better thought of as 'our legal system may get confused by exchange rates and pretend FTX was always solvent' rather than as 'FTX was actually always solvent'.

Comment by aphyer on Childhood and Education Roundup #4 · 2024-01-30T14:43:27.876Z · LW · GW

Competition should improve meth and reading outcomes here.

Is this a typo, or a snarky comment on reducing student drug use?

Comment by aphyer on Processor clock speeds are not how fast AIs think · 2024-01-29T15:25:53.233Z · LW · GW

We don't care about how many FLOPs something has.  We care about how fast it can actually solve things.

As far as I know, in every case where we've successfully gotten AI to do a task at all, AI has done that task far far faster than humans.  When we had computers that could do arithmetic but nothing else, they were still much faster at arithmetic than humans.  Whatever your view on the quality of recent AI-generated text or art, it's clear that AI is producing it much much faster than human writers or artists can produce text/art.

Comment by aphyer on Making every researcher seek grants is a broken model · 2024-01-29T14:27:08.219Z · LW · GW

This change would not get rid of the need for researchers to have a non-research skillset to secure funding.  It would just switch the required non-research skillset from 'wrangling money out of grant committees' to 'wrangling positions out of administrators'.  Your mileage may vary as to which of those two is less dysfunctional.

Comment by aphyer on Surgery Works Well Without The FDA · 2024-01-26T17:07:29.694Z · LW · GW

Keep quiet about it! If the FDA hears about this we won't be allowed to conduct surgeries any more!

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere [Evaluation and Ruleset] · 2024-01-23T02:11:26.447Z · LW · GW

Thanks for making this!

It looks like Simon was right about the effects of Pi and Murphy being linear/cubic in isolation: I modeled everything as logarithmic because it let me use simple linear regression more easily, and ended up just hitting pi/murphy with regressions until I got something that fit acceptably.  

(I am surprised that I got such good fits off things like 1/(7-Murphy); I wonder if that fits well with the log version of the chart for some reason.)

I think there was a bit of a missed opportunity in not having there be sneaky interactions/hypersphere effects.  This was a scenario where it would have been extremely fair to have an effect that triggered based on a threshold not of e.g. Latitude but of something horrendous like cos(Latitude)*cos(Shortitude)*cos(Deltitude): in any other scenario an effect like that might be overcomplicated, but here I think it would have been perfectly natural and made sense when uncovered.  I was looking for spheric-type effects, but the only thing like that was Longitude's effect being sine-wavey.
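
(For reference, the shape of the fit I ended up with - the filename, the column names, and the exact set of regressors here are somewhat hypothetical reconstructions:)

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("colonies.csv")  # hypothetical filename and column names
df["neg_log_perf"] = -np.log(df["performance"])
df["murphy_term"] = 1.0 / (7.0 - df["murphy"])  # the term I'm surprised worked

model = smf.ols("neg_log_perf ~ pi + murphy_term + longitude", data=df).fit()
print(model.summary())
```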

Comment by aphyer on There is way too much serendipity · 2024-01-20T23:42:37.413Z · LW · GW

They can't weigh in, they're dead!

Comment by aphyer on D&D.Sci Hypersphere Analysis Part 4: Fine-tuning and Wrapup · 2024-01-19T12:11:20.923Z · LW · GW

Huh, that is neat!

Comment by aphyer on D&D.Sci Hypersphere Analysis Part 4: Fine-tuning and Wrapup · 2024-01-18T13:06:26.357Z · LW · GW

Hm.  I'm trying to predict log of performance (technically negative log of performance) rather than performance directly, but I'd imagine you are too?

If you plot your residuals against pi/murphy, like the graphs I have above, do you see no remaining effect?

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-18T03:47:59.877Z · LW · GW

(and it seems that several other people have given the same exact answer, haha)

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-18T03:11:59.571Z · LW · GW

My submission:

96286
9344
68204
107278
905
23565
8415
62718
42742
83512
16423
94304

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-14T12:28:55.811Z · LW · GW

Oooooookay.  Do we know what time period the existing performance data was derived over, and how long that is compared to the time we have until the ship picks us up? 

I'm asking because

I see an effect of Longitude on performance that resembles what you'd see on Earth if sunlight was good for performance.  However, I'm nervous that this effect might be present in the existing data but change by the time our superiors evaluate our performance: if we choose locations on the day side of the planet, and then the planet rotates, then our superiors will come by and the planet will be pointed a different way.

If the existing data was gathered over months and our superiors are here tomorrow, I'd be willing to assume 'the planet doesn't meaningfully rotate' and put sites at Longitudes that worked well in the existing data.  But if the existing data is the performance of all those sites this morning, I'd need to find solutions that worked without expecting to benefit from Longitude effects.

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-14T12:09:23.543Z · LW · GW

Fixed link, thanks!

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-14T01:05:26.846Z · LW · GW

Do we know how the planet rotates/orients with respect to its sun (or any other local astronomical features)?

Comment by aphyer on D&D.Sci(-fi): Colonizing the SuperHyperSphere · 2024-01-13T20:17:24.051Z · LW · GW

I started putting together my analysis of this here; I'll try to update it as I make more progress.

Comment by aphyer on Introduce a Speed Maximum · 2024-01-11T15:01:49.971Z · LW · GW

I'm a bit concerned about the long-term effects of this plan (especially after laws formally change to disregard the old 'speed limit' signs entirely but strictly enforce the 'max' ones).

I believe that many current speed limits are much slower than the correct speed to drive at, and I don't think this is particularly controversial.  (I have driven on a perfectly straight, almost-empty six-lane freeway with a posted speed limit of 55 mph.)

This did not happen by random chance!  Aliens did not land and subtract 10 from every speed limit sign!  Existing pressures - revenue pressures from tickets?  political pressure from people who don't like cars driving fast by their house? - lead to speed limits being set too low.

If you switch to this new approach, in the absence of some reason to expect that not to happen again, I'd imagine that once your 'max' signs are the limit they will also be set too low.

Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-12T02:51:15.127Z · LW · GW

A potential answer, if you want to consider things through a pure econ lens, of why I would be skeptical of this policy even in a perfect-spherical-cow world without worries about implementation difficulties:

  • I commented this below, but it bears reiterating here: for almost all products, the manufacturer does not capture anything remotely close to the full value of what they produce.  The value of me having air travel available mostly accrues to, well, me; Boeing captures only a small portion of it.  It is entirely possible for something to be a large net benefit to the world, and also for the harm it causes to drastically exceed the manufacturer's portion of the benefits.
  • In theory, this can change by having Boeing raise their prices to capture more of the value they create, in order to pay this liability.  (I wish to note in passing that I believe this would have to be quite a large price increase, but I do not consider that central to the argument.  So long as the liability regime is applied evenly, and is impossible to evade by not having deep pockets - perhaps by requiring liability insurance? - all Boeing's competitors will also have to implement the same price rises).
  • At that point, consumers will consume less (I believe often much less) of the product.  Is that efficient?  Sometimes yes, sometimes no.  One central question to consider is whether reducing the amount of a product supplied would linearly reduce the amount of harm done:
    • Say a car sells for $20k, each car has a 1/10k chance of killing someone, you charge the manufacturer $10M if it does, the manufacturer adds $1k to the price of cars to pay this, and people respond by buying fewer cars.  This is a fairly strong case for your liability theory!  The small chance of people being killed by cars was an unpriced externality, which is being correctly internalized and hence reduced.
    • However, suppose that the supply of terrorism is determined by how many religious extremists have gotten mad at the US lately.  In this case, your increase to air prices reduces the amount of air travel but does not reduce the amount of terrorism!  Here, you've simply created a large deadweight loss by reducing the amount of air travel done, with no offsetting benefit.

In our real world, my objections are driven primarily by the ways in which our legal system is not in fact perfectly implemented.  A list that I do not consider complete:

  • I don't think it's uncommon for judgments against deep-pocketed defendants to be imposed vastly in excess of harm done, to be driven primarily by public opinion rather than justice, or to cause huge indirect damages for no real benefit. 
  • I believe that any company that found itself being legally liable for 9/11 would as a factual matter have been utterly destroyed by that liability, even if a perfectly even-handed justice system might have charged it only $100B, and that 'that company having more insurance/higher prices to let it cover those costs' would as a factual matter simply have resulted in it being charged more until it no longer existed, however much that required.
  • I am concerned that this policy would make many large industries be dominated by legal-skills over efficiency to a much greater extent than is already the case (and I kinda think that is already too much the case).  If GM has 1% better lawyers than Toyota, but Toyota has 1% better cars than GM, the more that 'legal liability costs' are a major impact on a company's bottom line the more GM ends up advantaged.
  • You suggest:
    • Product manufacturer/seller is still liable for all damages including from intentional acts, BUT
    • In case of intentional acts, the manufacturer/seller can sue whoever intentionally caused the damage to cover the cost to the manufacturer/seller.
  • I worry that this policy would lead to some interesting incentives around who to sell to.  If Bill Gates goes crazy and uses a chainsaw to murder a bunch of teenagers, the manufacturer can recover from him.  If I do, they cannot.  This means...that...they should charge Bill Gates a lower price than me?  We already have a lot of obnoxious politics around similar issues in car insurance, I'm not enthusiastic about extending that to every other industry.
Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-11T18:58:23.690Z · LW · GW

I have almost the exact inverse frustration, where it seems to me that people are extraordinarily willing to assume 'companies all have gigantic Scrooge McDuck-style vaults that can pay any amount with no effect' to the point where they will accept unexamined a claim that one corporation could easily accept full liability for 9/11.

Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-11T18:26:27.939Z · LW · GW

I am not sure how $150k is a remotely relevant number.  I tried Googling a couple vaguely-similar things:

Googling 'asbestos liability' turned up this page claiming 

The average asbestos settlement amount is typically between $1 million and $2 million, as Mealey's latest findings show. The average mesothelioma verdict amounts are between $5 million and $11.4 million.

and asbestos exposure is to my understanding usually not immediately lethal.  

Googling 'J&J talcum powder' turns up a lot of results like this one:

After 8 hours of deliberations Thursday, a St. Louis jury awarded $4.69 billion to 22 women who sued pharmaceutical giant Johnson & Johnson alleging their ovarian cancer was caused by using its powder as a part of their daily feminine hygiene routine.

The jury award includes $550 million in compensatory damages and $4.14 billion in punitive damages.

which works out to $25M/death even if we entirely ignore the punitive damages portion, and even if we assume that all 22 victims immediately died.

faul_sname, below, links:

Per NHTSA, the statistical value of a human life is $12.5M.

It doesn't seem at all uncommon for liabilities to work out to well upwards of $10M/victim, even in cases with much less collateral damage, much less certain chains of causation, much less press coverage, and victims not actually immediately dying.

$12.5M/victim across the roughly 3,000 deaths would be $37.5B, which is still less than Boeing's market cap today (though comparable to what Boeing's market cap was in 2001).  This also ignores all other costs of 9/11: Googling shows 6,000 injuries, plus I think a fairly large amount of property damage (edited to add: quick Googling turns up a claim of $16 billion in property damage to businesses).

And our legal system is not always shy about adding new types of liability - pain and suffering of victims' families?  Disruption of work?  Securities fraud (apparently the stock market dropped $1.4 trillion of value off 9/11)?  

You can argue the exact numbers, I guess, but I nevertheless think that, as a matter of legal realism if nothing else, imposing liability for 9/11 on Boeing would have ended up bankrupting it.  

Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-11T16:58:59.898Z · LW · GW

Most sellers of products do not capture 100% of the social benefit of their product.  Even if a seller is a pure monopoly, there is a large amount of consumer surplus.

Even if you manage to avoid punitive damages, and impose liability only equal to actual damage done (something that I do not in fact believe you can get our current legal system to do when a photogenic plaintiff sues a deep-pocketed corporation), this will still inefficiently shut down large volumes of valuable activity whenever:

[total benefit of the product] > [total harm of the product] > [portion of benefit captured by the seller]

Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-11T16:28:21.182Z · LW · GW

It sounds like this policy straightforwardly implies that 9/11 should have bankrupted Boeing.  Do you endorse that view?

Comment by aphyer on Principles For Product Liability (With Application To AI) · 2023-12-11T16:16:15.018Z · LW · GW

Many software products are free even if supplied by a corporation.  For example, the Visual Studio programming environment is free to you and me; Microsoft charges only for enterprise licenses.

If Microsoft is liable to the full depth of its very deep pockets for the harm of computer viruses I write in Visual Studio, and needs to pay for that out of their profits, they are unlikely to continue offering the free community license they currently do.

Comment by aphyer on What I Would Do If I Were Working On AI Governance · 2023-12-09T18:17:27.819Z · LW · GW

How do you distinguish your Case 1 from 'impose vast liability on Adobe for making Photoshop'?

Comment by aphyer on What I Would Do If I Were Working On AI Governance · 2023-12-09T03:41:36.313Z · LW · GW

I find your 'liability' section somewhat scary.  It sounds really concerningly similar to saying the following:

AI companies haven't actually done any material harm to anyone yet.  However, I would like to pretend that they have, and to punish them for these imagined harms, because I think that being randomly punished for no reason will make them improve and be less likely to do actual harms in future.

Comment by aphyer on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T02:25:39.698Z · LW · GW

I don't think you're being consistently downvoted: most of your comments are neutral-to-slightly positive?

I do see one recent post of yours that was downvoted noticeably, https://www.lesswrong.com/posts/48X4EFJdCQvHEvL2t/ethicophysics-ii-politics-is-the-mind-savior

I downvoted that post myself.  (Uh....sorry?) My engagement with it was as follows:

  1. I opened the post.  It consisted of a few paragraphs that did not sound particularly encouraging ('ethicophysical treatment...modeled on the work of Hegel and Marx' does not fill me with joy to read), plus a link at the top 'This is a linkpost for [link]'.
  2. I clicked the link.  It took me here, to a paywalled Substack post with another link at the top.
  3. I clicked the next link.  It took me here, to a sketchy-looking site that wanted me to download a PDF.
  4. I sighed, closed the tab, went back and downvoted the post.

I...guess this might be me being impatient or narrow-minded or some such.  But ideally I would like either to see what your post is about directly, or at least have a clearer and more comprehensible summary that makes me think putting the effort into digging in will likely be rewarded.

Comment by aphyer on Deception Chess: Game #2 · 2023-11-29T21:39:48.945Z · LW · GW

Chess is a game where, in every board state, almost all legal moves are terrible and you have to pick one of the few that aren't.

 

So is reality.

Comment by aphyer on Deception Chess: Game #2 · 2023-11-29T17:55:34.961Z · LW · GW

Another thing to keep in mind is that a full set of honest advisors can (and I think would) ask the human to take a few minutes to go over chess notation with them after the first confusion.  If the fear of dishonest advisors means that the human doesn't do that, or the honest advisor feels that they won't be trusted in saying 'let's take a pause to discuss notation', that's also good to know.

Question for the advisor players: did any of you try to take some time out to explain notation to the human player?

Comment by aphyer on why did OpenAI employees sign · 2023-11-28T01:21:48.660Z · LW · GW

This is true, but in general the differences between an ordinary employee and a CEO go in the CEO's favor.  I believe this does also extend to 'how are they fired': on my understanding the modal way a CEO is 'fired' is by announcing that they have chosen to retire to pursue other opportunities/spend more time with their family, and receiving a gigantic severance package.

Comment by aphyer on why did OpenAI employees sign · 2023-11-27T18:32:25.268Z · LW · GW

Disclaimer: I do not work at OpenAI and have no inside knowledge of the situation.

I work in the finance industry.  (Personal views are not those of my employer, etc, etc).

Some years ago, a few people from my team (2 on a team of ~7) were laid off as part of firm staff reductions.

My boss and my boss's boss held a meeting with the rest of the team on the day those people left, explaining what had happened, reassuring us that no further layoffs were planned, describing who would be taking over what parts of the responsibilities of the laid-off people, etc.

On my understanding of employment, this was just...sort of...the basic standard of professionalism and courtesy?

If I had found out about layoffs at my firm through media coverage, or when I tried to email a coworker and their email no longer worked, I would be unhappy.  If the only communication I got from above about reasons for the layoffs was that destroying the company 'would be consistent with the mission', I would be very unhappy.  In any of those cases, I would strongly consider looking for jobs elsewhere.

It has sometimes seemed to me that the EA/nonprofit space does not follow the rules I am familiar with for the employer/employee relationship.  Perhaps my experience in the famously kindly and generous finance industry has not prepared me for the cutthroat reality of nonprofit altruist organizations.

Nevertheless, any OpenAI employee with views similar to my own would be concerned and plausibly looking for a new job after the board fired the CEO with no justification or communication.  If you want a one-sentence summary of the thought process, it could be: 

'If this is how they treat the CEO, how will they treat me?'

Comment by aphyer on What are the results of more parental supervision and less outdoor play? · 2023-11-25T13:36:30.432Z · LW · GW

Visits to emergency rooms might not be down if parents are e.g. panicking and bringing a child to the ER with a bruise.

Comment by aphyer on OpenAI: The Battle of the Board · 2023-11-22T19:43:10.422Z · LW · GW
Comment by aphyer on OpenAI: The Battle of the Board · 2023-11-22T18:48:52.513Z · LW · GW

The board had a choice.

If Ilya was willing to cooperate, the board could fire Altman, with the Thanksgiving break available to aid the transition, and hope for the best.

Alternatively, the board could choose once again not to fire Altman, watch as Altman finished taking control of OpenAI and turned it into a personal empire, and hope this turns out well for the world.

They chose to pull the trigger.

 

I...really do not see how these were the only choices?  Like, yes, ultimately my boss's power over me stems from his ability to fire me.  But it would be very strange to say 'my boss and I disagreed, and he had to choose between firing me on the spot or letting me do whatever I wanted with no repercussions'?

Here are some things I can imagine the board doing.  I don't know if some of these are things they wouldn't have had the power to do, or wouldn't have helped, but:

  1. Consolidating the board/attempting to reduce Altman's control over it.  If Sam could try to get Helen removed from the board (even though he controlled only 2/6 directors?), could the 4-2 majority of other directors not do anything other than 'fire Sam as CEO'?  
    1. Remove Sam from the board but leave him as CEO?
    2. Remove Greg from the board?
    3. Appoint some additional directors aligned with the board?  
    4. Change the board's charter to have more members appointed in different ways?
  2. Publicly stating 'We stand behind Helen and think she has raised legitimate concerns about safety that OpenAI is currently not handling well.  We ask the CEO to provide a roadmap by which OpenAI will catch up to Anthropic in safety by 2025.'
  3. Publicly stating 'We object to OpenAI's current commercial structure, in which the CEO is paid more for rushing ahead but not paid more for safety.  We ask the CEO to restructure his compensation arrangements so that they do not incentivize danger.'

I am not a corporate politics expert!  I am a programmer who hates people!  But it seems to me that there are things you can do with a 4-2 board majority and the right to fire the CEO in between 'fire the CEO on the spot with no explanation given' and 'fire every board member who disagrees with the CEO and do whatever he wants'.  It...sort of...sounds like you imagine that in two weeks' time Sam would have found a basilisk hack to mind-control the rest of the board and force them to do whatever he wanted?  I do not see how that is the case?  If a majority of the board is willing to fire him on the spot, it really doesn't seem that he's about to take over if not immediately fired.

Comment by aphyer on OpenAI: Facts from a Weekend · 2023-11-21T14:34:29.131Z · LW · GW

If you pushed for fire sprinklers to be installed, then yell "FIRE", and turn on the fire sprinklers, causing a bunch of water damage, and then refuse to tell anyone where you thought the fire was and why you thought that, I don't think you should be too surprised when people contemplate taking away your ability to trigger the fire sprinklers.

 

The situation is actually even less surprising than this, because the thing people actually initially contemplated doing in response to the board's actions was not even 'taking away your ability to trigger the fire sprinklers' but 'going off and living in a new building somewhere else that you can't flood for lulz'.

As I'm understanding the situation, OpenAI's board had and retained the legal right to stay in charge of OpenAI even as all its employees left to go to Microsoft.  If they decide they would rather negotiate away from their starting point of 'being in charge of an empty building' by making concessions, this doesn't mean that the charter didn't mean anything!  It means that the charter gave them a bunch of power, which they wasted.