Yes, I meant specifically the Bay Area scene, since that's the only part of the LW community that's accused of excluding e/acc-ers.
It's interesting and relevant if you can say that in the NYC scene, this sort of thing is unheard of, and that you're familiar enough with that scene to say so, but it isn't 100% on point.
I described my feelings about human extinction elsewhere.
However, unlike the median commenter on this topic, you seem to grant that e/acc exclusion is a real thing that actually happens. That is,
I think the desire to exclude e/accs is mainly because of their attitude that human extinction is acceptable or even desirable,
is a strange thing to say if there was not, in fact, an actual desire among LW party hosts in Berkeley to exclude e/accs. So inasmuch as my doubts about the truth of this have been raised by other respondents, would you mind clarifying:
- If you do in fact believe that e/acc exclusion from LW parties is a real phenomenon.
- What kind of experience this is based on.
Would you describe yourself as familiar with the scene at all? You seem to imply that you doubt that e/acc exclusion is an actual thing, but is that based on your experience with the scene?
I'm not suggesting that you're wrong to doubt it (if anything I was most likely wrong to believe it), I just want to clarify what info I can take from your doubt.
This is a good point, but I don't intuitively see that it's particularly strong evidence that it must be unusual. I would expect an event like this to have more explicit rules than the average party.
This seems like good evidence and I don't think you would make it up.
I'm rapidly coming to the conclusion that Beff & co are exaggerating/full-of-it/otherwise-inaccurate.
Possibly the Aella thing was an anomaly, but also the thing that they actually really wanted to go to, and they're inaccurately (although not necessarily dishonestly) assuming it to be more widespread than it actually is.
Would you describe yourself as plugged into the LW party scene in Berkeley?
I wrote a long reply to your points, but ultimately decided it was a derail from the original topic. I'll PM you just for fun though.
I don't think there's anything misleading about that. Building AI that kills everyone means you never get to build the immortality-granting AI.
I didn't say it wasn't sensible. I said describing it that way was misleading.
If your short-term goal is in fact to decelerate the development of AI, describing this as "accelerating the development of Friendly AI" is misleading, or at least confused. What you're actually doing is trying to mitigate X-risk. In part you are doing this in the hopes that you survive to build Friendly AI. This makes sense except for the part where you call it "acceleration."
Incidentally, people don't seem to say "Friendly AI" anymore. What's up with that?
Thanks, this is extremely helpful. Having a clearer definition of how e/acc is understood on LW makes this much easier to think about.
Just for fun, I'll quibble: I would add to my list of e/acc heresies
Related to previous: Those who think that the wrong human having power over other humans is the thing we need to worry about.
I qualify insofar as I genuinely believe, to some extent, that various actors are trying to take advantage of LWers' sincerely held belief in the importance of decel-until-alignment to craft rules which benefit them and their short-term interests in money and power. This is bad, but people do this sort of thing in our society all the time, so you need to have perspective and recognize that it's not the literal end of the world. I don't know if I would say it's the thing we need to worry about, but it's more likely to cause harm now, whereas AGI is not.
Those like Beff Jezos, who think human extinction is an acceptable outcome.
I'd say it was an acceptable risk, and one that we're running anyway. It's reasonable to increase the risk slightly in the short run to reduce it in the long run. Is there an outcome with human extinction which I would also consider good? That's kind of hard to say. Like I think Neanderthal extinction was an acceptable outcome. So clearly "all humans are extinct and now there are only posthumans" is acceptable, for some values of posthuman. I dunno, it's all extremely academic and taking it too seriously feels silly.
Also
at least a Victorian NRC would be bad since they would decel the things that eventually made nuclear reactors possible
I think you misunderstood what I was getting at. The reason I object to a Victorian NRC is not that I want to avoid decelerating atomic physics (I don't even know if I ought to expect that). I object because it's quixotic. Or just plain silly. There are no nuclear reactors! What are you people in HMNRC even doing all day? Theorycrafting reporting standards for SCRAM incidents? How sure are you that you actually, you know, need to do that?
I think it counts. And while it's not the typical LW party, do you really think that prohibition says nothing about the scene? That seems like an odd opinion to me.
I think rationalists generally agree that speeding up the development of AGI (that doesn't kill all of us) is extremely important
Didn't Eliezer want a worldwide moratorium on AI development, with airstrikes on data centers if necessary?
Granted, I understood this to be on the grounds that we were at the point that AGI killing us was a serious concern. But still, being in favor of "speeding up AGI that doesn't kill us" is kind of misleading if you think the plan should be:
1. Slow down AGI to 0.
2. Figure out all of the alignment stuff.
3. Develop AGI with alignment as fast as possible.
I mean, sure, you want all 3 steps to happen as fast as possible, but that's not why there's a difference of opinion. There's a reason why e/acc refer to the other side as "decels" and it's not unwarranted IMO.
I would be more worried about getting kicked out of parties because you think "the NRC is a good thing"
Let's say "An NRC would be a good thing (at least on the assumption that we don't intend to be 100% libertarian in the short run)". I'm not going to die on the hill of whatever they may have done recently.
Didn't Aella explicitly reject e/acc's from her gangbang on the grounds that they were philosophically objectionable?
I assume this is April Fools' related, but I can't really tell
It's not.
I think there's less cohesive leadership of LW parties than you seem to think
That sounds likely. To be fair, I've mostly heard of this from
- Beff Jezos & Beff-adjacent people whining about it on Twitter
- Aella's gangbang
And I probably adjusted too hard off of this. Like nobody goes around prominently saying "actually we don't mind if e/acc show up, so long as they're nice" that I know of, but there's no reason to assume that they would.
Ask the host. If they're unclear or unsure, go and see if you feel unwelcome or uncomfortable, and leave if so.
So my initial reaction to this was to feel slightly insulted, but I realize that probably wasn't your intention.
I know how to go to parties. I'm not unusually awkward. I'm not a preacher. That's not the problem here. I can usually get along with people as long as they're not too nuts.
I asked this question because I believed that it was very common for LW parties to specifically want to exclude people on philosophical grounds. If they do, and if their objection applies to me, I want them to succeed. I've heard stories of e/acc people trying to sneak into Berkeley parties incognito. That's not my style.
Also, my model of the most likely circumstances in which I'd attend such a party was an acquaintance--who recognized that I seemed like an LW kinda guy but wasn't deeply familiar with my whole belief system--saying "Hey, there's this thing tonight we could go to." So asking the host might not be practical; said host might already be drinking and playing board games or whatever degeneracy usually goes on. Thus if the e/acc ban was as widespread as I thought, it would make sense to know ahead of time.
Would you leak that statement to the press if the board definitely wasn't planning these things, and you knew they weren't? I don't see how it helps you. Can you explain?
I don't have a strong opinion about Altman's trustworthiness, but even if I assume he just isn't trustworthy, I still don't get doing this.
the Bystander Effect is dumb
It's dumber than you think.
What, you don't think Plasmodium falciparum is a living being with a right to exist? Don't be such a humanity chauvinist.
I think we've gone well past the point of productivity on this. I've asked some lawyers for opinions on this. I'll just address a few things briefly.
If the whistleblower doesn't have evidence of the meeting taking place, and no memos, reports or e-mails documenting that they passed their concerns up the chain, it's perfectly reasonable for a representative of the corporation to reply, "I don't recall hearing about this concern."
I agree this is true in general, but my point is limited to the cases where documentation would in fact exist were it not for the company's communication policy or data retention policy. If there was a point in the Google case you brought up earlier where Google had attempted to cast doubt on a DOJ witness by pointing out the lack of corroborating evidence (which would have been deleted per Google's policy), I'd strongly reconsider my opinion.
What the article about the case said was just that DOJ complained that it would like to have all the documentation that Google destroyed, and that this probably contained evidence which proved their case. It did not say that Google challenged DOJ witnesses on a lack of corroboration between their testimony and the discoverable record.
It doesn't have to be the Google case. Any case where the defense tried to impeach a witness on grounds of lack of corroborating evidence where that evidence would have been intentionally destroyed by a data retention policy would do.
There are other things I disagree with, but as I said, we're being unproductive.
I've been trying to figure out how someone who appears to believe deeply in the principles of effective altruism could do what SBF did. ... It seems important to me to seek an understanding of the deeper causes of this disaster to help prevent future such disasters.
There's a part of my brain screaming "Why are you leaving yourself wide open to affinity fraud? Are you trying to ensure 'SBF 2: This Time It's Personal' happens or what?" However, I'll ask him to be quiet and explain.
The problem was that you should never go around thinking "Somebody who believes in EA wouldn't screw me, therefore this investment must be safe." Instead you should think "The rate of return on this investment is not possible without crime, therefore I don't know why somebody who claims to be an EA would do this, but I don't have to know, I just have to stay away." Or as I said in response to Zvi's book review
You have to think: this man wouldn't offer me free candy just to get in his unmarked van, that doesn't make sense. I wouldn't give anyone candy for that. What's going on here?
It doesn't matter why something is too good to be true. If it is, it must be a lie, and thus bad. Don't take the deal. In case it's not clear "taking the deal" can mean more than just investing with FTX; it also encompasses other sorts of relationships one might get into with SBF or FTX, like taking their money or allowing them to be a public symbol of you.
The point here is that understanding human psychology and motivations, especially where the human you're trying to understand might be trying to trick you, is way harder than just knowing what sorts of returns are possible on capital investments with given amounts of risk. You can try to understand the SBFs of the world in the hopes of being able to identify them, but why do all that extra work? Just don't trust anyone who says they can make you a 50% return on your investment in a year with zero risk (or comparable risk to T-bills) because every single one of them is lying and committing crimes.
But not much worse; against counterexamplebot, ringer tit-for-tat will defect (so almost full double-defect), and tit-for-tat will always cooperate, so for that match ringer tit-for-tat is down about 50 points (assuming 50 rounds so the score is 100-49). Ringer tit-for-tat then picks up 150 points for each match against the 2 ringers, and the score is now (300-349). And it's only this close because the modified strategy is tit-for-tat rather than something more clever.
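For concreteness, here's that arithmetic as a runnable sketch. The payoff values are my inference from the numbers in this thread (2 each for mutual cooperation, 3 for defecting against a cooperator, 0 for being defected against, 1 each for mutual defection); I haven't checked them against the tournament rules:

```js
const ROUNDS = 50;
// Payoffs to me: both cooperate, I defect vs cooperator,
// I cooperate vs defector, both defect.
const [CC, DC, CD, DD] = [2, 3, 0, 1];

// vs counterexamplebot: plain tit-for-tat cooperates all game,
// ringer tit-for-tat eats one sucker round and then mutually defects.
const tftVsCounter = ROUNDS * CC;               // 100
const ringerVsCounter = CD + (ROUNDS - 1) * DD; // 49

// Add the two shill matches: mutual cooperation for plain tit-for-tat,
// full exploitation of the shills for ringer tit-for-tat.
const tftTotal = tftVsCounter + 2 * ROUNDS * CC;       // 300
const ringerTotal = ringerVsCounter + 2 * ROUNDS * DC; // 349

console.log(tftTotal, ringerTotal); // 300 349
```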
Also, this assumes that a bot even can defect specifically against ringer tit-for-tat. Insofar as ringer's source is secret and the identification is a function of the sources of both ringer and shill, this may not be possible. If I understood correctly, we only have access to ourselves and our opponent during the match, so we can't ask whether some 3rd bot would always cooperate with our opponent while that opponent would always defect against it.
It seems unlikely to me that this ringer/shill strategy will be particularly good compared to the other options
It will absolutely be guaranteed to be better than equivalent strategies without ringer/shill. Remember that ringer/shill modifies an existing strategy like tit-for-tat. Ringer tit-for-tat will always beat tit-for-tat since it will score the same as tit-for-tat except when it goes up against shill tit-for-tat, where it will always get the best possible score.
This means that whatever the strongest ~160-character strategy is, its ringer/shill version will outscore it by exploiting 2 extra opponents. Intuitively, it seems unlikely that anyone will come up with a 240-character strategy which is that much stronger than the best ~160-character strategy. Partly this is because I suspect that the more sophisticated strategies people will actually come up with will start running up against the execution time barrier and won't have time to make positive use of all that complexity.
you haven't provided a compelling reason why I need to disallow it.
You don't need to disallow it, I'm just saying that it would be ideal if it could be disallowed. It could easily not be worth the trouble.
My default is that people shouldn't be judged by random strangers on the internet over the claims of other random strangers on the internet. As random strangers to Sam, we should not want to be in judgment of him over the claims of some other random stranger. This isn't good or normal or healthy.
Moreover, it is unlikely that we will devote the required amount of time & effort to really know what we're talking about, which we should if we're going to attack him or signal boost attacks. And if we are going to devote the great amount of time necessary, couldn't we be doing something more useful or fun with our time? A lot of good video games have come out recently, for example.
It would be different if I knew Sam personally. I would ask him about it, see what he had to say, and draw a conclusion. It might be worth it to me to know the truth. But I don't. This has the same flavor to me as being really invested in any more conventional celebrity. Like apparently there was some kerfuffle with Johnny Depp and Amber Heard a while ago. My response to that was that I genuinely could not care less. In fact, I actively did not want this bullshit taking up space in my brain. I intentionally avoided learning anything about it. And I'm glad I did. Please don't tell me what it was about.
I mean, yes, it's not currently against the rules, but it obviously should be (or technical measures should be added to render it impossible, like adding random multiline comment blocks to programs).
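Purely to illustrate (the tournament has no such mechanism, and breakSelfRecognition is a hypothetical helper): if the runner appended a random comment block to every submission before play, every source's character-code sum would change, and hash-based self-recognition would stop working.

```js
// Hypothetical countermeasure sketch: appending a random multiline comment
// changes the submission's character-code sum without changing its behavior.
const breakSelfRecognition = (src) =>
  src + `\n/* ${Math.random().toString(36).slice(2)} */`;
```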
Presumably the purpose of having 3 bots is to allow me to try 3 different strategies. But if my strategy takes less than about 160 characters to express, I have to use the ringer/shill version of the same strategy since otherwise I will always lose to a ringer/shill player using the same strategy but also shilling for his ringer. And the benefit of the ringer/shill strategy will be constant, since it shouldn't be triggered by other players, just my bots. So it just means all the strategies are inflated by a constant amount. And it requires everyone to adopt the strategy, which is a waste of effort.
Academia is sufficiently dysfunctional that if you want to make a great scientific discover(y) you should basically do it outside of academia.
I feel like this point is a bit confused.
A person believing this essentially has to have a kind of "Wherever I am is where the party's at" mindset, in which case he ought to have an instrumental view of academia. Like obviously, if I want to maximize the time I spend reading math books and solving math problems, doing it inside of academia would involve wasting time and is suboptimal. However, if my goal is to do particle accelerator experiments, the easiest way to do this may be to convince people who have one to let me use it, which may mean getting a PhD. Since getting a PhD will still involve spending a bunch of time studying (if slightly suboptimally as compared to doing it outside of academia) then this might be the way to go.
See, we still think that academia is fucked, we just think they have all the particle accelerators. We only have to think academia is not so fucked that it's a thoroughly unreasonable source of particle accelerator access. We can still publish all our research open access, or even just on our blog (or also on our blog).
The tournament allows you to enter 3 bots, so we should be able to cheat if our bots can recognize each other. To do so, we have 1 bot, "ringer", and 2 bots, "shill0" and "shill1", and follow this strategy:
ringer: if my opponent is shill0 or shill1, always defect. otherwise, play tit-for-tat.
shill0 (and likewise shill1): if my opponent is ringer, always cooperate. otherwise play tit-for-tat.
To recognize each other, since we have access to the source code of both bots during the game, we need a hash function which is efficient in terms of characters used. One possibility would be to simply sum the character codes. So we have:
```js
// one of the programs I grabbed at random as an example
let s = "r = h.length === 0 ? 1 : (h[h.length - 1].o === 1 ? h[h.length - 1].m : 1 - h[h.length - 1].m)"
// sum of character codes; 6589 for this example program
let id = [...s].reduce((a,c)=>a+c.charCodeAt(0),0)
```
This expression takes 41 chars, so we should be able to fit that in along with the required logic.
The obvious difficulty is that somebody could attempt to piggyback: craft a bot whose hash adds up to the same as the one our shills will auto-cooperate with. However, we should be able to solve this:
shill: if hash(c+s) = N, always cooperate. otherwise, play tit-for-tat.
ringer: if hash(c+s) = M, always defect. otherwise, play tit-for-tat.
where c is the source code of the opponent and s is the source code of ourself.
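Here's a minimal sketch of the hardened version, with one shill shown. The move signature, the "cooperate"/"defect" history format, and the hash values N and M are all my assumptions for illustration, not the tournament's actual interface:

```js
// Char-code sum, as above.
const hash = (str) => [...str].reduce((a, ch) => a + ch.charCodeAt(0), 0);

// Placeholders: precompute these from the real sources before submitting.
const N = 0; // hash(ringer's source + shill's source)
const M = 0; // hash(shill's source + ringer's source)

// Assumed convention: each move function gets the opponent's source c, our
// own source s, and the opponent's past moves as "cooperate"/"defect" strings.
const titForTat = (theirHistory) =>
  theirHistory.length === 0 ? "cooperate" : theirHistory[theirHistory.length - 1];

// shill: auto-cooperate only when the combined-source hash matches ringer's.
const shillMove = (c, s, theirHistory) =>
  hash(c + s) === N ? "cooperate" : titForTat(theirHistory);

// ringer: always defect against its own shills, play tit-for-tat otherwise.
const ringerMove = (c, s, theirHistory) =>
  hash(c + s) === M ? "defect" : titForTat(theirHistory);
```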
We note that the rules of the challenge permit us to submit one of our bots by private message so that other players cannot see it. Therefore, we submit the source of ringer privately, and nobody should be able to piggyback.
The part that confuses me about this is twofold...
What you're missing is how specific and narrow my original point was. It only looks like you are concealing evidence if you do two things simultaneously:
- Challenge witness testimony by saying it's not corroborated by some discoverable record.
- Have a policy whereby you avoid creating the discoverable record, periodically delete the discoverable record, or otherwise make it unlikely that the record would corroborate the testimony.
So basically you have to pick one or the other. And you're probably going to pick the 2nd. So as a witness, you probably just aren't going to have to worry about those kinds of challenges.
Defense counsel asks why, if the risk was so great and the witness so certain, they were unwilling to follow the clearly established procedure under section 34.B.II.c of the communication policy.
This is a really good and interesting point, but I ultimately don't think it will work.
I'm going to say I just told my boss because I didn't understand 34.B.II.c; nobody does. The fact that I didn't follow the exception rule isn't going to convince anyone that the conversation didn't happen. The jury will get to see the communication policy in its full glory. Plaintiff's counsel will ask me whether anyone ever explained the policy or rule 34.B.II.c to me (of course not), and whether we had a class to make sure we understood how to use the policy in practice, like we had for the company sexual harassment policy (of course there was no class). And failure to follow the exception rule won't prove that I didn't think the risk was serious. I told my boss and colleagues. People knew. What good would getting out my legal dictionary and parsing the text of 34.B.II.c do as far as helping people address the issues I raised? I dunno, I never even considered it as a possibility. I'm just a simple country engineer who wanted people to fix a safety issue.
Linking this back to the OP, the strategy of sending an email to make the risk discoverable as suggested is reliant on converting the actual risks to a legal risk because the legal risk is a bigger factor in company decision making, and the goal is to make the company choose not to take the risk at all.
I'm not sure this makes sense, except as a psychological trick. Like, the legal risk is that the actual risk will manifest itself and then someone will sue. I feel like everyone just understands this clearly already. Remember I've already told everyone about the risk. If the risk manifests, of course we will be sued. So maybe by making the fact that I told them readily discoverable, they'll be less likely to ignore it just because the idea that I'd be willing to testify about it is something they could ignore or not consider. This is plausible, but we have to compare it to other options. Like I could also just say "If this were to happen, I would testify that I thought it might happen and that I told you guys." How well would that work compared to trying to make things discoverable? I dunno, it might work better.
Out of curiosity, did you get any static afterward, or was it just an "oops" and done?
Assuming you're talking about my story, it was basically the latter, although that might have had something to do with the fact that we abandoned the idea, so it was never going to come up. I'm pretty sure that I was quietly regarded as less reliable and less of the sort of clever operator who would instinctively keep them out of trouble, and that this affected my chances of advancement. No doubt I could have demonstrated growth in that regard and fixed the issue, but I didn't stick around that company long enough for it to be relevant (again, for other reasons).
I still disagree. If it wasn't written down, it didn't happen, as far as the organization is concerned.
Obviously I violently disagree with this, so assuming it was supposed to be meaningful and not some kind of throwaway statement, you should clarify exactly what you do and don't mean by it.
The engineer's manager can (and probably will) claim that they didn't recall the conversation
They may say this, but I think that you aren't thinking clearly enough in terms of the logical chain of argument that the hypothetical legal proceeding is trying to establish. A lawyer has a witness answer specific questions because they support specific facts which logically prove his case. They don't just say stuff.
Suppose plaintiff didn't have a witness, but wanted to try to establish that the company knew about the widgets in order to establish responsibility and/or negligence. Plaintiff might ask a manager "Did any engineers mention that the widgets were dangerous?" And he might reply "I don't recall" at which point plaintiff is SOL. On the other hand, if plaintiff has already elicited testimony from the engineer to the effect that the conversation happened, could defendant try to imply that it didn't happen by asking the manager whether he recalled the meeting? I mean, yes, but it's probably a really bad strategy. Try to think about how you would exploit that as plaintiff: either so many people are mentioning potentially life-threatening risks of your product that you can't recall them all, in which case the company is negligent, or your memory is so bad it was negligent for you to have your regularly-delete-records policy. It's like saying I didn't commit sexual harassment because we would never hire a woman in the first place. Sure, it casts doubt on the opposition's evidence, but at what cost?
or dispute the wording, or argue that while the engineer may have said something, it wasn't at all apparent that the problem was a serious concern.
Disputing the wording is probably a bad idea; arguing that the engineer did say something, but we decided it was not an issue is what I've been saying they would do. But it involves admitting that the conversation happened, which is most or all of what a discoverable record would establish in the first place.
Suppose the engineer originally told management that the widgets would explode if heated to 80C, and this is what he testified that he told management. One approach that management could take is to say "I recall that conversation precisely and the engineer said it would explode at 70C, not 80C. We tested it up to 70C and nothing went wrong. Now they're lying and saying they said 80C all along in order to make us look negligent." This is possible, but it's an extremely dangerous game. For example, as a lawyer you would be ethically prohibited from advising a client to attempt this defense, and your career would be over if it ever came out. Also, it's the sort of thing that could be checked: does it even make sense to believe that it would explode at 70C? Assuming the engineer used the normal methods he always used in his job (like some kind of simulation software) to arrive at 80C, what would he have to do wrong to think it was 70C? Does this even make sense to make those errors? Successfully lying in court about something technological could be very difficult.
Also keep in mind that if we're going to assume the company will lie on the stand about complex technical points, presumably they're also willing to doctor the discoverable record, if they think they can get away with it. If they are, then creating a discoverable record might not be enough. Now you'd have to steal copies of the discoverable record prior to anything going wrong. In other words we're going to do something wrong because we think that maybe someone else will do something wrong in the future. This is not something to mess around with.
One reason not to mess with this is that we have other options. I could keep a journal. If I keep notes like "2023-11-09: warned boss that widgets could explode at 80C. boss said they didn't have time for redesign and it probably wouldn't happen. ugh! 2023-11-10: taco day in cafeteria, hell yeah!" then I can introduce these to support my statement. Also, if I told my wife that I was unhappy about the conversation with management right after it happened and she recalls that I said 80C, plaintiff can call her to testify to prove my point (this is the present sense impression exception to hearsay).
There's a reason that whistleblowers focus so hard on generating and maintaining a paper trail of their actions and conversations, to the point that they will often knowingly and willfully subvert retention policies
I agree, but we're now talking about whistleblowers in general, and corporate malfeasance in general. There are absolutely situations in which subversion is reasonable. But there are also situations in which it's unreasonable. In the original example, an engineer has a safety concern which he wants to communicate to management, and to be able to establish that he did this at a later date. I don't think this calls for full-on defiance, open or secret.
In general, I'd say it makes sense to do subversion/defiance when I have good reason to believe that the organization's conduct is criminal (not e.g. tortious) right now. For example, I work for Madoff Investment Securities and I think they're actively doing securities fraud right now, not maybe in the future. Then I should consider keeping a few documents for later in defiance of company policy.
Why wouldn't the defendant dispute this?...In this case, I would expect the defendant to produce analyses showing that the widgets were expected to be safe
You're not reading carefully enough. The thing I said that the defendant would not dispute is the fact that the engineer said something to them, not whether they should have believed him. This is why I said later on
Remember, they're not conceding the whole case, just the fact that the engineer told them his opinion. What they're going to do instead is admit that the first engineer told them, but that they asked some other engineers to weigh in on the point, and those engineers disagreed.
Of course the company will defend their decision in either case. My point is about what you gain by having a record of having raised a concern versus testifying that you raised that concern. My opinion is that they're the same unless there's a reason to doubt that you raised the concern like you say you did. And if the defendant doesn't challenge the claim that the concern-raising happened, why would there be?
If you destroyed records, per your normal documented retention policies prior to any court case being filed, there's no grounds for adverse inference.
Yes, this is correct. I was simplifying it.
Every company I've worked for has had retention policies that call for the automatic deletion of e-mails after a period of time (5-7 years).
I don't doubt it, but I think you're missing the point here. What I'm referring to by "defendant's strategy" is not the practice of regularly deleting things, but the trial strategy of attempting to rebut witness testimony by claiming that testimony is not corroborated by records while simultaneously regularly deleting records. I agree that regularly deleting things can be very useful to your legal strategy, it just takes certain options off the table for you. Either you can rebut testimony by saying it doesn't match the record or you can regularly delete that record, but you can't do both without getting crucified.
Google did not pursue that strategy. Or at least, if they did, the article you linked doesn't say so.
What I am saying is that Google did not, and would not, do the following: when Barton testified that
Google feared if Bing became the default search engine on Android, "then users would have a 'difficult time finding or changing to Google.'"
Google would not respond that he was talking horseshit and ask why, if what he was saying was true, there isn't any evidence of it in their internal employee communications. They would not say this because DOJ would say that this corroborating evidence did not exist because Google took steps to ensure it would not.
When I worked at a different FAANG company, I was told in orientation to never use legal terminology in e-mail, for similar reasons.
Same here. I was told not to say that this new change would allow us to "Crush Yahoo in terms of search result quality". But I understood the idea to be that since in real life what we were trying to do was just maximize search result quality, we shouldn't let our jocularity and competitive spirit create more work for Legal.
Of course, maybe the real FAANG I worked for wasn't Google and I'm just adapting the real story to Google for anonymity purposes. Who knows?
It's not that safe communication guidelines are damning. It's that claiming that the lack of discoverable evidence corroborating your statement disproves it while simultaneously having conspired to ensure that discoverable evidence would not exist would be damning.
What makes you conclude that personal testimony weighs more under the law than written documents?
It doesn't matter. Plaintiff wants to prove that an engineer told the CEO that the widgets were dangerous. So he introduces testimony from the engineer that the engineer told the CEO that the widgets were dangerous. Defendant does not dispute this. How much more weight could you possibly want? The only other thing you could do would be to ask defendant to stipulate that the engineer told the CEO about the widgets. I think most lawyers wouldn't bother.
Why would the defense prefer to basically concede the case than have their communication policy entered into evidence?
Because it makes it look like they're trying to conceal evidence, which is much worse for them than simply maybe being negligent. It could be grounds for punitive damages or an adverse inference ruling or both. It would also be so easy for plaintiff to score off of that the court might not even bother with an adverse inference.
If I say I want you to turn over your email records to me in discovery to establish that an engineer had told you that your widgets were dangerous, but you instead destroy those records, the court will instruct the jury to assume that those records did contain that evidence. This is an adverse inference.
Even if there's no adverse inference, just think about what happens. Defendant attempts to counter the testimony by saying that if this meeting took place there would be records, but there aren't, so you must be lying. Plaintiff responds by showing that defendant had a policy designed to prevent such records from being created, so defendant knows that records would not exist whether the meeting took place or not, and thus his argument is disingenuous. Would you follow defendant's strategy here? I wouldn't.
Remember, they're not conceding the whole case, just the fact that the engineer told them his opinion. What they're going to do instead is admit that the first engineer told them, but that they asked some other engineers to weigh in on the point, and those engineers disagreed. They decided to trust the other engineers, and thus the resulting injury wasn't negligent, it was just a mistake.
It seems to me even companies that always do the right thing don't want to have to litigate about email chains where a bunch of people are talking out of their backside.
This is what I was saying before. And moreover, if the actual main effect of these policies was to prevent creation of discoverable records of real bad stuff the companies deserved to burn for, they'd probably risk trouble just for having them in the first place.
In my own case, I was working on a project where I thought it would be useful for us to have a copy of the entire WHOIS database. Under ICANN's rules (IIRC), a registrar who got the WHOIS database in the first place was required to sell copies to organizations which meet certain criteria, which we definitely did. I was told that regardless of ICANN's rules, registrars all just ignore that rule and refuse to sell the database. I said that in that case we should just flagrantly violate their ToS and scrape ourselves a full copy from their free lookup interface, and if they didn't like it they could sell me a copy like they were required to. It's not like they'd be able to stop me. Legal got in touch and said basically "Look, we're not saying you're wrong, but keep that off of discoverable channels. Proving that you're right in court costs money." Which of course was correct, and exactly what I was missing. As an engineer, I could establish that it was both feasible and legally winnable, but it takes a lawyer to know about the expected legal costs in the event of a conflict and how to minimize those. Anyway, we never ended up actually doing it for unrelated reasons.
We are taking many of the brightest young people. We are telling them to orient themselves as utility maximizers with scope sensitivity, willing to deploy instrumental convergence. Taught by modern overprotective society to look for rules they can follow so that they can be blameless good people, they are offered a set of rules that tells them to plan their whole lives around sacrifices on an altar, with no limit to the demand for such sacrifices. And then, in addition to telling them to in turn recruit more people to and raise more money for the cause, we point them into the places they can earn the best ‘career capital’ or money or ‘do the most good,’ which more often than not have structures that systematically destroy these people’s souls.
See, I don't think that's the problem. Or at least not the only one. And maybe not the important one. I think the issue was more of a failure-to-be-suspicious-of-too-good-to-be-true type of error, which seems common in these kinds of cases.
When Madoff blew up, my dad told me that what was going on was that lots of the people investing with Madoff knew the returns he was promising weren't possible legitimately. So they assumed he must be screwing somebody, and this must be somebody else other than them. Unfortunately it turned out he was also screwing them. Whoops!
When Sam came along, people could have looked and said this seems like an unreasonably positive windfall. Therefore, I should be suspicious. The more unreasonably generous his donations, the more I should want to know more, to double-check everything. But they didn't. Plenty of other people have made this mistake before, so I'm not being too critical here, but that's what it takes. You have to think: this man wouldn't offer me free candy just to get in his unmarked van, that doesn't make sense. I wouldn't give anyone candy for that. What's going on here?
I've seen several reports of people saying that they in fact believed at the time that Sam must have been doing something slightly sketchy, and that maybe that was not such a good thing, but ultimately concluded that it wasn't worth worrying about. The lesson we should have taken from Madoff is that it's always worth worrying about. It could always be worse than your initial guess. People must not have seriously asked themselves "What's the worst thing that could happen?" because the obvious answer would be "He's another Madoff."
Personally, if you had asked me what was going to happen with FTX in early 2022, I'd have said the same thing as Madoff. But I'm super-suspicious of everything to do with crypto so maybe this doesn't count as extraordinary prescience. I'd probably have said the same thing in early 2021 too.
In your first example, it's clear that expected loss is as important as intent. It's not just that you probably don't have a strong intent to misuse their data. It's that the cost of you actually having this intent is pretty small when you only have access to whatever data is left on your laptop, compared to when you had access to a live production database or whatever. In other words, it's not that they necessarily have some sort of binary model of intent to screw them. Even if they have some sort of distribution of the likelihood that either now or soon you will want to misuse the data, it doesn't matter because the expected loss is so small in any model.
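To put toy numbers on it (every value here is made up purely to illustrate the shape of the argument):

```js
// Expected loss = P(intent to misuse) * damage if misused.
const expectedLoss = (pMisuse, damage) => pMisuse * damage;

// Same hypothetical probability of bad intent, very different stakes:
console.log(expectedLoss(0.01, 1e6)); // live production DB access: 10000
console.log(expectedLoss(0.01, 1e3)); // leftover data on a laptop: 10
```

Under any plausible probability, cutting the access cuts the expected loss to roughly nothing, so they don't need to settle on a model of your intent at all.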
To impute a binary model of intent to screw them, they'd have to do something like this: Previously we had lots of people who just had root access to our production environment. We now want to tighten up and give everyone permission sets tailored to their actual job requirements. However, silentbob has been with us for a while and would have screwed us by now if that was their intent, so let's just let them keep their old root account since that's slightly less work and might slightly improve productivity.
In your second example, I'm involuntarily screaming NO NO NO DO NOT MAKE THAT CHANGE WHAT THE FUCK IS WRONG WITH YOU before I even get to their reasoning. By the time I've read their reasoning I already want them fired. When you report the results of the A/B test data, I'm thinking: well of course the data showed that. How could you possibly think anything else could happen?
I'm starting to think I've been programming for too long. Like, I didn't even have to think to know what would actually happen. I just felt it immediately.
In your third example, I don't think that's how gun enthusiasts usually reason about this point, but I respect that this isn't really what you're getting at.
I genuinely can't think of a situation where this makes sense, either as a way to keep the email clean for discovery or anything vaguely related (like concerns about employees leaking to journalists). On the other hand, it makes a lot of sense for phishing prevention. Seriously, if you can think of an example, tell me. I'm stumped.
I feel like a flaw here is that, given we're assuming our hero is willing both to speak up about risks and to provide evidence to lawyers suing his company when things go wrong (because he wanted to communicate those risks in discoverable form), how much actual benefit does this provide over just being willing to testify? If I testify that I told the CEO that our widgets could explode and kill you, the opposition isn't going to be so stupid as to ask why there isn't any record of me bringing this to the CEO's attention; plaintiff's lawyer would hardly be able to contain his delight as he asked the court to enter "WidgetCo Safe Communication Guidelines" into evidence. The opposition would much rather just admit that the conversation happened.
If there isn't much benefit, then why would you take the risk of defying the company's communication policies? You may have legal recourse in the event of retaliation, but you may not succeed. You don't want to do something stupid just because you have legal recourse if it goes wrong. Especially if you have less stupid options that meet the case.
I think companies have these policies in part because not every discoverable document which can be used effectively against them in a lawsuit is necessarily an actual example of something that was an actual legal wrong that should be used against them. Sometimes you might have an innocuous concern which is effectively twisted into damning evidence by some shyster of a lawyer. You can ultimately prove that it is innocuous, but you have to spend millions of dollars in legal fees to do so. And so you take the $250,000 settlement offer said shyster proposes, which was his plan all along.
So then when you go to sue your former employer for firing you over your defiance of the company's communication policy, the company's lawyer says that they have legitimate concerns about frivolous lawsuits, they have a policy to help prevent this, and you refused to abide by it, so they fired you. They recognize that you couldn't have been prevented from testifying about risks and your communication of those risks if they got sued and you were subpoenaed, so it's not like they were trying to fire you to cover something up. So it wasn't retaliation. So you lose.