Comments

Comment by howie-lempel on Maybe Lying Doesn't Exist · 2019-12-26T04:05:23.822Z · score: 11 (3 votes) · LW · GW

[I'm not a lawyer and it's been a long time since law school. Also apologies for length]

Sorry - I was unclear. All I meant was that civil cases don't require *criminal intent.* You're right that they'll both usually have some intent component, which will vary by the claim and the jurisdiction (which makes it hard to give a simple answer).

---

tl;dr: It's complicated. Often reckless disregard for the truth or deliberate ignorance is enough to make a fraud case. Sometimes a "negligent misrepresentation" is enough for a civil suit. But overall both criminal and civil cases usually have some kind of intent/reckless indifference/deliberate ignorance requirement. Securities fraud in NY is an important exception.

Also I can't emphasize enough that there are 50 versions in 50 states, and securities fraud, mail fraud, wire fraud, etc. can all be defined differently in each state.

----

After a quick Google, it looks to me like the criminal and civil standards are usually pretty similar.

It looks like criminal fraud typically (but not always) requires "fraudulent intent" or "knowledge that the fraudulent claim was false." However, it seems "reckless indifference to the truth" is enough to satisfy this in many jurisdictions.[1]

New York is famous for the Martin Act, which outlaws both criminal and civil securities fraud without having any intent requirement at all.[2] (This is actually quite important because a high percentage of all securities transactions go through New York at some point, so NY gets to use this law to prosecute transactions that occur basically anywhere).

The action most equivalent to civil fraud is misrepresentation of material facts/fraudulent misrepresentation. This seems a bit more likely than criminal law to accept "reckless indifference" as a substitute for actually knowing that the relevant claim was false.[3] For example, the Federal False Claims Act makes you liable if you display "deliberate ignorance" or "reckless disregard of the truth" even if you don't knowingly make a false claim.[4]

However, in at least some jurisdictions you can bring a civil claim for negligent misrepresentation of material facts, which seems to basically amount to fraud but with a negligence standard, not an intent standard.[5]


P.S. Note that we seem to be discussing the aspect of "intent" pertaining to whether the defendant knew the relevant statement was false. There's also often a required intent to deceive or harm in both the criminal and civil context (I'd guess the requirement is a bit weaker in civil law).

------

[1] "Fraudulent intent is shown if a representation is made with reckless indifference to its truth or falsity." https://www.justice.gov/jm/criminal-resource-manual-949-proof-fraudulent-intent

[2] "In some instances, particularly those involving civil actions for fraud and securities cases, the intent requirement is met if the prosecution or plaintiff is able to show that the false statements were made recklessly—that is, with complete disregard for truth or falsity."

[3] https://en.wikipedia.org/wiki/False_Claims_Act#1986_changes

[4] "Notably, in order to secure a conviction, the state is not required to prove scienter (except in connection with felonies) or an actual purchase or sale or damages resulting from the fraud.[2]

***

.In 1926, the New York Court of Appeals held in People v. Federated Radio Corp. that proof of fraudulent intent was unnecessary for prosecution under the Act.[8] In 1930, the court elaborated that the Act should "be liberally and sympathetically construed in order that its beneficial purpose may, so far as possible, be attained."[9]

https://en.wikipedia.org/wiki/Martin_Act#Investigative_Powers

[5] "Although a misrepresentation fraud case may not be based on negligent or accidental misrepresentations, in some instances a civil action may be filed for negligent misrepresentation. This tort action is appropriate if a defendant suffered a loss because of the carelessness or negligence of another party upon which the defendant was entitled to rely. Examples would be negligent false statements to a prospective purchaser regarding the value of a closely held company’s stock or the accuracy of its financial statements." https://www.acfe.com/uploadedFiles/Shared_Content/Products/Self-Study_CPE/Fraud-Trial-2011-Chapter-Excerpt.pdf

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T18:16:58.981Z · score: 3 (2 votes) · LW · GW

Thanks! Forgot about that post.

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T17:51:20.547Z · score: 1 (1 votes) · LW · GW

I'm not sure I understand what you mean by "something to protect." Can you give an example?

[Answered by habryka]

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T17:48:57.723Z · score: 9 (2 votes) · LW · GW

[Possibly digging a bit too far into the specifics so no worries if you'd rather bow out.]

Do you think these confusions[1] are fairly evenly dispersed throughout the community (besides what you already mentioned: "People semi-frequently have them at the beginning and then get over them.")?

Two casual observations: (A) the confusions seem less common among people working full-time at EA/Rationalist/x-risk/longtermist organisations than among other people who "take singularity scenarios seriously."[2] (B) I'm very uncertain but they also seem less prevalent to me in the EA community than the rationalist community (to the extent the communities can be separated).[3] [4]

Do A and B sound right to you? If so, do you have a take on why that is?

If A or B *is* true, do you think this is in any part caused by the respective groups taking the singularity [/x-risk/the future/the stakes] less seriously? If so, are there important costs from this?


[1] Using your word while withholding my own judgment as to whether every one of these is actually a confusion.

[2] If you're right that a lot of people have them at the beginning and then get over them, a simple potential explanation would be that by the time you're working at one of these orgs, that's already happened.

Other hypotheses: (a) selection effects; (b) working FT in the community gives you additional social supports and makes it more likely others will notice if you start spiraling; (c) the cognitive dissonance with the rest of society is a lot of what's doing the damage. It's easier to handle this stuff psychologically if the coworkers you see every day also take the singularity seriously.[i]

[3] For example, perhaps less common at Open Phil, GPI, 80k, and CEA than at CFAR and MIRI, but I also think this holds outside of professional organisations.

[4] One potential reason for this is that a lot of EA ideas are more "in the air" than rationalist/singularity ones. So a lot of EAs may have had their 'crisis of faith' before arriving in the community. (For example, I know plenty of EAs (myself included) who did some damage to themselves in their teens or early twenties by "taking Peter Singer really seriously.")

[i] I've seen this kind of dissonance offered as a (partial) explanation of why PTSD has become so common among veterans & why it's so hard for them to reintegrate after serving a combat tour. No clue if the source is reliable/widely held/true. It's been years but I think I got it from Odysseus in America or perhaps its predecessor, Achilles in Vietnam.

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T17:16:58.935Z · score: 7 (5 votes) · LW · GW
My closest current stab is that we’re the “Center for Bridging between Common Sense and Singularity Scenarios.”

[I realise there might not be precise answers to a lot of these but would still be interested in a quick take on any of them if anybody has one.]

Within CFAR, how much consensus is there on this vision? How stable/likely to change do you think it is? How long has this been the vision (alternatively, how long have you been playing with this vision)? Is it possible to describe what the most recent previous vision was?

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T17:15:02.891Z · score: 4 (1 votes) · LW · GW

This seemed really useful. I suspect you're planning to write up something like this at some point down the line but wanted to suggest posting this somewhere more prominent in the meantime (otoh, idea inoculation, etc.).

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T16:32:22.905Z · score: 17 (8 votes) · LW · GW
The need to coordinate in this way holds just as much for consequentialists or anyone else.

I have a strong heuristic that I should slow down and throw a major warning flag if I am doing (or recommending that someone else do) something I believe would be unethical if done by someone not aiming to contribute to a super high impact project. I (weakly) believe more people should use this heuristic.

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T16:16:38.196Z · score: 6 (4 votes) · LW · GW

Thanks for writing this up. Added a few things to my reading list and generally just found it inspiring.

Things like PJ EBY's excellent ebook.

FYI - this link goes to an empty shopping cart. Which of his books did you mean to refer to?

The best links I could find quickly were:

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-25T16:06:19.471Z · score: 15 (6 votes) · LW · GW
I think I also damaged something psychologically, which took 6 months to repair.

I've been pretty curious about the extent to which circling has harmful side effects for some people. If you felt like sharing what this was, the mechanism that caused it, and/or how it could be avoided I'd be interested.

I expect, though, that this is too sensitive/personal so please feel free to ignore.

Comment by howie-lempel on Maybe Lying Doesn't Exist · 2019-12-25T15:26:49.951Z · score: 12 (3 votes) · LW · GW

Note that criminal intent is *not* required for a civil fraud suit which could be brought simultaneously with or after a criminal proceeding.

Comment by howie-lempel on We run the Center for Applied Rationality, AMA · 2019-12-22T15:08:02.915Z · score: 12 (8 votes) · LW · GW

"For example, we spent a bunch of time circling for a while"

Does this imply that CFAR now spends substantially less time circling? If so and there's anything interesting to say about why, I'd be curious.

Comment by howie-lempel on Ben Hoffman's donor recommendations · 2018-06-25T22:42:54.238Z · score: 6 (3 votes) · LW · GW

This doesn't look to me like an argument that there is so much funging between EA Funds and GiveWell recommended charities that it's odd to spend attention distinguishing between them? For people with some common sets of values (e.g. long-termist, placing lots of weight on the well-being of animals) it doesn't seem like there's a decision-relevant amount of funging between GiveWell recommendations and the EA Fund they would choose. Do we disagree about that?

I guess I interpreted Rob's statement that "the EA Funds are usually a better fallback option than GiveWell" as shorthand for "the EA Fund relevant to your values is in expectation a better fallback option than GiveWell." "The EA Fund relevant to your values" does seem like a useful abstraction to me.

Comment by howie-lempel on Ben Hoffman's donor recommendations · 2018-06-22T20:44:04.660Z · score: 6 (3 votes) · LW · GW

Here's a potentially more specific way to get at what I mean.

Let's say that somebody has long-termist values and believes that the orgs supported by the Long Term Future EA Fund in expectation have a much better impact on the long-term future than GW recommended charities. In particular, let's say she believes that (absent funging) giving $1 to the EA Long Term Future Fund would be as valuable as giving $100 to GW recommended charities.

You're saying that she should reduce her estimate because Open Phil may change its strategy or the blog post may be an imprecise guide to Open Phil's strategy so there's some probability that giving $1 to GW recommended charities could cause Open Phil to reallocate some money from GW recommended charities toward the orgs funded by the Long Term Future Fund.

In expectation, how much money do you think is reallocated from GW recommended charities toward orgs like those funded by the Long Term Future Fund for every $1 given to GW recommended charities? In other words, by what percent should this person adjust down their estimate of the difference in effectiveness?

Personally, I'd guess it's lower than 15% and I'd be quite surprised to hear you say you think it's as high as 33%. This would still leave a difference that easily clears the bar for "large enough to pay attention to."
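[To make the arithmetic concrete, here's a rough sketch under the illustrative assumptions above, where the Fund is valued at 100x per dollar and f is the expected amount reallocated away from GW-style charities toward Long Term Future Fund-style orgs per $1 given to GW recommended charities.]

$$\text{value of \$1 given to GW} \approx (1-f)\cdot 1 + f\cdot 100 = 1 + 99f \quad \text{(in GW-charity dollar-equivalents)}$$

$$\text{remaining gap} = \frac{100}{1+99f} \approx 6.3 \text{ at } f=0.15, \qquad \approx 3.0 \text{ at } f=0.33$$

So even at the higher of those two guesses, the adjusted difference stays well above 1:1, which is all the "easily clears the bar" claim needs.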

Fwiw, to the extent that donors to GW are getting funged, I think it's much more likely that they are funging with other developing world interventions (e.g. one recommended org hits diminishing returns and so funding already targeted toward developing world interventions goes to a different developing world health org instead).

I'm guessing that you have other objections to EA Funds (some of which I think are expressed in the posts you linked although I haven't had a chance to reread them). Is it possible that funging with GW top charities isn't really your true objection?

Comment by howie-lempel on Ben Hoffman's donor recommendations · 2018-06-22T18:05:59.796Z · score: 12 (3 votes) · LW · GW

I see you as arguing that GW/Open Phil might change its strategic outlook in the future and that their disclosures aren't high precision so we can't rule out that (at some point in the future or even today) giving to GW recommended charities could lead Open Phil to give more to orgs like those in the EA Funds.

That doesn't strike me as sufficient to argue that GW recommended charities funge so heavily against EA funds that it's "odd to spend attention distinguishing them, vs spending effort distinguishing substantially different strategies."

Comment by howie-lempel on Ben Hoffman's donor recommendations · 2018-06-22T01:18:17.378Z · score: 21 (6 votes) · LW · GW

What's the reason to think EA Funds (other than the global health and development one) currently funges heavily with GiveWell recommended charities? My guess would have been that increased donations to GiveWell's recommended charities would not cause many other donors (including Open Phil or Good Ventures) to give instead to orgs like those supported by the Long-Term Future, EA Community, or Animal Welfare EA Funds.

In particular, to me this seems in tension with Open Phil's last public writing on its current thinking about how much to give to GW recommendations versus these other cause areas ("world views" in Holden's terminology). In his January "Update on Cause Prioritization at Open Philanthropy," Holden wrote:

"We will probably recommend that a cluster of 'long-termist' buckets collectively receive the largest allocation: at least 50% of all available capital. . . .
We will likely recommend allocating something like 10% of available capital to a “straightforward charity” bucket (described more below), which will likely correspond to supporting GiveWell recommendations for the near future."

There are some slight complications here but overall it doesn't seem to me that Open Phil/GV's giving to long-termist areas is very sensitive to other donors' decisions about giving to GW's recommended charities. Contra Ben H, I therefore think it does currently make sense for donors to spend attention distinguishing between EA Funds and GW's recommendations.

For what it's worth, there might be a stronger case that EA Funds funges against long-termist/EA community/Animal welfare grants that Open Phil would otherwise make but I think that's actually an effect with substantially different consequences.

[Disclosure - I formerly worked at GiveWell and Open Phil but haven't worked there for over a year and I don't think anything in this comment is based on any specific inside information.]

[Edited to make my disclosure slightly more specific/nuanced.]