Sharing Information About Nonlinear
post by Ben Pace (Benito) · 2023-09-07T06:51:11.846Z · LW · GW
Added (11th Sept): Nonlinear have commented that they intend to write a response [EA(p) · GW(p)], have written a short follow-up [LW · GW], and claim that they dispute 85 claims in this post. I'll link here to that if-and-when it's published.
Added (11th Sept): One of the former employees, Chloe, has written a lengthy comment [LW(p) · GW(p)] personally detailing some of her experiences working at Nonlinear and the aftermath.
Added (12th Sept): I've made 3 relatively minor edits to the post. I'm keeping a list of all edits at the bottom of the post, so if you've read the post already, you can just go to the end to see the edits.
Added (15th Sept): I've written a follow-up post [LW · GW] saying that I've finished working on this investigation and do not intend to work more on it in the future. The follow-up also has a bunch of reflections on what led up to this post.
Added (22nd Dec): Nonlinear has written a lengthy reply, which you can read here [LW · GW].
Epistemic status: Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits. I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share, and then update based on how much worse (or better) it is than you expect I could produce. (See section 5 of this post about Mistakes with Conservation of Expected Evidence [LW · GW] for more on this.) This seems like a worthwhile exercise for at least non-zero people to do in the comments before reading on. (You can condition on me finding enough to be worth sharing, but also note that I think I have a relatively low bar for publicly sharing critical info about folks in the EA/x-risk/rationalist/etc ecosystem.)
tl;dr: If you want my important updates quickly summarized in four claims-plus-probabilities, jump to the section near the bottom titled "Summary of My Epistemic State".
When I used to manage the Lightcone Offices [LW · GW], I spent a fair amount of time and effort on gatekeeping — processing applications from people in the EA/x-risk/rationalist ecosystem to visit and work from the offices, and making decisions. Typically this would involve reading some of their public writings, and reaching out to a couple of their references that I trusted and asking for information about them. A lot of the people I reached out to were surprisingly great at giving honest references about their experiences with someone and sharing what they thought about someone.
One time, Kat Woods and Drew Spartz from Nonlinear applied to visit. I didn't know them or their work well, except that from a few brief interactions Kat Woods seemed high-energy, with a more optimistic outlook on life and work than most people I encounter.
I reached out to some references Kat listed, which were positive to strongly positive. However, I also got a strongly negative reference: someone else who I informed about the decision told me they knew former employees who felt taken advantage of around things like salary. The former employees reportedly didn't want to come forward due to fear of retaliation and a general wish to get away from the whole thing, and the reports felt very vague and hard for me to concretely visualize, but the person nonetheless strongly recommended against inviting Kat and Drew.
I didn't feel like this was a strong enough reason to bar someone from a space — or rather, I did, but vague anonymous descriptions of very bad behavior being sufficient to ban someone is a system that can be straightforwardly abused, so I don't want to use such a system. Furthermore, I was interested in getting my own read on Kat Woods from a short visit — she had only asked to visit for a week. So I accepted, though I informed her that this weighed on my mind. (This is a link to the decision email I sent to her.)
(After making that decision I was also linked to this ominous yet still vague EA Forum thread [EA(p) · GW(p)], that includes a former coworker of Kat Woods saying they did not like working with her, more comments like the one I received above, and links to a lot of strongly negative Glassdoor reviews for Nonlinear Cofounder Emerson Spartz's former company "Dose". Note that more than half of the negative reviews are for the company after Emerson sold it, but this is a concerning one from 2015 (while Emerson Spartz was CEO/Cofounder): "All of these super positive reviews are being commissioned by upper management. That is the first thing you should know about Spartz, and I think that gives a pretty good idea of the company's priorities… care more about the people who are working for you and less about your public image". A 2017 review says "The culture is toxic with a lot of cliques, internal conflict, and finger pointing." There are also far worse reviews about a hellish work place which are very worrying, but they're from the period after Emerson's LinkedIn says he left, so I'm not sure to what extent he is responsible for them.)
On the first day of her visit, another person in the office privately reached out to me saying they were extremely concerned about having Kat and Drew in the office, and that they knew two employees who had had terrible experiences working with them. They wrote (and we later discussed it more):
Their company Nonlinear has a history of illegal and unethical behavior, where they will attract young and naive people to come work for them, and subject them to inhumane working conditions when they arrive, fail to pay them what was promised, and ask them to do illegal things as a part of their internship. I personally know two people who went through this, and they are scared to speak out due to the threat of reprisal, specifically by Kat Woods and Emerson Spartz.
This sparked (for me) a 100-200 hour investigation where I interviewed 10-15 people who interacted or worked with Nonlinear, read many written documents and tried to piece together some of what had happened.
My takeaway is that indeed their two in-person employees had quite horrendous experiences working with Nonlinear, and that Emerson Spartz and Kat Woods are significantly responsible both for the harmful dynamics and for the employees’ silence afterwards. Over the course of investigating Nonlinear I came to believe that the former employees there had no legal employment, tiny pay, a lot of isolation due to travel, had implicit and explicit threats of retaliation made if they quit or spoke out negatively about Nonlinear, simultaneously received a lot of (in my opinion often hollow) words of affection and claims of familial and romantic love, experienced many further unpleasant or dangerous experiences that they wouldn’t have if they hadn’t worked for Nonlinear, and needed several months to recover with friends and family afterwards before they felt able to return to work.
(Note that I don't think the pay situation as described in the above quoted text was entirely accurate: the pay was very small — $1k/month — and employees implicitly expected they would get more than they did, but there was mostly not salary 'promised' that didn't get given out.)
After first hearing from them about their experiences, I still felt unsure about what was true — I didn’t know much about the Nonlinear cofounders, and I didn’t know which claims about the social dynamics I could be confident of. To get more context, I spent about 30+ hours on calls with 10-15 different people who had some professional dealings with at least one of Kat, Emerson and Drew, trying to build up a picture of the people and the org, and this helped me a lot in building my own sense of them by seeing what was common to many people’s experiences. I talked to many people who interacted with Emerson and Kat who had many active ethical concerns about them and strongly negative opinions, and I also had a 3-hour conversation with the Nonlinear cofounders about these concerns, and I now feel a lot more confident about a number of dynamics that the employees reported.
For most of these conversations I offered strict confidentiality, but (with the ex-employees’ consent) I’ve here written down some of the things I learned.
In this post I do not plan to name most of the people I talked to, but two former employees I will call “Alice” and “Chloe”. I think the people involved mostly want to put this time in their life behind them and I would encourage folks to respect their privacy, not name them online, and not talk to them about it unless you’re already good friends with them.
Conversation with Kat on March 7th, 2023
Returning to my initial experience: on the Tuesday of their visit, I still wasn’t informed about who the people were or any details of what happened, but I found an opportunity to chat with Kat over lunch.
After catching up for ~15 mins, I indicated that I'd be interested in talking about the concerns I raised in my email, and we talked in a private room for 30-40 mins. As soon as we sat down, Kat launched straight into stories about two former employees of hers, telling me repeatedly not to trust one of the employees (“Alice”), that she has a terrible relationship with truth, that she's dangerous, and that she’s a reputational risk to the community. She said the other employee ("Chloe") was “fine”.
Kat Woods also told me that she expected to have a policy with her employees of "I don't say bad things about you, you don't say bad things about me". I am strongly against this kind of policy on principle (as I told her then). This and other details (e.g. the salary policy) raised further red flags for me, and I wanted to understand what happened.
Here’s an overview of what she told me:
- When they worked at Nonlinear, Alice and Chloe had expenses covered (room, board, food) and Chloe also got a monthly bonus of $1k/month.
- Alice and Chloe lived in the same house as Kat, Emerson and Drew. Kat said that she has decided to not live with her employees going forward.
- She said that Alice, who incubated her own project (here is a description of the incubation program on Nonlinear's site), was able to set her own salary, and that Alice almost never talked to her (Kat) or her other boss (Emerson) about her salary.
- Kat doesn’t trust Alice to tell the truth, and that Alice has a history of “catastrophic misunderstandings”.
- Kat told me that Alice was unclear about the terms of the incubation, and said that Alice should have checked in with Kat in order to avoid this miscommunication.
- Kat suggested that Alice may have quit in substantial part due to Kat missing a check-in call over Zoom toward the end.
- Kat said that she hoped Alice would go by the principle of “I don’t say bad things about you, you don’t say bad things about me” but that the employee wasn’t holding up her end and was spreading negative things about Kat/Nonlinear.
- Kat said she gives negative references for Alice, advises people “don't hire her” and not to fund her, and “she’s really dangerous for the community”.
- She said she didn't have these issues with her other employee Chloe, who she described as "fine, just miscast" for her role of "assistant / operations manager", which is what led to her quitting. Kat said Chloe was pretty skilled but did a lot of menial labor tasks for Kat that she didn't enjoy.
- The one negative thing she said about Chloe was that she was being paid the equivalent of $75k[1] per year (only $1k/month, the rest via room and board), but that at one point she asked for $75k on top of all expenses being paid and that was out of the question.[2]
A High-Level Overview of The Employees’ Experience with Nonlinear
Background
The core Nonlinear staff are Emerson Spartz, Kat Woods, and Drew Spartz.
Kat Woods has been in the EA ecosystem for at least 10 years, cofounding Charity Science in 2013 and working there until 2019. After a year at Charity Entrepreneurship, in 2021 she cofounded [EA · GW] Nonlinear with Emerson Spartz, where she has worked for 2.5 years.
Nonlinear has received $599,000 from the Survival and Flourishing Fund in the first half of 2022, and $15,000 from Open Philanthropy in January 2022.
Emerson primarily funds the project through his personal wealth from his previous company Dose and from selling Mugglenet.com (which he founded). Emerson and Kat are romantic partners, and Emerson and Drew are brothers. They all live in the same house and travel across the world together, jumping from AirBnb to AirBnb once or twice per month. The staff they hire are either remote, or live in the house with them.
My current understanding is that they've had ~4 remote interns, 1 remote employee, and 2 in-person employees (Alice and Chloe). Alice was the only person to go through their incubator program.
Nonlinear tried to have a fairly high-commitment culture where the long-term staff are involved very closely with the core family unit, both personally and professionally. However, the in-person staff were given exceedingly little financial independence, and a number of the social dynamics involved seem really risky to me.
Alice and Chloe
Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February, and Chloe worked there from January 2022 to July 2022. After talking with them both, I learned the following:
- Neither were legally employed by the non-profit at any point.
- Chloe's and Alice's finances (along with Kat's and Drew's) all came directly from Emerson's personal funds (not from the non-profit). This left them having to get permission for their personal purchases, and they were not able to live apart from the family unit while they worked there; they report feeling very socially and financially dependent on the family during that time.
- Chloe's salary was verbally agreed to come out to around $75k/year. However, she was only paid $1k/month, and otherwise had many basic things compensated (e.g. rent, groceries, travel). This was supposed to make traveling together easier, and supposed to come out to the same salary level. While Emerson did compensate Alice and Chloe with food and board and travel, Chloe does not believe that she was compensated to an amount equivalent to the salary discussed, and I believe no accounting was done for either Alice or Chloe to ensure that any salary matched up. (I've done some spot-checks of the costs of their AirBnbs and travel, and Alice/Chloe's epistemic state seems pretty reasonable to me.)
- Alice joined as the sole person in their incubation program. She moved in with them after meeting Nonlinear at EAG and having a ~4 hour conversation there with Emerson, plus a second Zoom call with Kat. Initially while traveling with them she continued her previous job remotely, but was encouraged to quit and work on an incubated org, and after 2 months she quit her job and started working on projects with Nonlinear. Over the 8 months she was there Alice claims she received no salary for the first 5 months, then (roughly) $1k/month salary for 2 months, and then after she quit she received a ~$6k one-off salary payment (from the funds allocated for her incubated organization). She also had a substantial number of emergency health issues covered.[3]
- Salary negotiations were consistently a major stressor for Alice’s entire time at Nonlinear. Over her time there she spent through all of her financial runway, and spent a significant portion of her last few months there financially in the red (having more bills and medical expenses than the money in her bank account) in part due to waiting on salary payments from Nonlinear. She eventually quit due to a combination of running exceedingly low on personal funds and wanting financial independence from Nonlinear, and as she quit she gave Nonlinear (on their request) full ownership of the organization that she had otherwise finished incubating.
- From talking with both Alice and Nonlinear, it turned out that since the end of February Kat Woods had thought of Alice as an employee that she managed, but that Emerson had not thought of Alice as an employee at all, primarily just as someone who was traveling with them and collaborating because she wanted to, and that the $1k/month plus other compensation was a generous gift.
- Alice and Chloe reported that Kat, Emerson, and Drew created an environment in which being a valuable member of Nonlinear meant being entrepreneurial and creative in problem-solving. In practice, getting around standard social rules to get what you wanted was strongly encouraged, including getting someone's favorite table at a restaurant by pressuring the staff, and finding loopholes in laws pertaining to their work. This also applied internally to the organization. Alice and Chloe report being pressured into or convinced to take multiple actions that they seriously regretted whilst working for Nonlinear, such as becoming very financially dependent on Emerson, quitting being vegan, and driving without a license in a foreign country for many months. (To be clear, I'm not saying that these laws are good and that breaking them is bad; I'm saying that it sounds to me from their reports like they were convinced to take actions that could have had severe personal downsides, such as jail time in a foreign country, and that these are actions they confidently believe they would not have taken had it not been for the strong pressures they felt from the Nonlinear cofounders and the adversarial social environment internal to the company.) I'll describe these events in more detail below.
- They both report taking multiple months to recover after ending ties with Nonlinear, before they felt able to work again, and both describe working there as one of the worst experiences of their lives.
- They both report being actively concerned about professional and personal retaliation from Nonlinear for speaking to me, and told me stories and showed me some texts that led me to believe that was a very credible concern.
An assortment of reported experiences
There are a lot of parts of their experiences at Nonlinear that these two staff found deeply unpleasant and hurtful. I will summarize a number of them below.
I think many of the things that happened are warning flags, and I also think that there are some red lines; I'll discuss which ones I consider red lines in my takeaways at the bottom of this post.
My Level of Trust in These Reports
Most of the dynamics were described to me as accurate by multiple different people (low pay, no legal structure, isolation, some elements of social manipulation, intimidation), leading me to have high confidence in them, and Nonlinear themselves confirmed various parts of these accounts.
People whose word I would meaningfully update on about this sort of thing have vouched for Chloe’s word as reliable.
The Nonlinear staff and a small number of other people who visited during Alice and Chloe’s employment have strongly questioned Alice’s trustworthiness and suggested she has told outright lies. Nonlinear showed me texts where people who had spoken with Alice came away with the impression that she was paid $0 or $500, which is inaccurate (she was paid ~$8k on net, as she told me).
That said, I personally found Alice very willing and ready to share primary sources with me upon request (texts, bank info, etc), so I don’t believe her to be acting in bad faith.
In my first conversation with her, Kat claimed that Alice had many catastrophic miscommunications, but that Chloe was (quote) “fine”. In general nobody questioned Chloe’s word and broadly the people who told me they questioned Alice’s word said they trusted Chloe’s.
Personally I found all of their fears of retaliation to be genuine and earnest, and in my opinion justified.
Why I’m sharing these
I do have a strong heuristic that says consenting adults can agree to all sorts of things that eventually hurt them (i.e. in accepting these jobs), even if I paternalistically might think I could have prevented them from hurting themselves. That said, I see clear reasons to think that Kat and Emerson intimidated these people into accepting some of the actions or dynamics that hurt them, so some parts do not seem obviously consensual to me.
Separate from that, I think it’s good for other people to know what they’re getting into, so I think sharing this info is good because it is relevant for many people who have any likelihood of working with Nonlinear. And most importantly to me, I especially want to do it because it seems to me that Nonlinear has tried to prevent this negative information from being shared, so I am erring strongly on the side of sharing things.
(One of the employees also wanted to say something about why she contributed to this post, and I've put it in a footnote here.[4])
Highly dependent finances and social environment
Everyone lived in the same house. Emerson and Kat would share a room, and the others would make do with what else was available, often sharing bedrooms.
Nonlinear primarily moved around countries where they typically knew no locals and the employees regularly had nobody to interact with other than the cofounders, and employees report that they were denied requests to live in a separate AirBnb from the cofounders.
Alice and Chloe report that they were advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited. Alice and Chloe report this made them very socially dependent on Kat/Emerson/Drew and otherwise very isolated.
The employees were very unclear on the boundaries of what would and wouldn’t be paid for by Nonlinear. For instance, Alice and Chloe report that they once spent several days driving around Puerto Rico looking for cheaper medical care for one of them before presenting it to senior staff, as they didn’t know whether medical care would be covered, so they wanted to make sure that it was as cheap as possible to increase the chance of senior staff saying yes.
The financial situation is complicated and messy. This is in large part due to them doing very little accounting. In summary, Alice spent a lot of her last 2 months with less than €1000 in her bank account, sometimes having to phone Emerson for immediate transfers to be able to cover medical costs when she was visiting doctors. At the time of her quitting she had €700 in her account, which was not enough to cover her bills at the end of the month, and which left her quite scared. Though to be clear, she was paid back ~€2900 of her outstanding salary by Nonlinear within a week, in part due to her strongly requesting it. (The relevant thing here is the extremely high level of financial dependence and wealth disparity; Alice does not claim that Nonlinear failed to pay her.)
One of the central reasons Alice says that she stayed on this long was because she was expecting financial independence with the launch of her incubated project, which had $100k allocated to it (fundraised from FTX). In her final month there Kat informed her that while she would work quite independently, they would keep the money in the Nonlinear bank account and she would have to ask for it, meaning she wouldn't have the financial independence from them that she had been expecting; learning this was what caused Alice to quit.
One of the employees interviewed Kat about her productivity advice, and shared notes from this interview with me. The employee writes:
During the interview, Kat openly admitted to not being productive but shared that she still appeared to be productive because she gets others to do work for her. She relies on volunteers who are willing to do free work for her, which is her top productivity advice.
The employees report that some interns later gave strongly negative feedback on working unpaid, and so Kat decided that she would no longer have interns at all.
Severe downsides threatened if the working relationship didn’t work out
In a conversation between Emerson Spartz and one of the employees, the employee asked for advice for a friend who wanted to find another job while employed, without letting their current employer know about their decision to leave yet. Emerson reportedly immediately stated that he now had to update towards considering that the employee herself was considering leaving Nonlinear. He went on to tell her that he gets mad at his employees who leave his company for other jobs that are equally good or less good; he said he understands if employees leave for clearly better opportunities. The employee reports that this led her to be very afraid of leaving the job, both because of the way Emerson updated towards thinking she was trying to leave, and because of the notion of Emerson being retaliatory towards employees who leave for "bad reasons".
For background context on Emerson's business philosophy: Alice quotes Emerson advising the following indicator of work progress: "How much value are you able to extract from others in a short amount of time?"[5] Another person who visited described Emerson to me as "always trying to use all of his bargaining power". Chloe told me that, when she was negotiating salaries with external partners on behalf of Nonlinear, Emerson advised her to offer "the lowest number you can get away with".
Many different people reported that Emerson Spartz would boast about his business negotiation tactics to employees and visitors. He would encourage his employees to read many books on strategy and influence. When they read the book The 48 Laws of Power he would give examples of him following the "laws" in his past business practices.
One story that he told to both employees and visitors was about his intimidation tactics when involved in a conflict with a former teenage mentee of his, Adorian Deck.
(For context on the conflict, here are links to articles written about it at the time: Hollywood Reporter, Jacksonville, Technology & Marketing Law Blog, and Emerson Spartz's Tumblr. Plus here is the Legal Contract they signed that Deck later sued to undo.)
In brief, Adorian Deck was a 16 year-old who (in 2009) made a Twitter account called “OMGFacts” that quickly grew to having 300,000+ followers. Emerson reached out to build companies under the brand, and agreed to a deal with Adorian. Less than a year later Adorian wanted out of the deal, claiming that Emerson had made over $100k of profits and he’d only seen $100, and sued to end the deal.
According to Emerson, it turned out that there's a clause unique to California (due to the acting profession in Los Angeles) under which even if a minor and their parent sign a contract, it isn't valid unless the signing is overseen by a judge, and so Deck was able to simply pull out of the deal.
But to this day Emerson’s company still owns the OMGfacts brand and companies and Youtube channels.
(Sidenote: I am not trying to make claims about who was "in the right" in these conflicts; I am reporting these as examples of the negotiation tactics that Emerson reportedly engages in and actively endorses during conflicts.)
Emerson told versions of this story to different people who I spoke to (people reported him as ‘bragging’).
In one version, he claimed that he strong-armed Adorian and his mother with endless legal threats and they backed down and left him with full control of the brand. This person I spoke to couldn’t recall the details but said that Emerson tried to frighten Deck and his mother, and that they (the person Emerson was bragging to) found it “frightening” and thought the behavior was “behavior that’s like 7 standard deviations away from usual norms in this area.”
Another person was told the story in the context of the 2nd Law from "48 Laws of Power", which is "Never put too much trust in friends, learn how to use enemies". The summary includes:
“Be wary of friends—they will betray you more quickly, for they are easily aroused to envy. They also become spoiled and tyrannical… you have more to fear from friends than from enemies.”
For this person who was told the Adorian story, the thing that resonated most was Emerson's claim that he was in a close, mentoring relationship with Adorian, and leveraged knowing him so well that he would know "exactly where to go to hurt him the most" so that Deck would back off. In that version of the story, Emerson says that Deck's life-goal was to be a YouTuber (which is indeed Deck's profession to this day — he produces about 4 videos a month), and that he strategically contacted the YouTubers that Deck most admired, and told them stories of Deck being lazy and trying to take credit for all of Emerson's work. He reportedly threatened to do more of this until Deck relented, and this is why Deck gave up the lawsuit. The person said to me "He loved him, knew him really well, and destroyed him with that knowledge."[6]
I later spoke with Emerson about this. He does say that he was working with the top YouTubers to create videos exposing Deck, and this is what brought Deck back to the negotiating table. He says that he ended up renegotiating a contract where Deck receives $10k/month for 7 years. If true, I think this final deal reflects positively on Emerson, though I still believe the people he spoke to were actively scared by their conversations with Emerson on this subject. (I have neither confirmed the existence of the contract nor heard Deck’s side of the story.)
He reportedly told another negotiation story about his response to getting scammed in a business deal. I won't go into the details, but reportedly he paid a high price for the rights to a logo/trademark, only to find that he had not read the fine print and had been sold something far less valuable. He gave it as an example of the "Keep others in suspended terror: cultivate an air of unpredictability" strategy from The 48 Laws of Power:
Be deliberately unpredictable. Behavior that seems to have no consistency or purpose will keep them off-balance, and they will wear themselves out trying to explain your moves. Taken to an extreme, this strategy can intimidate and terrorize.
In that business negotiation, he (reportedly) acted unhinged. According to the person I spoke with, he said he’d call the counterparty and say “batshit crazy things” and yell at them, with the purpose of making them think he’s capable of anything, including dangerous and unethical things, and eventually they relented and gave him the deal he wanted.
Someone else I spoke to reported him repeatedly saying that he would be “very antagonistic” toward people he was in conflict with. He reportedly gave the example that, if someone tried to sue him, he would be willing to go into legal gray areas in order to “crush his enemies” (a phrase he apparently used a lot), including hiring someone to stalk the person and their family in order to freak them out. (Emerson denies having said this, and suggests that he was probably describing this as a strategy that someone else might use in a conflict that one ought to be aware of.)
After Chloe eventually quit, Alice reports that Kat/Emerson would “trash talk” her, saying she was never an “A player”, criticizing her on lots of dimensions (competence, ethics, drama, etc) in spite of previously primarily giving Chloe high praise. This reportedly happened commonly toward other people who ended or turned down working together with Nonlinear.
Here are some texts between Kat Woods and Alice shortly after Alice had quit, before the final salary had been paid.
A few months later, some more texts from Kat Woods.
(I can corroborate that it was difficult to directly talk with the former employee and it took a fair bit of communication through indirect social channels before they were willing to identify themselves to me and talk about the details.)
Effusive positive emotion not backed up by reality, and other manipulative techniques
Multiple people who worked with Kat reported that Kat had a pattern of enforcing arbitrary short deadlines on people in order to get them to make the decision she wants e.g. “I need a decision by the end of this call”, or (in an email to Alice) “This is urgent and important. There are people working on saving the world and we can’t let our issues hold them back from doing their work.”
Alice reported feeling emotionally manipulated. She said she got constant compliments from the founders that ended up seeming fake.
Alice wrote down a string of the compliments at the time from Kat Woods (said out loud and that Alice wrote down in text), here is a sampling of them that she shared with me:
“You’re the kind of person I bet on, you’re a beast, you’re an animal, I think you are extraordinary"
"You can be in the top 10, you really just have to think about where you want to be, you have to make sacrifices to be on the top, you can be the best, only if you sacrifice enough"
"You’re working more than 99% because you care more than 99% because you’re a leader and going to save the world"
"You can’t fail if you commit to [this project], you have what it takes, you get sh*t done and everyone will hail you in EA, finally an executor among us."
Alice reported that she would get these compliments near-daily. She eventually had the sense that this was said in order to get something out of her. She reported that one time, after a series of such compliments, Kat Woods then turned and recorded a near-identical series of compliments into her phone for a different person.
Kat Woods reportedly several times cried while telling Alice that she wanted Alice in her life forever and was worried that Alice might one day not be in her life.
Other times, when Alice would come to Kat with money troubles and ask for a pay rise, Alice reports that Kat would tell her that this was a psychological issue and that she actually had safety (for instance, she could move back in with her parents), so she didn't need to worry.
Alice also reports that she was explicitly advised by Kat Woods to cry and look cute when asking Emerson Spartz for a salary improvement, in order to get the salary improvement that she wanted, and was told this was a reliable way to get things from Emerson. (Alice reports that she did not follow this advice.)
Many other strong personal costs
Alice quit being vegan while working there. She was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days. Alice eventually gave in and ate non-vegan food in the house. She also said that the Nonlinear cofounders marked her quitting veganism as a 'win', as they had been arguing that she should not be vegan.
(Nonlinear disputes this, and says that they did go out and buy her some vegan burgers and had some vegan food in the house. They agree that she quit being vegan at this time, and say it was because being vegan was unusually hard due to being in Puerto Rico. Alice disputes that she received any vegan burgers.)
Alice said that this generally matched how she and Chloe were treated in the house, as people generally not worth spending time on, because they were ‘low value’ (i.e. in terms of their hourly wage), and that they were the people who had to do chores around the house (e.g. Alice was still asked to do house chores during the period where she was sick and not eating).
By the same reasoning, the employees reported that they were given 100% of the menial tasks around the house (cleaning, tidying, etc) due to their lower value of time to the company. For instance, if a cofounder spilled food in the kitchen, the employees would clean it up. This was generally reported as feeling very demeaning.
Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world”, but couldn't stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization. Alice didn't become monogamous. Alice reports that Kat became increasingly cold over multiple months, and was very hard to work with.[7]
Alice reports then taking a vacation to visit her family, and trying to figure out how to repair the relationship with Kat. Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free”. Privately, Drew said that Kat would “love her forever” if she did this. I bring this up as an example of the sorts of requests that Kat/Emerson/Drew felt comfortable making during Alice’s time there.
Chloe was hired by Nonlinear with the intent to have her do executive assistant tasks (this is the job ad she responded to). After being hired and flying out, Chloe was informed that on a daily basis her job would involve driving, e.g. to get groceries when they were in different countries. She explained that she didn't have a driver's license and didn't know how to drive. Kat/Emerson proposed that Chloe learn to drive, and Drew gave her some driving lessons. When Chloe learned to drive well enough in parking lots, she said she was ready to get her license, but she discovered that she couldn't get a license in a foreign country. Kat/Emerson/Drew reportedly didn't seem to think that mattered or was even part of the plan, and strongly encouraged Chloe to just drive without a license to do her work, so she drove ~daily for 1-2 months without a license. (I think this involved physical risks for the employee and bystanders, and also substantial risks of being jailed in a foreign country. Also, Chloe basically never drove Emerson/Drew/Kat; this was primarily solo driving for daily errands.) Eventually Chloe had a minor collision with a street post, and was a bit freaked out because she had no idea what the correct protocols were. She reported that Kat/Emerson/Drew didn't think that this was a big deal, but that Alice (who she was on her way to meet) could clearly see that Chloe was distressed, so Alice drove her home, and Chloe then decided to stop driving.
(Car accidents are the second most common cause of death for people in their age group. Insofar as they were pressured to do this and told that this was safe, I think this involved a pretty cavalier disregard for the safety of the person who worked for them.)
Chloe talked to a friend of hers (someone I know fairly well, and the first person to give me a negative report about Nonlinear), telling them that she was very depressed. When Chloe described her working conditions, her friend was horrified, and said she had to get out immediately since, in the friend's words, "this was clearly an abusive situation". The friend offered to pay for flights out of the country, and tried to convince her to quit immediately. Eventually Chloe made a commitment to book a flight by a certain date and then followed through with that.
Lax on legalities and adversarial business practices
I did not find the time to write much here. For now I’ll simply pass on my impressions.
I generally got a sense from speaking with many parties that Emerson Spartz and Kat Woods have, respectively, very adversarial and very lax attitudes toward legalities and bureaucracies, with Emerson trying to do as little of what is asked of him as possible. If I asked them to fill out paperwork I would expect it to be filled out at least reluctantly, and plausibly deceptively or adversarially in some way. In my current epistemic state, I would be actively concerned about any project in the EA or x-risk ecosystems that relied on Nonlinear doing any accounting, or on them having a reliable legal structure with the basics checked.
Personally, if I were giving Nonlinear funds for any project whatsoever, including for regranting, I’d expect it’s quite plausible (>20%) that they didn’t spend the funds on what they told me, and instead will randomly spend it on some other project. If I had previously funded Nonlinear for any projects, I would be keen to ask Nonlinear for receipts to show whether they spent their funds in accordance with what they said they would.
This is not a complete list
I want to be clear that this is not a complete list of negative or concerning experiences; it is an illustrative list. There are many other things that I was told about that I am not including here due to factors like length and people's privacy (on all sides). Also, I split them into the categories as I see them; someone else might make a different split.
Perspectives From Others Who Have Worked or Otherwise Been Close With Nonlinear
I had hoped to work this into a longer section of quotes, but it seemed like too much back-and-forth with lots of different people. I encourage folks to leave comments with their relevant impressions.
For now I’ll summarize some of what I learned as follows:
- Several people gave reports consistent with Alice and Chloe being very upset and distressed both during and after their time at Nonlinear, and reaching out for help, and seeming really strongly to want to get away from Nonlinear.
- Some unpaid interns (who worked remotely for Nonlinear for 1-3 months) said that they regretted not getting paid, and that when they brought it up with Kat Woods she said some positive sounding things and they expected she would get back to them about it, but that never happened during the rest of their internships.
- Many people who visited had fine experiences with Nonlinear, others felt much more troubled by the experience.
- One person said to me about Emerson/Drew/Kat:
- "My subjective feeling is like 'they seemed to be really bad and toxic people'. And they at the same time have a decent amount of impact. After I interacted repeatedly with them I was highly confused about the dilemma of people who are mistreating other people, but are doing some good."
- Another person said about Emerson:
- “He seems to think he’s extremely competent, a genius, and that everyone else is inferior to him. They should learn everything they can from him, he has nothing to learn from them. He said things close to this explicitly. Drew and (to a lesser extent) Kat really bought into him being the new messiah.”
- One person who has worked for Kat Woods (not Alice or Chloe) said the following:
- I love her as a person, hate her as a boss. She’s fun, has a lot of ideas, really good socialite, and I think that that speaks to how she’s able to get away with a lot of things. Able to wear different masks in different places. She’s someone who’s easy to trust, easy to build social relationships with. I’d be suspicious of anyone who gives a reference who’s never been below Kat in power.
- Ben: Do you think Kat is emotionally manipulative?
- I think she is. I think it’s a fine line about what makes an excellent entrepreneur. Do whatever it takes to get a deal signed. To get it across the line. Depends a lot on what the power dynamics are, whether it’s a problem or not. If people are in equal power structures it’s less of a problem.
There were other informative conversations that I won’t summarize. I encourage folks who have worked with or for Nonlinear to comment with their perspective.
Conversation with Nonlinear
After putting the above together, I got permission from Alice and Chloe to publish, and to share the information I had learned as I saw fit. So I booked a call with Nonlinear, sent them a long list of concerns, and talked with Emerson, Kat and Drew for ~3 hours to hear them out.
Paraphrasing Nonlinear
On the call, they said their primary intention in the call was to convince me that Alice is a bald-faced liar. They further said they’re terrified of Alice making false claims about them, and that she is in a powerful position to hurt them with false accusations.
Afterwards, I wrote up a paraphrase of their responses. I shared it with Emerson and he replied that it was a "Good summary!". Below is the paraphrase of their perspective on things that I sent them, with one minor edit for privacy. (The below is written as though Nonlinear is speaking, but to be clear this is 100% my writing.)
- We hired one person, and kind-of-technically-hired a second person. In doing so, our intention wasn't just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with.
- We nomadically traveled the globe. This can be quite lonely so we put a lot of work into bringing people to us, often having visitors in our house who we supported with flights and accommodation. This probably wasn't perfect but in general we'd describe the environment as "quite actively social".
- For the formal employee, she responded to a job ad, we interviewed her, and it all went the standard way. For the gradually-employed employee, we initially just invited her to travel with us and co-work, as she seemed like a successful entrepreneur and aligned in terms of our visions for improving the world. Over time she quit her existing job and we worked on projects together and were gradually bringing her into our organization.
- We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people. On top of that, to the formal employee we gave a $1k/month salary, and to the semi-formal employee we eventually did the same too. For the latter employee, we roughly paid her ~$8k over the time she worked with us.
- From our perspective, the gradually-hired employee gave a falsely positive impression of her financial and professional situation, suggesting she'd accomplished more than she had and was earning more than she was. She ended up being fairly financially dependent on us and we didn't expect that.
- Eventually, after about 6-8 months each, both employees quit. Overall this experiment went poorly from our perspective and we're not going to try it in future.
- For the formal employee, we're a bit unsure about why exactly she quit, even though we did do exit interviews with her. She said she didn't like a lot of the menial work (which is what we hired her for), but didn't say that money was the problem. We think it is probably related to everyone getting Covid and being kind of depressed around that time.
- For the other employee, relations got bad for various reasons. She ended up wanting total control of the org she was incubating with us, rather than 95% control as we'd discussed, but that wasn't on the table (the org had $250k dedicated to it that we'd raised!), and so she quit.
- When she was leaving, we were financially supportive. On the day we flew back from the Bahamas to London, we paid all our outstanding reimbursements (~$2900). We also offered to pay for her to have a room in London for a week as she got herself sorted out. We also offered her rooms with our friends if she promised not to tell them lies about us behind our backs.
- After she left, we believe she told a lot of lies and inaccurate stories about us. For instance, two people we talked to had the impression that we either paid her $0 or $500, which is demonstrably false. Right now we're pretty actively concerned that she is telling lots of false stories in order to paint us in a negative light, because the relationship didn't work out and she didn't get control over her org (and because her general character seems drama-prone).
There were some points around the experiences of these employees that we want to respond to.
- First; the formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You might think this is a substantial legal risk, but basically it isn't, as you can see here, the general range of fines for issues around not-having-a-license in Puerto Rico is in the range of $25 to $500, which just isn't that bad.
- Second; the semi-employee said that she wasn't supported in getting vegan food when she was sick with Covid, and this is why she stopped being vegan. This seems also straightforwardly inaccurate, we brought her potatoes, vegan burgers, and had vegan food in the house. We had been advising her to 80/20 being a vegan and this probably also weighed on her decision.
- Third; the semi-employee was also asked to bring some productivity-related and recreational drugs over the border for us. In general we didn't push hard on this. For one, this is an activity she already did (with other drugs). For two, we thought it didn't need prescription in the country she was visiting, and when we found out otherwise, we dropped it. And for three, she used a bunch of our drugs herself, so it's not fair to say that this request was made entirely selfishly. I think this just seems like an extension of the sorts of actions she's generally open to.
Finally, multiple people (beyond our two in-person employees) told Ben they felt frightened or freaked out by some of the business tactics in the stories Emerson told them. To give context and respond to that:
- I, Emerson, have had a lot of exceedingly harsh and cruel business experience, including getting tricked or stabbed-in-the-back. Nonetheless, I have often prevailed in these difficult situations, and learned a lot of hard lessons about how to act in the world.
- The skills required to do so seem to me lacking in many of the earnest-but-naive EAs that I meet, and I would really like them to learn how to be strong in this way. As such, I often tell EAs these stories, selecting for the most cut-throat ones, and sometimes I try to play up the harshness of how you have to respond to the threats. I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell[8] toughens up the students in HPMOR through teaching them Defense Against the Dark Arts, to deal with real monsters in the world.
- For instance, I tell people about my negotiations with Adorian Deck about the OMGFacts brand and Twitter account. We signed a good deal, but a California technicality meant he could pull from it and take my whole company, which is a really illegitimate claim. They wouldn't talk with me, so I was working with top YouTubers to make some videos publicizing and exposing his bad behavior. This got him back to the negotiation table and we worked out a deal where he got $10k/month for seven years, which is not a shabby deal, and meant that I got to keep my company!
- It had been reported to Ben that Emerson said he would be willing to go into legal gray areas in order to "crush his enemies" (if they were acting in very reprehensible and norm-violating ways). Emerson thinks this has got to be a misunderstanding, that he was talking about what other people might do to you, which is a crucial thing to discuss and model.
(Here I cease pretending-to-be-Nonlinear and return to my own voice.)
My thoughts on the ethics and my takeaways
Summary of My Epistemic State
Here are my probabilities for a few high-level claims relating to Alice and Chloe’s experiences working at Nonlinear.
- Emerson Spartz employs more vicious and adversarial tactics in conflicts than 99% of the people active in the EA/x-risk/AI Safety communities: 95%
- Alice and Chloe were more dependent on their bosses (combining financial, social, and legal dependence) than employees are at literally every other organization I am aware of in the EA/x-risk/AI Safety ecosystem: 85%[9]
- In working at Nonlinear, Alice and Chloe both took on physical and legal risks that they strongly regretted, were hurt emotionally, came away financially worse off, gained ~no professional advancement from their time at Nonlinear, and took several months after the experience to recover: 90%
- Alice and Chloe both had credible reason to be very scared of retaliation for sharing negative information about their work experiences, far beyond that experienced at any other org in the EA/x-risk/AI Safety ecosystem: 85%[10]
General Comments From Me
Going forward I think anyone who works with Kat Woods, Emerson Spartz, or Drew Spartz should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment by the above people at any non-profits they run should take salaries where money is wired to their bank accounts, and not do unpaid work or work that is compensated in ways that don't primarily include a salary being wired to their bank accounts.
I expect that if Nonlinear does more hiring in the EA ecosystem it is more-likely-than-not to chew up and spit out other bright-eyed young EAs who want to do good in the world. I relatedly think that the EA ecosystem doesn’t have reliable defenses against such predators. These are not the first, nor sadly the last, bright-eyed well-intentioned people who I expect to be taken advantage of and hurt in the EA/x-risk/AI safety ecosystem, as a result of falsely trusting high-status people at EA events to be people who will treat them honorably.
(Personal aside: Regarding the texts from Kat Woods shown above — I have to say, if you want to be allies with me, you must not write texts like these. A lot of bad behavior can be learned from, fixed, and forgiven, but if you take actions to prevent me from being able to learn that the bad behavior is even going on, then I have to always be worried that something far worse is happening that I’m not aware of, and indeed I have been quite shocked to discover how bad people’s experiences were working for Nonlinear.)
My position is not greatly changed by the fact that Nonlinear is overwhelmingly confident that Alice is a “bald-faced liar”. From my current perspective, they probably have some legitimate grievances against her, but that in no way makes it less costly to our collective epistemology to incentivize her to not share her own substantial grievances. I think the magnitude of the costs they imposed on their employees-slash-new-family are far higher than I or anyone I know would have expected was happening, and they intimidated both Alice and Chloe into silence about those costs. If it were only Alice then I would give this perspective a lot more thought/weight, but Chloe reports a lot of the same dynamics and similar harms.
To my eyes, the people involved were genuinely concerned about retaliation for saying anything negative about Nonlinear, including the workplace/household dynamics and how painful their experiences had been for them. That’s a red line in my book, and I will not personally work with Nonlinear in the future because of it, and I recommend their exclusion from any professional communities that wish to keep up the standard of people not being silenced about extremely negative work experiences. “First they came for the epistemology. We don't know what happened after that.”
Specifically, the things that cross my personal lines for working with someone or viewing them as an ally:
- Kat Woods attempted to offer someone who was really hurting, and in a position of strong need, very basic resources with the requirement of not saying bad things about her.
- Kat Woods’ texts that read to me as a veiled threat to destroy someone’s career for sharing negative information about her.
- Emerson Spartz reportedly telling multiple people that he will use questionably legal methods to crush his enemies (such as filing spurious lawsuits, or hiring a stalker to freak someone out).
- Both employees were actively afraid that Emerson Spartz would retaliate, potentially using tactics like spurious lawsuits and further things that are questionably legal, and generally try to destroy their careers and leave them with no resources. It seems to me (given the other reports I’ve heard from visitors) that Emerson behaved in a way that quite understandably led them to this epistemic state, and I consider it his responsibility not to give his employees this impression.
I think in almost any functioning professional ecosystem, there should be some general principles like:
- If you employ someone, after they work for you, unless they've done something egregiously wrong or unethical, they should be comfortable continuing to work and participate in this professional ecosystem.
- If you employ someone, after they work for you, they should feel comfortable talking openly about their experience working with you to others in this professional ecosystem.
Any breaking of the first rule is very costly, and any breaking of the second rule is by-default a red-line for me not being willing to work with you.
I do think that there was a nearby world where Alice, having run out of money, gave in and stayed at Nonlinear, begging them for money, and becoming a fully dependent and subservient house pet — a world where we would not have learned the majority of this information. I think we're not that far from that world; a weaker person than Alice might never have quit, and it showed a lot of strength to quit at the point where you have ~no runway left and have heard the above stories about the kinds of things Emerson Spartz considers doing to former business partners that he is angry with.
I’m very grateful to the two staff members involved for coming forward and eventually spending dozens of hours clarifying and explaining their experiences to me and others who were interested. To compensate them for their courage, the time and effort spent to talk with me and explain their experiences at some length, and their permission to allow me to publish a lot of this information, I (using personal funds) am going to pay them each $5,000 after publishing this post.
I think that whistleblowing is generally a difficult experience, with a lot riding on fairly personal accounts from fallible human beings. It’s neither the case that everything reported should be accepted without question, nor that, if some aspect is learned to be exaggerated or misreported, the whole case should be thrown out. I plan to reply to further questions here in the comments, and I encourage everyone involved to comment insofar as they wish to answer questions or give their own perspective on what happened.
Addendum
This is a list of edits made post-publication.
- "Alice worked there from November 2021 to June 2022" became "Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February"
- "using Lightcone funds" became "using personal funds"
- "I see clear reasons to think that Kat, Emerson and Drew intimidated these people" became "I see clear reasons to think that Kat and Emerson intimidated these people".
- ^
In a later conversation, Kat clarified that the actual amount discussed was $70k.
- ^
Comment from Chloe:
In my resignation conversation with Kat, I was worried about getting into a negotiation conversation where I wouldn’t have strong enough reasons to leave. To avoid this, I started off by saying that my decision to quit is final, and not an ultimatum that warrants negotiation of what would make me want to stay. I did offer to elaborate on the reasons for why I was leaving. As I was explaining my reasons, she still insisted on offering me solutions to things I would say I wanted, to see if that would make me change my mind anyway. One of the reasons I listed was the lack of financial freedom in not having my salary be paid out as a salary which I could allocate towards decisions like choices in accommodation for myself, as well as meals and travel decisions. She wanted to know how much I wanted to be paid. I kept evading the question since it seemed to tackle the wrong part of the problem. Eventually I quoted back the number I had heard her reference to when she’d talk about what my salary is equivalent to, suggesting that if they’d pay out the 75k as a salary instead of the compensation package, then that would in theory solve the salary issue. There was a miscommunication around her believing that I wanted that to be paid out on top of the living expenses - I wanted financial freedom and a legal salary. I believe the miscommunication stems from me mentioning that salaries are more expensive for employers to pay out as they also have to pay tax on the salaries, e.g. social benefits, pension (depending on the country). Kat was surprised to hear that and understood it as me wanting a 75k salary before taxes. I do not remember that conversation concluding with her thinking I wanted everything paid for and also 75k.
- ^
Note that Nonlinear and Alice gave conflicting reports about which month she started getting paid, February vs April. It was hard for me to check as it’s not legally recorded and there are lots of bits of monetary payments unclearly coded between them.
- ^
Comment from one of the employees:
I had largely moved on from the subject and left the past behind when Ben started researching it to write a piece with his thoughts on it. I was very reluctant at first (and frightened at the mere thought), and frankly, will probably continue to be. I did not agree to post this publicly with any kind of malice, rest assured. The guiding thought here is, as Ben asked, "What would you tell your friend if they wanted to start working for this organization?" I would want my friend to be able to make their own independent decision, having read about my experience and the experiences of others who have worked there. My main goal is to create a world where we can all work together towards a safe, long and prosperous future, and anything that takes away from that (like conflict and drama) is bad and I have generally avoided it. Even when I was working at Nonlinear, I remember saying several times that I just wanted to work on what was important and didn't want to get involved in their interpersonal drama. But it's hard for me to imagine a future where situations like that are just overlooked and other people get hurt when it could have been stopped or flagged before. I want to live in a world where everyone is safe and cared for. For most of my life I have avoided learning about anything to do with manipulation, power frameworks and even personality disorders. By avoiding them, I also missed the opportunity to protect myself and others from dangerous situations. Knowledge is the best defense against any kind of manipulation or abuse, so I strongly recommend informing yourself about it, and advising others to do so too.
- ^
This is something Alice showed me was written in her notes from the time.
- ^
I do not mean to make a claim here about who was in the right in that conflict. And somewhat in Emerson’s defense, I think some of people’s most aggressive behavior comes out when they themselves have just been wronged — I expect this is more extreme behavior than he would typically respond with. Nonetheless, it seems to me that there was reportedly a close, mentoring relationship — Emerson’s tumblr post on the situation says “I loved Adorian Deck” in the opening paragraph — but that later Emerson reportedly became bitter and nasty in order to win the conflict, threatening to overwhelm someone with lawsuits and legal costs and figuring out the best way to use their formerly close relationship to hurt them emotionally, and he reportedly gave this as an example of good business strategy. I think this sort of story somewhat justifiably left people working closely with Emerson very worried about the sort of retaliation he might carry out if they were ever in a conflict, or he were to ever view them as an ‘enemy’.
- ^
After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened.
- ^
Note that during our conversation, Emerson brought up HPMOR and the Quirrell similarity, not me.
- ^
With the exception of some FTX staff.
- ^
One of the factors lowering my number here is that I’m not quite sure what the dynamics are like at places like Anthropic and OpenAI — who have employees sign non-disparagement clauses, and are involved in geopolitics — or whether they would even be included. I also could imagine finding out that various senior people at CEA/EV are terrified of information coming out about them. Also note that I am not including Leverage Research in this assessment.
323 comments
Comments sorted by top scores.
comment by chloe · 2023-09-11T10:07:55.830Z · LW(p) · GW(p)
On behalf of Chloe and in her own words, here’s a response that might illuminate some pieces that are not obvious from Ben’s post - as his post is relying on more factual and object-level evidence, rather than the whole narrative.
“Before Ben published, I found thinking about or discussing my experiences very painful, as well as scary - I was never sure with whom it was safe to share any of this. Now that it’s public, it feels like it’s in the past and I’m able to talk about it. Here are some of my experiences I think are relevant to understanding what went on. They’re harder to back up with chatlog or other written evidence - take them as you want, knowing these are stories more than clearly backed up by evidence. I think people should be able to make up their own opinion on this, and I believe they should have the appropriate information to do so.
I want to emphasize *just how much* the entire experience of working for Nonlinear was them creating all kinds of obstacles, and me being told that if I’m clever enough I can figure out how to do these tasks anyway. It’s not actually about whether I had a contract and a salary (even then, the issue wasn’t the amount or even the legality, it was that they’d be verbally unclear about what the compensation entailed, eg Emerson saying that since he bought me a laptop in January under the premise of “productivity tool”, that meant my January salary was actually higher than it would have been otherwise, even though it was never said that the laptop was considered as part of the compensation when we discussed it, and I had not initiated the purchase of it), or whether I was asked to do illegal things and what constitutes as okay illegal vs not okay illegal - it’s the fact that they threw some impossibly complex setup at us, told us we can have whatever we want, if we are clever enough with negotiating (by us I mostly mean me and Alice). And boy did we have to negotiate. I needed to run a medical errand for myself in Puerto Rico and the amount of negotiating I needed to do to get them to drive me to a different city that was a 30 min drive away was wild. I needed to go there three times, and I knew the answer of anyone driving me would be that it’s not worth their time, at the same time getting taxis was difficult while we were living in isolated mountain towns, and obviously it would have been easiest to have Drew or Emerson drive me. I looked up tourism things to do in that city, and tried to use things like “hey this city is the only one that has a store that sells Emerson’s favorite breakfast cereal and I could stock up for weeks if we could just get there somehow”. Also - this kind of going out of your way to get what you wanted or needed was rewarded with the Nonlinear team members giving you “points” or calling you a “negotiation genius”.
Of course I was excited to learn how to drive - I could finally get my tasks done and take care of myself, and have a means to get away from the team when it became too much to be around them. And this is negotiating for just going to a city that’s a 30 minute drive away - three times. Imagine how much I had to negotiate to get someone to drive me to a grocery store to do weekly groceries, and then add to that salary or compensation package negotiations and negotiate whether I could be relieved from having to learn how to buy weed for Kat in every country we went to. I’m still not sure how to concisely describe the frame they prescribed to us (here’s a great post on frame control by Aella that seems relevant https://aella.substack.com/p/frame-control ), but most saliently it included the heavy pep talk of how we could negotiate anything we wanted if we were clever enough, and if we failed - it was implied that we simply weren’t good enough. People get prescribed an hourly rate, based on how much their time is worth at Nonlinear. On the stack of who has most value, it goes Emerson, Kat, Drew, Alice, Chloe. All this in the context where we were isolated, and our finances mostly controlled by Emerson. I’ll add a few stories from my perspective, of how this plays out in practice.
Note: These stories are roughly 2 to 3 months into my job, this means 2 to 3 months of needing to find clever solutions to problems that ought to be simple, as well as ongoing negotiations with members of the Nonlinear team, to get the basics of my job done.
(⅙)”
…
“When we were flying to the Bahamas from St Martin, I was given a task of packing up all of Nonlinear’s things (mostly Kat & Emerson) into 5 suitcases. Emerson wanted the suitcases to be below the allowed limit if possible. I estimated that the physical weight of their items would exceed the weight limit of 5 suitcases. I packed and repacked the suitcases 5 to 6 times, after each time Emerson would check my work, say that the suitcases are too heavy, and teach me a new rule according to which to throw things out. Eventually I got it done to a level that Emerson was satisfied with. Him and Kat had been working outside the entire time.
In a previous packing scenario I had packed some things like charging cables and similar daily used items too fast, which Emerson did not appreciate, so this time I had left some everyday things around for him to use and grab as the last things. When I said we are packed and ready to go, he looked around the house and got angry at all the things that were lying around that he now had to pack himself - I remember him shouting in anger. I was packing up the cars and didn’t deal with him, just let him be mad in the house. This got Drew pretty frustrated as well, he had witnessed me repacking five bags 5-6 times and also tried to negotiate with Emerson about ditching some things that he refused to leave behind (we carried around 2 mountain bikes, and Emerson tasked me with packing in a beach chair as well). When we got into the car which was packed to the brim, Drew got to driving and as we drove out, he shouted really loudly out of anger. The anger was so real that I parsed it as him making a joke because I could not fathom how angry he was - my immediate response was to laugh. I quickly realized he was serious, I stopped and apologized, to which he responded with something like “no I am actually mad, and you should be too!” - related to how much we had to pack up. (2/6)“
…
“Kat had asked me to buy her a specific blonde hair coloring, at the time she told me it’s urgent since she had grown out her natural hair quite a lot. We were living in St Martin where they simply do not sell extreme blond coloring in the specific shade I needed to find, and Amazon does not deliver to St Martin. I also needed to grab this hair coloring while doing weekly groceries. One important guideline I needed to follow for groceries was that it had to be roughly a 10 min car trip but they were frequently disappointed if I didn’t get all their necessities shopped for from local stores so I naturally ventured further sometimes to make sure I got what they asked for.
I ended up spending hours looking for that blonde hair coloring in different stores, pharmacies, and beauty stores, across multiple weekly grocery trips. I kept Kat updated on this. Eventually I found the exact shade she asked for - Kat was happy to receive this but proceeded to not color her hair with it for another two weeks. Then we had to pack up to travel to the Bahamas. The packing was difficult (see previous paragraph) - we were struggling with throwing unnecessary things out. The hair color had seemed pretty important, and I thought Bahamas would also be a tricky place to buy that haircolor from, so I had packed it in. We get to the airport, waiting in the queue to check in the suitcases. Kat decides to open up the suitcases to see which last minute things we can throw out to make the suitcases lighter. She reaches for the hair color and happily throws it out. My self worth is in a place where I witness her doing this (she knows how much effort I put into finding this), and I don’t even think to say anything in protest - it just feels natural that my work hours are worth just this much. It’s depressing. (3/6)”
…
“There was a time during our stay at St Martin when I was overwhelmed from living and seeing only the same people every single day and needed a day off. Sometimes I’d become so overwhelmed I became really bad at formulating sentences and being in social contexts so I’d take a day off and go somewhere on the island where I could be on my own, away from the whole team - I’ve never before and after experienced an actual lack of being able to formulate sentences just from being around the same people for too long. This was one of these times. We had guests over and the team with the guests had decided in the morning that it’s a good vacation day for going to St Barths. I laid low because I thought since I’m also on a weekend day, it would not be mine to organize (me and Kat would take off Tuesdays and Saturdays, these were sometimes called weekend or vacation days).
Emerson approaches me to ask if I can set up the trip. I tell him I really need the vacation day for myself. He says something like “but organizing stuff is fun for you!”. I don’t know how to respond nor how to get out of it, I don’t feel like I have the energy to negotiate with him so I start work, hoping that if I get it done quickly, I can have the rest of the day for myself.
I didn’t have time to eat, had just woken up, and the actual task itself required to rally up 7 people and figure out their passport situation as well as if they want to join. St Barths means entering a different country, which meant that I needed to check in with the passport as well as covid requirements and whether all 7 people can actually join. I needed to quickly book some ferry tickets there and back for the day, rally the people to the cars and get to the ferry - all of this within less than an hour. We were late and annoyed the ferry employees - but this is one of the things generally ignored by the Nonlinear team, us being late but getting our way is a sign of our agency and how we aren’t NPCs that just follow the prescribed ferry times - they’re negotiable after all, if we can get away with getting to St Barths anyway.
I thought my work was done. We got to the island, my plan was to make the most of it and go on my own somewhere but Emerson says he wants an ATV to travel around with and without an ATV it’s a bit pointless. Everyone sits down at a lovely cafe to have coffee and chit chat, while I’m running around to car and ATV rentals to see what they have to offer. All ATVs have been rented out - it’s tourist season. I check back in, Emerson says I need to call all the places on the island and keep trying. I call all the places I can find, this is about 10 places (small island). No luck. Eventually Emerson agrees that using a moped will be okay, and that’s when I get relieved from my work tasks.
I did describe this to Kat in my next meeting with her that it’s not okay for me to have to do work tasks while I’m on my weekends, and she agreed but we struggled to figure out a solution that would make sense. It remained more of a “let’s see how this plays out”. (4/6)”
…
“One of my tasks was to buy weed for Kat, in countries where weed is illegal. When I kept not doing it and saying that it was because I didn’t know how to buy weed, Kat wanted to sit me down and teach me how to do it. I refused and asked if I could just not do it. She kept insisting that I’m saying that because I’m being silly and worry too much and that buying weed is really easy, everybody does it. I wasn’t comfortable with it and insisted on not doing this task. She said we should talk about it when I’m feeling less emotional about it. We never got to that discussion because in the next meeting I had with her I quit my job. (⅚)”
…
“The aftermath of this experience lasted for several months. Working and living with Nonlinear had me forget who I was, and lose more self worth than I had ever lost in my life. I wasn’t able to read books anymore, nor keep my focus in meetings for longer than 2 minutes, I couldn’t process my own thoughts or anything that took more than a few minutes of paying attention. I was unable to work for a few months. I was scared to share my experiences, terrified that Emerson or Kat would retaliate. While working with them I had forgotten that I used to be excited for work, and getting new tasks would spark curiosity on how to solve them best, rather than feelings of overwhelm. I stopped going for runs and whenever I did exercise I wasn’t able to finish my routine - I thought it meant I was just weak. Emerson held such a strong grasp of financial control over me that I actually forgot that I had saved up money from my previous jobs, to the extent of not even checking my bank statements. I seriously considered leaving effective altruism, as well as AI safety, if it meant that I could get away from running into them, and get away from a tolerance towards such behavior towards people.
It’s really not about the actual contracts, salaries, illegal jobs. Even with these stories, I’m only able to tell some of them that I can wrap my head around. I spent months trying to figure out how to empathize with Kat and Emerson, how they’re able to do what they’ve done, to Alice, to others they claimed to care a lot about. How they can give so much love and support with one hand and say things that even if I’d try to model “what’s the worst possible thing someone could say”, I’d be surprised how far off my predictions would be. I think the reader should make up their own mind on this. Read what Nonlinear has to say. Read what Ben says, what these comments add to it.
People trying their best can sometimes look absolutely terrifying, but actions need to have consequences nonetheless. This isn’t an effect of weird living and working conditions either, I believe it goes deeper than that - I am still happy to hear that Nonlinear has since abandoned at least that part of their “experiment”. But Nonlinear also isn’t my idea of effective altruism or doing good better and I hope we can keep this community safer than it was for me and Alice, for all the current and new members to come along in the future. (6/6) ”
↑ comment by Ben Pace (Benito) · 2023-09-11T16:02:45.863Z · LW(p) · GW(p)
I confirm that this is Chloe, who contacted me through our standard communication channels to say she was posting a comment today.
↑ comment by Joel Becker (joel-becker) · 2023-09-12T13:38:58.578Z · LW(p) · GW(p)
Repost [EA(p) · GW(p)] from EA forum:
Thank you very much for sharing, Chloe.
Ben, Kat, Emerson, and readers of the original post have all noticed that the nature of Ben's process leads to selection against positive observations about Nonlinear. I encourage readers to notice that the reverse might also be true. Examples of selection against negative information include:
- Ben has reason to exclude [EA(p) · GW(p)] stories that are less objective or have a less strong evidence base. The above comment is a concrete example of this.
- There's also something related here about the supposed unreliability of Alice as a source: Ben needs to include this to give a complete picture/because other people (in particular the Nonlinear co-founders) have said this. I strongly concur with Ben when he writes that he "found Alice very willing and ready to share primary sources [...] so I don’t believe her to be acting in bad faith." Personally, my impression is that people are making an incorrect inference about Alice from her characteristics (that are perhaps correlated with source-reliability in a large population, but aren't logically related, and aren't relevant in this case).
- To the extent that you expect other people to have been silenced (e.g. via anticipated retaliation), you might expect not to hear relevant information from them.
- To the extent that you expect Alice and Chloe to have had burnout-style experiences, you might expect not to read clarifications on or news about negative experiences.
- Until this post came out, this was true of ~everything in the post.
- There is a reason the post was published 1.5 years after the relevant events took place -- people involved in the events really do not want to spend further mental effort on this.
↑ comment by Thoth Hermes (thoth-hermes) · 2023-09-11T16:42:05.619Z · LW(p) · GW(p)
It seems like a big part of this story is mainly about people who have relatively strict preferences kind of aggressively defending their territory and boundaries, and how when you have multiple people like this working together on relatively difficult tasks (like managing the logistics of travel), it creates an engine for lots of potential friction.
Furthermore, when you add the status hierarchy of a typical organization, combined with the social norms that dictate how people's preferences and rights ought to be respected (and implicit agreements being made about how people have chosen to sacrifice some of those rights for altruism's sake), you add even more fuel to the aforementioned engine.
I think complaints such as these are probably okay to post, as long as everyone mentioned is afforded the right to update their behavior after enough time has passed to reflect and discuss these things (since actually negotiating what norms are appropriate here might end up being somewhat difficult).
Edit: I want to clarify that when there is a situation in which people have conflicting preferences and boundaries as I described, I do personally feel that those in leadership positions / higher status probably bear the responsibility of satisfying their subordinates' preferences to their satisfaction, given that the higher status people are having their own higher, longer-term preferences satisfied with the help of their subordinates.
I don't want to make it seem as though the ones bringing the complaints are as equally responsible for this situation as the ones being complained about.
comment by David Hornbein · 2023-09-07T17:56:34.148Z · LW(p) · GW(p)
think about how bad you expect the information would be if I selected for the worst, credible info I could share
Alright. Knowing nothing about Nonlinear or about Ben, but based on the rationalist milieu, then for an org that’s weird but basically fine I’d expect to see stuff like ex-employees alleging a nebulously “abusive” environment based on their own legitimately bad experiences and painting a gestalt picture that suggests unpleasant practices but without any smoking-gun allegations of really egregious concrete behavior (as distinct from very bad effects on the accusers); allegations of nepotism based on social connections between the org’s leadership and their funders or staff; accusations of shoddy or motivated research which require hours to evaluate; sources staying anonymous for fear of “retaliation” but without being able to point to any legible instances of retaliation or concrete threats to justify this; and/or thirdhand reports of lying or misdirection around complicated social situations.
[reads post]
This sure has a lot more allegations of very specific and egregious behavior than that, yeah.
EDIT: Based on Nonlinear's reply [LW · GW] and the thorough records they provide, it seems that the smoking-gun allegations of really egregious concrete behavior are probably just false. This leaves room for unresolvable disagreement on the more nebulous accusations, but as I said initially, that's the pattern I'd expect to see if Nonlinear were weird but basically fine.
↑ comment by Ben Pace (Benito) · 2023-09-07T18:33:19.270Z · LW(p) · GW(p)
Great prediction, I'm pleased that you said it. I'd also be curious to know specific parts that were most surprising to you reading the post, that didn't match up with this prediction.
↑ comment by David Hornbein · 2023-09-07T18:49:48.247Z · LW(p) · GW(p)
- Offering a specific amount of pay, in cash and in kind, and then not doing the accounting to determine whether or not that amount was actually paid out. If I’m charitable to the point of gullibility, then this is unethical and culpable negligence. Probably it’s just fraud. (Assuming this allegation is true, of course, and AFAIK it is not yet disputed.)
- Screenshots of threats to retaliate for speaking up.
EDIT: Nonlinear has now replied [LW · GW] and disputed many of the allegations. I am persuaded that the allegation of fraud/negligence around payment is simply false. As for the screenshots of threats to retaliate, my opinion is that retaliation or threats to retaliate are perfectly justified in the face of the behavior which Nonlinear alleges. Nonlinear also provides longer chatlogs around one of the screenshotted texts which they argue recontextualizes it.
↑ comment by Eli Tyre (elityre) · 2023-09-07T18:21:56.486Z · LW(p) · GW(p)
Thank you for taking the time to preregister your thoughts. This was great, and helpful for me to read.
comment by aphyer · 2023-09-07T17:52:30.090Z · LW(p) · GW(p)
Going forward I think anyone who works ~~with Kat Woods, Emerson Spartz, or Drew Spartz,~~ should sign legal employment contracts, and make sure all financial agreements are written down in emails and messages that the employee has possession of. I think all people considering employment ~~by the above people at any non-profits they run~~ should take salaries where money is wired to their bank accounts, and not do unpaid work or work that is compensated by ways that don’t primarily include a salary being wired to their bank accounts.
FTFY.
While I have no knowledge of or views on the situation above, this is just a good thing to do in general? Like, most sentences that begin with the phrase 'my boss, whose house I live at and who I have only a handshake agreement with on pay...' are not going to end well.
↑ comment by Linda Linsefors · 2023-09-10T13:48:57.533Z · LW(p) · GW(p)
I have worked without legal contracts for people in EA I trust, and it has worked well.
Even if all the accusations against Nonlinear are true, I still have pretty high trust for people in EA or LW circles, such that I would probably agree to work with no formal contract again.
The reason I trust people in my ingroup is that if either of us screws over the other person, I expect the victim to tell their friends, which would ruin the reputation of the wrongdoer. For this reason both people have a strong incentive to act in good faith. On top of that I'm willing to take some risk to skip the paperwork.
When I was a teenager I worked a bit under legally very sketchy circumstances. They would send me to work in some warehouse for a few days, and draw up the contract for that work afterwards, including me falsifying the date of my signature. This is not something I would have agreed to with a stranger, but the owner of the company was a friend of my parents, and I trusted my parents to slander them appropriately if they screwed me over.
I think my point is that this is not something very uncommon, because doing everything by the book is so much overhead, and sometimes not worth it.
I think being able to leverage reputation-based and/or ingroup-based trust is immensely powerful, and not something we should give up on.
For this reason, I think the most serious sin committed by Nonlinear is their alleged attempt to silence critics.
Update to clarify: This is based on the fact that people have been scared of criticising Nonlinear. Not based on any specific wording of any specific message.
Update: On reflection, I'm not sure if this is the worst part (if all accusations are true). But it's pretty high on the list.
I don't think making sure that no EA ever gives paid work to another EA without a formal contract will help much. The most vulnerable people are those new to the movement, who are exactly the people who will not know what the EA norms are anyway. An abusive org can still recruit people without contracts and just tell them this is normal.
I think a better defence mechanism is to track who is trustworthy or not, by making sure information like this comes out. And it's not like having a formal contract prevents all kinds of abuse.
Update based on responses to this comment: I do think having a written agreement, even just an informal expression of intentions, is almost always strictly superior to not having anything written down. When writing this comment I was thinking in terms of formal contract vs informal agreement, which is not the same as verbal vs written.
↑ comment by Elizabeth (pktechgirl) · 2023-09-13T03:05:08.858Z · LW(p) · GW(p)
I don't think making sure that no EA every give paid work to another EA, with out a formal contract, will help much
I feel like people are talking about written records like they're a huge headache, but they don't need to be. When freelancing I often negotiate verbally, then write an email with terms to the client, who can confirm or correct them. I don't start work until they've confirmed acceptance of some set of terms. This has enough legal significance that it lowers my business insurance rates, and takes seconds if people are genuinely on the same page.
What my lawyer parent taught me was that contracts can't prevent people from screwing you over (which is impossible). At my scale, and probably in most cases described here, the purpose of a contract is to prevent misunderstandings between people of goodwill. And it's so easy to do notably better than Nonlinear did here.
↑ comment by Linda Linsefors · 2023-09-13T13:05:43.144Z · LW(p) · GW(p)
This is a good point. I was thinking in terms of legal vs informal, not in terms of written vs verbal.
I agree that having something written down is basically always better. Both for clarity, as you say, and because people's memories are not perfect. And it has the added bonus that if there is a conflict, you have something to refer back to.
↑ comment by Conor Moreton · 2023-09-13T01:20:55.812Z · LW(p) · GW(p)
(This is Duncan Sabien, logging in with the old Conor Moreton account b/c this feels important.)
While I think Linda's experience is valid, and probably more representative than mine, I want to balance it by pointing out that I deeply, deeply, deeply regret taking a(n explicit, unambiguous, crystal clear) verbal agreement, and not having a signed contract, with an org pretty central to the EA and rationality communities. As a result of having the-kind-of-trust that Linda describes above, I got overtly fucked over to the tune of many thousands of dollars and many months of misery and confusion and alienation, and all of that would've been prevented by a simple written paragraph with two signatures at the bottom.
(Such a paragraph would've either prevented the agreement from being violated in the first place, or would at least have made the straightforward violation that occurred less of a thing that people could subsequently spin webs of fog and narrativemancy around, to my detriment.)
As for the bit about telling your friends and ruining the reputation of the wrongdoer ... this option was largely NOT available to me, for fear-of-reprisal reasons as well as not wanting to fuck up the subsequent situation I found myself in, which was better, but fragile. To this day, I still do not feel like it's safe to just be fully open and candid about the way I was treated, and how many norms of good conduct and fair dealings were broken in the process. The situation was eventually resolved to my satisfaction, but there were years of suffering in between.
If @Rob Bensinger [LW · GW] does in fact cross-post Linda's comment, I request he cross-posts this, too.
(I will probably not engage with responses because I'm still trying to avoid being here; dropping a comment feels less risky on that front than having a back-and-forth exchange.)
↑ comment by Rob Bensinger (RobbBB) · 2023-09-13T02:31:08.903Z · LW(p) · GW(p)
If @Rob Bensinger [LW · GW] does in fact cross-post Linda's comment, I request he cross-posts this, too.
I was going to ask if I could!
I understand if people don't want to talk about it, but I do feel sad that there isn't some kind of public accounting of what happened there.
(Well, I don't concretely understand why people don't want to talk about it, but I can think of possibilities!)
↑ comment by Linda Linsefors · 2023-09-13T12:52:04.439Z · LW(p) · GW(p)
Thanks for adding your perspective.
If @Rob Bensinger [LW · GW] does in fact cross-post Linda's comment, I request he cross-posts this, too.
I agree with this.
↑ comment by Daniel Wyrzykowski (daniel-wyrzykowski) · 2023-09-13T08:33:11.291Z · LW(p) · GW(p)
The contract is signed for when bad things and disagreements happen, not for when everything is going well. In my opinion “I had no contract and everything was good” is not as good an example as “we didn’t have a contract, had a major disagreement, and everything still worked out” would be.
Even though I hate bureaucracy and admin work and I prefer to skip as much as reasonable to move faster, my default is to have a written agreement, especially if working with a given person/org for the first time. Generally, the weaker party should have the final say on forgoing a contract. This is especially true the more complex and difficult the situation is (e.g. living/travelling together, being in romantic relationships).
I agree with the general view that both signing and not signing have pros and cons, and sometimes it's better not to sign and avoid the overhead.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-13T00:54:00.188Z · LW(p) · GW(p)
Can I cross-post this to the EA Forum? (Or you can do it, if you prefer; but I think this is a really useful comment.)
↑ comment by Linda Linsefors · 2023-09-13T12:48:31.838Z · LW(p) · GW(p)
I'm glad you liked it. You have my permission to cross post.
↑ comment by lalaithion · 2023-09-08T16:20:46.182Z · LW(p) · GW(p)
Yeah, this post makes me wonder if there are non-abusive employers in EA who are nevertheless enabling abusers by normalizing behavior that makes abuse popular. Employers who pay their employees months late without clarity on why and what the plan is to get people paid eventually. Employers who employ people without writing things down, like how much people will get paid and when. Employers who try to enforce non-disclosure of work culture and pay.
None of the things above are necessarily dealbreakers in the right context or environment, but when an employer does those things they are making it difficult to distinguish themselves from an abusive employer, and also enabling abusive employers because they're not obviously doing something nonstandard. This is highlighted by:
I relatedly think that the EA ecosystem doesn’t have reliable defenses against such predators.
If EAs want to have defenses against these predators, they have to act in such a way that the early red flags here (not paid on time, no contracts, just verbal agreements) are actually serious red flags, by having non-abusive employers categorically not engage in them, and having more established EA employees react in horror if they hear about this happening.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T17:13:55.489Z · LW(p) · GW(p)
Yeah, this post makes me wonder if there are non-abusive employers in EA who are nevertheless enabling abusers by normalizing behavior that makes abuse popular. Employers who pay their employees months late without clarity on why and what the plan is to get people paid eventually. Employers who employ people without writing things down, like how much people will get paid and when. Employers who try to enforce non-disclosure of work culture and pay.
Do any of those things happen much in EA? (I don't think I've ever heard of an example of one of those things outside of Nonlinear, but maybe I'm out of the loop.)
↑ comment by Elizabeth (pktechgirl) · 2023-09-08T17:23:32.004Z · LW(p) · GW(p)
CEA was pretty bad at this a few years ago, although I'm told they've improved. Things like forgetting to pay contractors, being inconsistent about what expenses were reimbursable, and even having people start trials without settling on salary.
↑ comment by Garrett Baker (D0TheMath) · 2023-09-08T17:24:27.252Z · LW(p) · GW(p)
Last year SERI MATS was pretty late on many people’s stipends, though my understanding is they were just going through some growing pains during that time, and they’re on the ball nowadays.
↑ comment by Leon Lang (leon-lang) · 2023-09-08T22:36:06.890Z · LW(p) · GW(p)
(Fwiw, I don’t remember problems with stipend payout at seri mats in the winter program. I was a winter scholar 2022/23.)
↑ comment by Garrett Baker (D0TheMath) · 2023-09-08T23:01:34.789Z · LW(p) · GW(p)
Yes. This was MATS 2.0 in the summer of 2022.
↑ comment by lalaithion · 2023-09-09T02:05:58.120Z · LW(p) · GW(p)
Yeah, to be clear I don't have any information to suggest that the above is happening—I don't work in EA circles—except for the fact that Ben said the EA ecosystem doesn't have defenses against this happening, and that is one of the defenses I expect to exist.
↑ comment by Ben Pace (Benito) · 2023-09-07T18:30:30.609Z · LW(p) · GW(p)
Haha, I like your edit. I do think there are exceptions — for instance if you are independently wealthy, you might take no salary, and I expect startup cofounders have high-trust non-legal agreements while they're still getting started. But I think that trust is lost for Kat/Emerson/Drew and I would expect anyone in that relationship to regret it. And in general I agree it's a good heuristic.
↑ comment by Joel Becker (joel-becker) · 2023-09-07T18:58:20.636Z · LW(p) · GW(p)
Why include Drew?
↑ comment by Ruby · 2023-09-10T03:20:45.171Z · LW(p) · GW(p)
I think if you are a cofounder of an organization and have a front row seat, then even if you were not directly doing the worst things, I want to hold you culpable for not noticing or intervening.
↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-10T14:48:55.969Z · LW(p) · GW(p)
Just FYI Drew is not a cofounder of Nonlinear. That is another inaccurate claim from the article. He did not join full time until April 2022.
↑ comment by habryka (habryka4) · 2023-09-10T17:51:18.711Z · LW(p) · GW(p)
Which part of the post claims that? The post seems to say the opposite:
After a year at Charity Entrepreneurship, in 2021 she cofounded [EA · GW] Nonlinear with Emerson Spartz, where she has worked for 2.5 years.
There might be another part that does refer to Drew as a co-founder, but I can't find anything of that sort.
↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-10T19:39:01.785Z · LW(p) · GW(p)
"Alice quit being vegan while working there. She was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days."
Seems like other people besides Ruby are confused about this too, maybe also because Ben sometimes says "the Nonlinear cofounders" when referring to Emerson/Kat/Drew.
↑ comment by Raemon · 2023-09-10T19:44:42.914Z · LW(p) · GW(p)
A source of terminological confusion here is that the Lightcone team often internally uses the word cofounder to mean ‘person with a very strong investment and generalist skill set, who takes responsibility in a particular way’. I.e. we have used it to refer to multiple people on the Lightcone team who didn’t literally found the org but are pretty deeply involved.
The crux for me with Drew, and I assume with Ruby/Ben, is ‘does he have that kind of relationship with the org?’, rather than ‘did he literally cofound the org’.
I do think this terminology is probably confusing for other readers, and seems good to correct, although I would guess not actually misleading in a way that’s particularly relevant for most people's assessment of the situation.
↑ comment by Eli Tyre (elityre) · 2023-09-10T23:56:12.354Z · LW(p) · GW(p)
I think it is not correct to refer to a person as a "cofounder" of an org because they seem to be a generalist taking responsibility for the org, if they did not actually co-found the org and are not referred to as a cofounder by the org.
This seems like a simple error / oversight, rather than a deliberate choice.
But I definitely don't feel like the assessment of "this person was in a defacto cofounder role, in practice, so it's not a big deal if we call them a cofounder" holds water.
↑ comment by habryka (habryka4) · 2023-09-11T07:08:40.595Z · LW(p) · GW(p)
FWIW, I also don't think this holds water, and at least I don't use co-founder this way these days (though maybe Ray does). The LessWrong/Lightcone team developed very gradually, and I think it's reasonable to call the people who came on board in like the first 1-2 years of existence of the project co-founders, since it grew gradually and as a fiscally sponsored nonprofit we never went through a formal incorporation step that would have formalized equity shares in the same clear way, but I think while it might make sense to call anyone coming on later than that some title that emphasizes that they have a lot of responsibility and stake in the organization, it doesn't IMO make sense to refer to them as a "co-founder".
↑ comment by Raemon · 2023-09-11T07:49:43.068Z · LW(p) · GW(p)
I’m not arguing that this usage is good; I just think it’s the usage Ben and Ruby were implicitly using. I’m guessing Drew is in a role that is closer to the ones me, Jim, or Ruby had during the period when you were explicitly calling us cofounders. Which it sounds like you still endorse?
↑ comment by Raemon · 2023-09-11T07:58:37.416Z · LW(p) · GW(p)
(To be clear I agree the word is misleading here, and Ben should probably edit the word to something clearer. I also don’t really think it made sense for the Lightcone team to talk about itself having 5 cofounders, which I think we explicitly did at the time. I was just noting the language-usage-difference.
But also this doesn’t seem cruxy to me about the substance of the claim that “Drew was involved enough that he had some obligation to notice if fishy things were going on, even if they weren’t explicitly his responsibility”)
↑ comment by habryka (habryka4) · 2023-09-11T16:55:05.665Z · LW(p) · GW(p)
I don't think I would call Jim or Ruby cofounders, especially in any public setting. I do think to set expectations for what it's like to work with me on LessWrong, back then, I would frequently say something like "cofounder level stake and responsibility", though I think that has definitely shifted over time.
↑ comment by Joel Becker (joel-becker) · 2023-09-11T11:05:33.392Z · LW(p) · GW(p)
I have this opposing consideration [LW(p) · GW(p)]. I think it does speak to your point -- I gather that part of the reason Alice and Chloe feel this way is that Drew did try to be helpful with respect to their concerns, at least to whatever degree was required for them to ask for him to be shielded from professional consequences.
Here's another (in my view weaker, but perhaps more directly relevant to your point) consideration. To the extent you believe that Nonlinear has been a dysfunctional environment, in significant part due to domineering characteristics of senior staff, I think that you should also believe that a junior family member beginning to work in this environment is going to have a hard time reasoning through and pushing back against it. Happy to expand.
↑ comment by brunoparga · 2023-09-08T13:29:44.282Z · LW(p) · GW(p)
As I understand it – with my only source being Ben's post and a couple of comments that I've read – Drew is also a cofounder of Nonlinear. Also, this was reported:
Alice and Chloe reported a substantial conflict within the household between Kat and Alice. Alice was polyamorous, and she and Drew entered into a casual romantic relationship. Kat previously had a polyamorous marriage that ended in divorce, and is now monogamously partnered with Emerson. Kat reportedly told Alice that she didn't mind polyamory "on the other side of the world”, but couldn't stand it right next to her, and probably either Alice would need to become monogamous or Alice should leave the organization. Alice didn't become monogamous. Alice reports that Kat became increasingly cold over multiple months, and was very hard to work with. (footnote) After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened.
So, based on what we're told, there was romantic entanglement between the employers – Drew included – and Alice, and such relationships, even in the best-case scenario, need to be handled with a lot of caution, and this situation seems to be significantly worse than a best-case scenario.
↑ comment by Joel Becker (joel-becker) · 2023-09-08T13:55:00.678Z · LW(p) · GW(p)
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
↑ comment by Noosphere89 (sharmake-farah) · 2023-09-09T16:47:05.106Z · LW(p) · GW(p)
This seems like a potentially downstream issue of rationalist/EA organizations ignoring a few Chesterton Fences that are really important, and one of those Chesterton Fences is not having dating/romantic relationships in the employment context if there are any power asymmetry issues. These can easily lead to abuse or worse.
In general, one impression I get from a lot of rationalist/EA organizations is that there are very few boundaries between work, romantic/dating and potentially living depending on the organization, and the ones it does have are either much too illegible and high context, especially social context, and/or are way too porous, in that they can be easily violated.
Yes, there are no preformed Cartesian boundaries that we can use, but that doesn't stop us from at least forming approximate boundaries and enforcing them. While legible norms are never fun and have their costs, I do think that the benefits of legible norms, especially epistemically legible norms in the dating/romantic scene in an employment context, are very, very high value, so much so that I think the downsides aren't enough to say that it's bad overall to enforce legible norms around dating/romantic relationships in the employment context. I'd say somewhat similar things about legible norms on living situations, pay, etc.
↑ comment by Viliam · 2023-09-09T19:58:48.207Z · LW(p) · GW(p)
Seems like some rationalists have a standard solution to Chesterton's Fence: "Yes, I absolutely understand why the fence is there. It was built for stupid people. Since I am smart, the same rules obviously do not apply to me."
And when later something bad happens (quite predictably, the outside view would say), the lesson they take seems to be: "Well, apparently those people were not smart enough or didn't do their research properly. Unlike me. So this piece of evidence does not apply to me."
*
I actually often agree with the first part. It's just that it is easy to overestimate one's own smartness. Especially because it isn't a single thing, and people can be e.g. very smart at math, and maybe average (i.e. not even stupid, just not exceptionally smart either) in human relations. Also, collective wisdom can be aware of rare but highly negative outcomes, which seem unlikely to you, because they are, in fact, rare.
What makes my blood boil is the second part. If you can't predict ahead who will turn out "apparently not that smart" and you only say it in hindsight after the bad thing has already happened, it means you are just making excuses to ignore the evidence. Even if, hypothetically speaking, you are the smartest person and the rules truly do not apply to you, it is still highly irresponsible to promote this behavior among rationalists in general (because you know that a fraction of them will later turn out to be "not that smart" and will get hurt, even if that fraction may not include you).
↑ comment by Eli Tyre (elityre) · 2023-09-09T21:06:41.840Z · LW(p) · GW(p)
promote this behavior among rationalists in general
What are you imagining when you say "promote this behavior"? Writing lesswrong posts in favor? Choosing to live that way yourself? Privately recommending that people do that? Not commenting when other people say that they're planning to do something that violates the Chesterton's fence?
↑ comment by Viliam · 2023-09-09T21:39:20.744Z · LW(p) · GW(p)
The example I had mostly in mind was experimenting with drugs. I think there were no posts on LW in favor of this, but it gets a lot of defense in comments. Like when someone mentions in some debate that they know rationalists who have overdosed, or who went crazy after experimenting with drugs, someone else always publicly objects to collectively taking the lesson.
If people do stupid things in private, that can't (and arguably shouldn't) be prevented.
↑ comment by Adam Zerner (adamzerner) · 2023-09-13T08:21:15.856Z · LW(p) · GW(p)
There were various suspicious/bad things Drew did.
Viewed in isolation, that could have a wide spectrum of explanations. Maybe we could call it weak-to-moderate evidence in favor of him being "bad".
But then we have to factor in the choice he's made to kinda hang around Emerson and Kat for this long. If we suppose[1] that we are very confident that Emerson and Kat are very bad people who've done very bad things, then, well, that doesn't reflect very favorably on Drew. I think it is moderate-to-strong evidence that Drew is "bad".
- ^
If you don't believe this, then of course it wouldn't make sense to view his hanging around Emerson and Kat as evidence of him being "bad".
↑ comment by Joel Becker (joel-becker) · 2023-09-13T11:14:05.044Z · LW(p) · GW(p)
To "there were various suspicious/bad things Drew did," I would reply [LW(p) · GW(p)]:
I have this opposing consideration [LW(p) · GW(p)]. [...] I gather that part of the reason Alice and Chloe feel this way is that Drew did try to be helpful with respect to their concerns, at least to whatever degree was required for them to ask for him to be shielded from professional consequences.
and, to "the choice he's made to kinda hang around Emerson and Kat for this long," I would reply:
To the extent you believe that Nonlinear has been a dysfunctional environment, in significant part due to domineering characteristics of senior staff, I think that you should also believe that a junior family member beginning to work in this environment is going to have a hard time reasoning through and pushing back against it.
↑ comment by Adam Zerner (adamzerner) · 2023-09-13T17:50:46.574Z · LW(p) · GW(p)
To the extent you believe that Nonlinear has been a dysfunctional environment, in significant part due to domineering characteristics of senior staff, I think that you should also believe that a junior family member beginning to work in this environment is going to have a hard time reasoning through and pushing back against it.
Successfully pushing back against it is certainly difficult. Instead, I would expect, in general, Good Person to not have a very strong relationship with their brother, Bad Person, in the first place, and to either not end up working with them or quit once they started working with them and observed various bad things.
comment by Liron · 2023-09-07T14:38:48.686Z · LW(p) · GW(p)
FWIW I’ve never known a character of high integrity who I could imagine writing the phrase “your career in EA would be over with a few DMs”.
Replies from: AprilSR, EmersonSpartz, adamzerner↑ comment by AprilSR · 2023-09-07T17:26:59.298Z · LW(p) · GW(p)
While I guess I will be trying to withhold some judgment out of principle, I legitimately cannot imagine any plausible context which will make this any different.
Replies from: adele-lopez-1↑ comment by Adele Lopez (adele-lopez-1) · 2023-09-07T19:39:37.488Z · LW(p) · GW(p)
Since I was curious and it wasn't ctrl-F-able, I'll post the immediate context here:
Maybe it didn't seem like it to you that it's shit-talking, but others in the community are viewing it that way. It's unprofessional - companies do not hire people who speak ill of their previous employer - and also extremely hurtful 😔. We're all on the same team here. Let's not let misunderstandings escalate further.
This is a very small community. Given your past behavior, if we were to do the same to you, your career in EA would be over with a few DMs, but we aren't going to do that because we care about you and we need you to help us save the world.
↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T14:46:41.669Z · LW(p) · GW(p)
Indeed, without context that is a cartoon villain thing to say. Not asking you to believe us, just asking you to withhold judgment until you've seen the evidence we have, which will make that message seem very different in context.
Replies from: NeroWolfe, lc↑ comment by NeroWolfe · 2023-09-08T14:19:02.633Z · LW(p) · GW(p)
How complicated is providing context for that without a week of work on your side? The only plausible exculpatory context I can imagine is something akin to: "If somebody sent me a text like this, I would sever all contact with them, so I'm providing it as an example of what I consider to be unacceptable." I fail to see how hard it is to explain why the claims are false now and then provide detailed receipts within the week.
I don't know any of the parties involved here, but the Nonlinear side seems pretty fishy so far.
Replies from: NeroWolfe↑ comment by NeroWolfe · 2023-09-08T20:19:19.978Z · LW(p) · GW(p)
So, I'm new here, and apparently, I've misunderstood something. My comment didn't seem all that controversial to me, but it's been down-voted by everybody who gave it a vote. Can somebody pass me a clue as to why there is strong disagreement with my statement? Thanks.
Replies from: adamzerner, Viliam↑ comment by Adam Zerner (adamzerner) · 2023-09-09T07:17:37.852Z · LW(p) · GW(p)
I think that if a comment gets lots and lots of eyes on it, the upvotes and agreement votes will end up being reasonable enough. But I think there are other situations (not uncommon) where there are not enough eyes on it and the vote counts are unreasonable. I also think that there is a risk of unreasonable vote counts even once there are lots of eyes on the comment in question in situations like these where the dynamics are particularly mind-killing [? · GW].
For your comment, I don't see anything downvote worthy. My best guess is that the downvoters didn't think you were being charitable enough. Personally I think the belief that you were being uncharitable enough to justify a downvote is pretty unreasonable.
↑ comment by Viliam · 2023-09-09T20:14:00.522Z · LW(p) · GW(p)
As of now, the votes are positive. I guess it sometimes happens that some people like your comment, some people don't like it, and the ones who don't like it just noticed it first.
(By the way, I mostly agree with the spirit of your comment, but I think you used too strong words. So I didn't vote either way. For example, as mentioned elsewhere, a good reason to wait for a week might be that the "context" is someone else's words, and you want to get their consent to publish the record. Also, the conclusion that "the Nonlinear side seems pretty fishy" is like... yeah, I suppose that most readers feel the same, but the debate is precisely about whether Nonlinear can produce in a week some context that will make it seem "less fishy". They would probably agree that the text as it is written now does not put them in good light.)
↑ comment by lc · 2023-11-02T04:34:50.272Z · LW(p) · GW(p)
Was the follow-up promised here ever produced?
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-11-02T07:38:24.742Z · LW(p) · GW(p)
The prediction market is still reasonably optimistic that something will be published soon: https://manifold.markets/Rodeo/will-nonlinear-post-its-response-by
Replies from: lc↑ comment by lc · 2023-12-11T19:26:55.608Z · LW(p) · GW(p)
That market has since resolved "No", and the duplicated market for December 13 is now at 13%: https://manifold.markets/MarcusAbramovitch/will-nonlinear-post-its-response-by-9bcfa0ac9796
↑ comment by Adam Zerner (adamzerner) · 2023-09-08T22:21:42.446Z · LW(p) · GW(p)
I strongly disagree with this and am surprised that there is so much agreement with it.
Interpreted literally,
FWIW I’ve never known a character of high integrity who I could imagine writing the phrase “your career in EA would be over with a few DMs”.
contains the phrase "your career in EA would be over with a few DMs". I don't think it was meant to be interpreted literally though.
In which case it becomes a matter of things like context, subtext, and non-verbal cues. I can certainly imagine, in practice, a character of high integrity writing such a phrase.
For example, maybe I know the person well enough to justify the following charitable interpretation:
That phrase could be interpreted as a subtle threat, especially in the context of us currently being in the midst of an ongoing argument. However, I know you well enough to think that it is unlikely that you intended this to be a threat.
Instead, I think you just intended to use a personal example to make the potential downsides of badmouthing very salient.
Replies from: Linch↑ comment by Linch · 2023-09-09T02:21:07.360Z · LW(p) · GW(p)
Interpreted literally,
FWIW I’ve never known a character of high integrity who I could imagine writing the phrase “your career in EA would be over with a few DMs”.
contains the phrase "your career in EA would be over with a few DMs". I don't think it was meant to be interpreted literally though.
Are you familiar with the use-mention distinction? It seems pretty relevant here.
For example, maybe I know the person well enough to justify the following charitable interpretation:
That phrase could be interpreted as a subtle threat, especially in the context of us currently being in the midst of an ongoing argument. However, I know you well enough to think that it is unlikely that you intended this to be a threat.
Instead, I think you just intended to use a personal example to make the potential downsides of badmouthing very salient.
This does not at all seem like a thing I would ever say except in the context of an obvious-to-me joke (and if I misread the room enough to later learn that someone didn't interpret me as joking, I'd be extremely mortified and apologize profusely).
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-09T04:44:23.108Z · LW(p) · GW(p)
Are you familiar with the use-mention distinction? It seems pretty relevant here.
FWIW, I didn't mean it as a cheap shot. I just wanted to establish that context is, in fact, relevant (use-mention is an example of relevant context). And from there, go on to talk about why I think there are realistic contexts where a high-character person would make the statement.
comment by Raemon · 2023-09-07T21:58:51.154Z · LW(p) · GW(p)
This is a pretty complex epistemic/social situation. I care a lot about our community having some kind of good process of aggregating information, allowing individuals to integrate it, and update, and decide what to do with it.
I think a lot of disagreements in the comments here and on EAF stem from people having an implicit assumption that the conversation here is about "should [any particular person in this article] be socially punished?". In my preferred world, before you get to that phase there should be at least some period focused on "information aggregation and Original Seeing. [LW · GW]"
It's pretty tricky, since in the default world, "social punishment?" is indeed the conversation people jump to. And in practice, it's hard to have words just focused on epistemic evaluation without getting into judgment, or without speech acts being "moves" in a social conflict.
But, I think it's useful to at least (individually) inhabit the frame of "what is true, here?" without asking questions like "what do those truths imply?".
With that in mind, some generally useful epistemic advice that I think is relevant here:
Try to have Multiple Hypotheses
It's useful to have at least two, and preferably three, hypotheses for what's going on in cases like this. (Or, generally whenever you're faced with a confusing situation where you're not sure what's true). If you only have one hypothesis, you may be tempted to shoehorn evidence into being evidence for/against that hypothesis, and you may be anchored on it.
If I have at least two hypotheses (and, like, "real ones" that both seem plausible), I find it easier to take in new bits of data, and then ask "okay, how would this fit into two different plausible scenarios?", which activates my "actually check" process.
I think three hypotheses is better than two because with only two you can still end up with "all the evidence weighs in on a one-dimensional spectrum". Three hypotheses a) helps you do 'triangulation', and b) helps remind you to actually ask "what frame should I be having here? what additional hypotheses might I not have thought of yet?"
Multiple things can be going on at once
If two people have a conflict, it could be the case that one person is at-fault, or both people are at-fault, or neither (i.e. it was a miscommunication or something).
If one person does an action, it could be true, simultaneously, that:
- They are somewhat motivated by [Virtuous Motive A]
- They are somewhat motivated by [Suspicious Motive B]
- They are motivated by [Random Innocuous Motive C]
I once was arguing with someone, and they said "your body posture tells me you aren't even trying to listen to me or reason correctly, you're just trying to do a status monkey smackdown and put me in my place." And, I was like "what? No, I have good introspective access and I just checked whether I'm trying to make a reasoned argument. I can tell the difference between doing The Social Monkey thing and the "actually figure out the truth" thing."
What I later realized is that I was, like, 65% motivated by "actually wanna figure out the truth", and like 25% motivated by "socially punish this person" (which was a slightly different flavor of "socially punish" than, say, when I'm having a really tribally motivated facebook fight, so I didn't recognize it as easily).
Original Seeing vs Hypothesis Evaluation vs Judgment
OODA Loops include four steps: Observe, Orient, Decide, Act
Often people skip over steps. They think they've already observed enough and don't bother looking for new observations. Or it doesn't even occur to them to do that explicitly. (I've noticed that I often skip to the orient step, where I figure out "how do I organize my information? what sort of decision am I about to make?", and don't actually do the observe step, where I'm purely focused on gaining raw data.)
When you've already decided on a schema-for-thinking-about-a-problem, you're more likely to take new info that comes in and put it in a bucket you think you already understand.
Original Seeing [LW · GW] is different from "organizing information".
They are both different from "evaluating which hypothesis is true"
They are both different from "deciding what to do, given Hypothesis A is true"
Which is in turn different from "actually taking actions, given that you've decided what to do."
I have a sort of idealistic dream that someday, a healthy rationalist/EA community could collectively be capable of raising hypotheses without people anchoring on them, and people could share information in a way you robustly trust won't get automatically leveraged into a conflict/political move. I don't think we're close enough to that world to advocate for it in-the-moment, but I do think it's still good practice for people individually to spend at least some of their time in each node of the OODA loop, and to track which node they're currently focusing on.
Replies from: MondSemmel, M. Y. Zuo↑ comment by MondSemmel · 2023-09-07T22:41:48.270Z · LW(p) · GW(p)
Try to have Multiple Hypotheses
This section is begging for a reference to Duncan's post on Split and Commit [LW · GW].
IIRC Duncan has also written lots of other stuff about topics like how to assess accusations, community health stuff, etc. Though I'm somewhat skeptical about the extent to which his recommendations can be implemented by fallible humans with limited time and energy.
↑ comment by M. Y. Zuo · 2023-09-12T01:31:36.671Z · LW(p) · GW(p)
I agree, there is the possibility that both sides are somewhat unscrupulous and not entirely forthright.
At best it could be because the environment/stress/etc. is causing them to behave like this, at worst it's because they have delusions of grandeur without the substance to back that up.
Replies from: NeroWolfe
comment by KatWoods (ea247) · 2023-09-07T19:46:43.780Z · LW(p) · GW(p)
One example of the evidence we’re gathering
We are working hard on a point-by-point response to Ben’s article, but wanted to provide a quick example of the sort of evidence we are preparing to share:
Her claim: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
The truth (see screenshots below):
- There was vegan food in the house (oatmeal, quinoa, mixed nuts, prunes, peanuts, tomatoes, cereal, oranges) which we offered to cook for her.
- We picked up vegan food for her.
Months later, after our relationship deteriorated, she went around telling many people that we starved her. She included details that depicted us in a maximally damaging light - what could be more abusive than refusing to care for a sick girl, alone in a foreign country? And if someone told you that, you’d probably believe them, because who would make something like that up?
Evidence
- The screenshots below show Kat offering Alice the vegan food in the house (oatmeal, quinoa, cereal, etc), on the first day she was sick. Then, when she wasn’t interested in us bringing/preparing those, I told her to ask Drew to go pick up food, and Drew said yes.
- See more screenshots here of Drew’s conversations with her, saying that I got her mashed potatoes. I did this while I was sick, and went out and checked everything at the store, checking for sneaky non-vegan ingredients, like whey.
Initially, we heard she was telling people that she “didn’t eat for days,” but she seems to have adjusted her claim to “barely ate” for “2 days”.
It’s important to note that Alice didn’t lie about something small and unimportant. She accused us of a deeply unethical act - the kind that most people would hear and instantly think you must be a horrible human - and was caught lying.
We believe many people in EA heard this lie and updated unfavorably towards us. A single false rumor like this can unfairly damage someone’s ability to do good, and this is just one among many she told.
We chose this example not because it’s the most important (although it certainly paints us in a very negative and misleading light) but simply because it was the fastest claim to explain where we had extremely clear evidence without having to add a lot of context, explanation, find more evidence, etc.
Even so, it took us hours to put together and share. Both because we had to track down all of the old conversations, make sure we weren’t getting anything wrong, anonymize Alice, format the screenshots (they kept getting blurry), and importantly, write it up.
We also had to spend time dealing with all of the other comments while trying to pull this together. My inbox is completely swamped.
This claim was a few sentences in Ben’s article but took us hours to refute. Ben’s article is over 10,000 words and we’re working as fast as we can to respond to every point he made.
Again, we are not asking for the community to believe us unconditionally. We want to show everybody all of the evidence and also take responsibility for the mistakes that we did make.
We’re just asking that you not overupdate on hearing just one side, and keep an open mind for the evidence we’ll be sharing as soon as we can.
Replies from: KPier, Irenicon↑ comment by KPier · 2023-09-07T20:56:45.957Z · LW(p) · GW(p)
Cross posting from the EA Forum:
It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:
On December 15, Alice states that she'd had very little to eat all day and that she'd repeatedly tried and failed to find a way to order takeout to their location, and she asks that people go to Burger King and get her an Impossible Burger, which in the linked screenshots they decline to do because they don't want to get fast food. She asks again about Burger King and is told it's inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from the restaurant they went to. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that 'they have some salads' but nothing else for her. She assures him that it's fine to not get her anything.
It seems completely reasonable that Alice remembers this as 'she was barely eating, and no one in the house was willing to go out and get her vegan food' - after all, the end result of all of those message exchanges was no food being obtained for Alice and her requests for Burger King being repeatedly deflected with 'we are down to get anything that isn't fast food' and 'we are down to go anywhere within a 12 min drive' and 'our only criteria is decent vibe + not fast food', after which she fails to find a restaurant meeting those (I note, kind of restrictive if not in a highly dense area) criteria and they go somewhere without vegan options and don't get her anything to eat.
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal, she's so grateful that they tried (even though they failed and she didn't get any food out of the 12/15 trip, if I understand correctly). I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people. But it doesn't seem to me that Alice is lying when she describes having experienced this as 'she had covid, was barely eating, told people she was barely eating, and they declined to pick up Burger King for her because they didn't want to go to a fast food restaurant, and instead gave her very limiting criteria and went somewhere that didn't have any options she could eat'.
On December 16th it does look like they successfully purchased food for her.
My big takeaway from these exchanges is not that the Nonlinear team are heartless or insane people, but that this degree of professional and personal entanglement and dependence, in a foreign country, with a young person, is simply a recipe for disaster. Alice's needs in the 12/15 chat logs are acutely not being met. She's hungry, she's sick, she conveys that she has barely eaten, she evidently really wants someone to go to BK and get an impossible burger for her, but (speculatively) because of this professional/personal entanglement, she lobbies for this only by asking a few times why they ruled out Burger King, and ultimately doesn't protest when they instead go somewhere without food she can eat, assuring them it's completely fine. This is also how I relate to my coworkers, tbh - but luckily, I don't live with them and exclusively socialize with them and depend on them completely when sick!!
Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress and remembers it as 'not getting her needs met despite trying quite hard to do so', and the Nonlinear team remembers that they went out of their way that week to get Alice food - which is, based on the logs from the 16th, clearly true! But I don't think I'd call Alice a liar based on reading this, because she did express that she'd barely eaten and apologetically request that they go somewhere she could get vegan food (with BK the only option she'd been able to find), only for them to refuse BK because of the vibes/inconvenience.
↑ comment by Unreal · 2023-09-08T00:22:44.567Z · LW(p) · GW(p)
These texts have weird vibes from both sides. Something is off all around.
That said, what I'm seeing: A person failed to uphold their own boundaries or make clear their own needs. Instead of taking responsibility for that, they blame the other person for some sort of abuse.
This is called playing the victim. I don't buy it.
I think it would generally be helpful if people were informed by the Drama Triangle when judging cases like these.
Replies from: TekhneMakre↑ comment by TekhneMakre · 2023-09-08T04:58:28.655Z · LW(p) · GW(p)
Alternative theory: Alice felt on thin ice socially + professionally. When she was sick she finally felt she had a bit of leeway and therefore felt even a little willing to make requests of these people who were otherwise very "elitist" wrt everyone, somewhat including her. She tries to not overstep. She does this by stating what she needs, but also in the same breath excusing her needs as unimportant, so that the people with more power can preserve the appearance of not being cruel while denying her requests. She does this because she doesn't know how much leeway she actually has.
Unfortunately this is a hard to falsify theory. But at a glance it seems consistent, and I think it's also totally a thing that happens.
Replies from: romeostevensit↑ comment by romeostevensit · 2023-09-08T23:23:03.814Z · LW(p) · GW(p)
+1 I think it's important to keep in context the other claims about employees being treated poorly/low status. Abuse can be hard to judge from the outside because it can revolve around each individual incident being basically okay in isolation. A difficult and unfortunately common case is where both experiences are basically true. A person genuinely had an experience of abuse while the purported abuser genuinely had an experience of things seeming okay/copacetic in day to day interactions. Eg "we'll destroy our enemies haha" can unfortunately sit in a grey zone between lightheartedness and abuse, or be the latter masked as the former.
Replies from: Unreal↑ comment by Unreal · 2023-09-16T16:09:50.246Z · LW(p) · GW(p)
After reading more of the article, I have a better sense of this context that you mention. It would be interesting to see Nonlinear's response to the accusations because they seem pretty shameful, as is.
I would actively advise against anyone working with Kat / Emerson, at least not without serious demonstration of reformation and, like, values-level shifts.
If Alice is willing to stretch the truth about her situation (for any reason) or outright lie in order to enact harsher punishment on others, even as a victim of abuse, I would be mistrustful of her story. And so far I am somewhat mistrustful of Alice and very mistrustful of Kat / Emerson.
Also, even if TekhneMakre's take is what in fact happened, it doesn't give Alice a total pass in that particular situation, to me. I get that it's hard to be clear-headed and brave when faced with potentially hostile or adversarial people, but I think it's still worth trying to be. I don't expect anyone to be brave, but I also don't treat anyone as totally helpless, even if the cards are stacked against them.
Replies from: Unreal↑ comment by Unreal · 2023-09-16T16:23:15.304Z · LW(p) · GW(p)
Neither here nor there:
I am sympathetic to "getting cancelled." I often feel like people are cancelled in some false way (or a way that leaves people with a false model), and it's not very fair. Mobs don't make good judges. Even well-meaning, rationalist ones. I feel this way about basically everyone who's been 'cancelled' by this community. Truth and compassion were never fully upheld as the highest virtue, in the end. Justice was never, imo, served, but often used as an excuse for victims to evade taking personal responsibility for something and for rescuers to have something to do. But I still see the value in going through a 'cancelling' process, for everyone involved, and so I'm not saying to avoid it either. It just sucks, and I get it.
That said, the people who are 'cancelled' tend to be stubborn hard-heads about it, and their own obstinacy tends to lead further to an even more extreme downfall. It's like some suicidal part of them kicks in, and drives the knife in deeper without anyone's particular help.
I agree it's good to never just give in to mob justice, but for your own souls to not take damage, try not to clench. It's not worth protecting it, whatever it happens to be.
Save your souls. Not your reputation.
↑ comment by KatWoods (ea247) · 2023-09-07T21:24:28.003Z · LW(p) · GW(p)
Crossposted from the EA Forum:
We definitely did not fail to get her food, so I think there has been a misunderstanding - it says in the texts below that Alice told Drew not to worry about getting food because I went and got her mashed potatoes. Ben mentioned the mashed potatoes in the main post, but we forgot to mention it again in our comment - which has been updated.
The texts involved on 12/15/21:
I also offered to cook the vegan food we had in the house for her.
I think that there's a big difference between telling everyone "I didn't get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!" and "they refused to get me vegan food and I barely ate for 2 days".
Also, re: "because of this professional/personal entanglement" - at this point, Alice was just a friend traveling with us. There were no professional entanglements.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T00:51:07.336Z · LW(p) · GW(p)
I think that there's a big difference between telling everyone "I didn't get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!" and "they refused to get me vegan food and I barely ate for 2 days".
Agreed.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T00:51:14.634Z · LW(p) · GW(p)
This also updates me about Kat's take (as summarized by Ben Pace in the OP):
Kat doesn’t trust Alice to tell the truth, and that Alice has a history of “catastrophic misunderstandings”.
When I read the post, I didn't see any particular reason for Kat to think this, and I worried it might just be an attempt to dismiss a critic, given the aggressive way Nonlinear otherwise seems to have responded to criticisms.
With this new info, it now seems plausible to me that Kat was correct (even though I don't think this justifies threatening Alice or Ben in the way Kat and Emerson did). And if Kat's not correct, I still update that Kat was probably accurately stating her epistemic state, and that a lot of reasonable people might have reached the same epistemic state.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T00:50:18.505Z · LW(p) · GW(p)
(Crossposted)
It also seems totally reasonable that no one at Nonlinear understood there was a problem. Alice's language throughout emphasizes how she'll be fine, it's no big deal [...] I do not think that these exchanges depict the people at Nonlinear as being cruel, insane, or unusual as people.
100% agreed with this. The chat log paints a wildly different picture than what was included in Ben's original post.
Given my experience with talking with people about strongly emotional events, I am inclined towards the interpretation where Alice remembers the 15th with acute distress and remembers it as 'not getting her needs met despite trying quite hard to do so', and the Nonlinear team remembers that they went out of their way that week to get Alice food - which is based on the logs from the 16th clearly true! But I don't think I'd call Alice a liar based on reading this
Agreed. I did update toward "there's likely a nontrivial amount of distortion in Alice's retelling of other things", and toward "normal human error and miscommunication played a larger role in some of the Bad Stuff that happened than I previously expected". (Ben's post was still a giant negative update for me about Nonlinear, but Kat's comment is a smaller update in the opposite direction.)
↑ comment by Irenicon · 2023-09-08T02:17:26.539Z · LW(p) · GW(p)
Cross-posted from the EA Forum thread, mainly because it seems to be a minority opinion and I want to be clear that there are different ways to read these texts:
I think it's telling that Kat thinks the texts speak in their favor. Reading them was quite triggering for me, because I see a scared person who asks the only people she has around her for basic things, to help her in a really difficult situation, and who is made to feel like she is asking for too much, has to repeatedly advocate for herself (while sick), and still doesn't get her needs met. On one hand, she is encouraged by Kat to ask for help, but practically it's not happening. Especially Emerson and Drew in that second thread sounded like they found her difficult, and she was constantly pushed to ask for less or for something else than what she asked for. Seriously, it took 2.5 hours the first day to get a salad, which she didn't want in the first place?! And the second day it's a vegetarian, not vegan, burger.
The way Alice constantly mentions that she doesn't want to bother them, and says that things are fine when they are clearly not, is very upsetting. I can't speak to how Alice felt, but it's no wonder she reports this as not being helped/fed when she was sick. To me, this is accurate, whether or not she got a salad and a vegetarian burger the next day.
Honestly, the burger-gate is a bit ridiculous. Ben did report in the original article that you disputed these claims (with quite a lot of detail) so he reported it accurately. To me, that was enough to not update too much based on this. I don't think it warranted the strongly worded letter to the Lightcone team and the subsequent dramatic claims about evidence that you want to provide to clear your name.
↑ comment by KatWoods (ea247) · 2023-09-09T12:58:00.710Z · LW(p) · GW(p)
The claim in the post was “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.” (Bolding added)
If you look at the chat messages, you’ll see we have screenshots demonstrating that:
1. There was vegan food in the house, which we offered her.
2. I personally went out, while I was sick myself, to buy vegan food for her (mashed potatoes) and cooked it for her and brought it to her.
I would be fine if she told people that she was hungry when she was sick, and she felt sad and stressed. Or that she was hungry but wasn’t interested in any of the food we had in the house and we didn't get her Burger King.
But I think that there's a big difference between telling everyone "I didn't get the food I wanted, but they did get/offer to cook me vegan food, and I told them it was ok!" and "they refused to get me vegan food and I barely ate for 2 days".
I have sympathy for Alice. She was hungry (because of her fighting with a boyfriend [not Drew] in the morning and having a light breakfast) and she was sick. That sucks, and I feel for her. And that’s why I tried (and succeeded) in getting her vegan food.
In summary: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.” (Bolding added) This makes us sound like terrible people.
What actually happened: she was sick and hungry, and we offered to cook or bring over the vegan options in the house, then went out and bought and cooked her vegan food. We tried to take care of our sick friend (she wasn't working for us at the time), and we fed her while she was sick.
I encourage you to read the full post here [EA · GW], where I'm trying to add more details and address more points as they come up.
comment by Elizabeth (pktechgirl) · 2023-09-07T18:16:13.899Z · LW(p) · GW(p)
Several ex-employees have shared positive experiences with Nonlinear or Kat Woods on LW or EAF. I would like to ask those employees for some specifics:
- how explicit were salary negotiations (yours or those you heard about)? It seems like one of the things that went wrong here was extremely informal ~employment agreements, and I'd like to know if that was common practice.
- If negotiations were informal or after the fact (which isn't uncommon in EA), what happened when there was a disagreement? Did it feel like Kat/Emerson/Nonlinear went out of their way to be generous (as this person describes [EA(p) · GW(p)] in their explicit negotiations with Kat at Charity Science Health), or was it very stressful to get even basic needs met (like Ben describes getting medical attention in PR)?
- I can imagine some of the problem lived in Alice and Chloe, and people who were better at advocating for their own needs would have been fine. I would never work in the conditions described here specifically because they would make me bad at getting my own needs met. I would still think this represented a problem on Nonlinear's part, but it's a much smaller problem than if they were deliberately exploitative.
- When did whatever quoted interaction take place, and with what org? Some of the positive information about Kat comes from different orgs several years ago, which I think has some relevance but less than people who worked for nonlinear in the last few years.
- at least two people commented on receiving coaching from Kat and finding it very positive. This isn't irrelevant, but the power dynamics are so different I don't find it that useful.
comment by lc · 2023-09-07T19:16:16.903Z · LW(p) · GW(p)
There are certain claims here that are concretely bad, but they're also mixed in confusingly with what seem like nonsense complaints that are just... the reality of people spending extended time with other people, like:
- "My roommates didn't buy me vegan food while I was sick"
- "Someone gives a lot of compliments to me but I don't think they're being genuine"
- "I feel 'low-value'"
If someone is being defrauded, yeah that's one thing, but I'd rather not litigate "Is Kat/Emerson an asshole" in the court of public opinion.
Replies from: adamzerner, Yarrow Bouchard↑ comment by Adam Zerner (adamzerner) · 2023-09-09T07:27:34.053Z · LW(p) · GW(p)
I disagree. I think the less central complaints that were included in the post provide meaningful context and thus are worth including.
↑ comment by [deactivated] (Yarrow Bouchard) · 2023-11-13T02:47:00.187Z · LW(p) · GW(p)
"Someone gives a lot of compliments to me but I don't think they're being genuine"
Au contraire. This is a common tactic of manipulation and abuse.
"I feel 'low-value'"
I think the point is that they were treated as low-value by their bosses.
Replies from: lc↑ comment by lc · 2023-11-13T02:57:08.515Z · LW(p) · GW(p)
Au contraire. This is a common tactic of manipulation and abuse.
...Is it?
Replies from: Yarrow Bouchard↑ comment by [deactivated] (Yarrow Bouchard) · 2023-11-13T03:20:11.336Z · LW(p) · GW(p)
Yup. See: "love bombing".
comment by orthonormal · 2023-09-07T15:31:57.658Z · LW(p) · GW(p)
Ben, I want to say thank you for putting in a tremendous amount of work, and also for being willing to risk attempts at retaliation when that's a pretty clear threat.
You're in a reasonable position to take this on, having earned the social standing to make character smears unlikely to stick, and having the institutional support to fight a spurious libel claim. And you're also someone I trust to do a thorough and fair job.
I wish there were someone whose opportunity cost were lower who could handle retaliation-threat reporting, but it's pretty likely that anyone with those attributes will have other important opportunities.
Replies from: Benito, tracingwoodgrains↑ comment by Ben Pace (Benito) · 2023-09-07T18:36:44.474Z · LW(p) · GW(p)
You're welcome! I think it was the right thing to do. I'll see whether I regret it all in a month from now...
↑ comment by TracingWoodgrains (tracingwoodgrains) · 2023-12-16T08:32:45.260Z · LW(p) · GW(p)
I respect you and have followed your general commentary with interest for some time. Given that, reviewing this comment section a few months later, I want to explicitly state that I believe you made a number of understandable but major errors in your evaluation of this process, and that you should reevaluate the appropriateness of publishing a one-sided article without adequate error-checking, and of framing requests to correct verifiable, material errors of fact as retaliation, now that the more complete picture is available [LW · GW]. I'm coming at this fresh with the benefit of never having seen this post until the more complete story was out, but given what is now known I believe the publication of and reaction to this post indicate major systemic errors in this sphere.
comment by spencerg · 2023-09-07T07:02:12.502Z · LW(p) · GW(p)
Hi all, I wanted to chime in because I have had conversations relevant to this post with just about all involved parties at various points. I've spoken to "Alice" (both while she worked at nonlinear and afterward), Kat (throughout the period when the events in the post were alleged to have happened and afterward), Emerson, Drew, and (recently) the author Ben, as well as, to a much lesser extent, "Chloe" (when she worked at nonlinear). I am (to my knowledge) on friendly terms with everyone mentioned (by name or pseudonym) in this post. I wish well for everyone involved. I also want the truth to be known, whatever the truth is.
I was sent a nearly final draft of this post yesterday (Wednesday), once by Ben and once by another person mentioned in the post.
I want to say that I find this post extremely strange for the following reasons:
(1) The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious despite ~150 hours being spent on this investigation. This makes it harder for me to take at face value the parts of the post that I have no knowledge of. Why am I, an outsider on this whole thing, finding serious errors in the final hours before publication? That's not to say everything in the post is inaccurate, just that I was disturbed to see serious inaccuracies, and I have no idea why nobody caught these (I really don't feel like I should be the one to correct mistakes, given my lack of involvement, but it feels important to me to comment here since I know there were inaccuracies in the piece, so here we are).
(2) Nonlinear reached out to me and told me they have proof that a bunch of claims in the post are completely false. They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.
This really confuses me. Clearly, Ben spent a huge amount of time on this post (which has presumably involved weeks or months of research), so why not wait one additional week for Nonlinear to provide what they say is proof that his post contains substantial misinformation? Of course, if the evidence provided by nonlinear is weak, he should treat it as such, but if it is strong, it should also be treated as such. I struggle to wrap my head around the decision not to look at that evidence. I am also confused why Ben, despite spending a huge amount of time on this research, apparently didn't seek out this evidence from Nonlinear long ago.
To clarify: I think it’s very important in situations like this not to let the group being criticized have a way to delay publication indefinitely. If I were in Ben’s shoes, I believe what I would have done is say something like, “You have exactly one week to provide proof of any false claims in this post (and I’ll remove any claim you can prove is false) then I’m publishing the post no matter what at that time.” This is very similar to the policy we use for our Transparent Replications project (where we replicate psychology results of publications in top journals), and we have found it to work well. We give the original authors a specific window of time during which they can point out any errors we may have made (which is at least a week). This helps make sure our replications are accurate, fair, and correct, and yet the teams being replicated have no say over whether the replications are released (they always are released regardless of whether we get a response).
It seems to me that basic norms of good epistemics require that, on important topics, you look at all the evidence that can be easily acquired.
I also think that if you publish misinformation, you can't just undo it by updating the post later or issuing a correction. Sadly, that's not the way human minds/social information works. In other words, misinformation can't be jammed back into the bottle once it is released. I have seen numerous cases where misinformation is released only later to be retracted, in which the misinformation got way more attention than the retraction, and most people came away only with the misinformation. This seems to me to provide a strong additional reason why a small delay in the publication date appears well worth it (to me, as an outsider) to help avoid putting out a post with potentially substantial misinformation. I hope that the lesswrong/EA communities will look at all the evidence once it is released, which presumably will be in the next week or so, in order to come to a fair and accurate conclusion (based on all the evidence, whatever that accurate final conclusion turns out to be) and do better than these other cases I’ve witnessed where misinformation won the day.
Of course, I don't know Ben's reason for jumping to publish immediately, so I can't evaluate his reasons directly.
Disclaimer: I am friends with multiple people connected to this post. As a reminder, I wish well for everyone involved, and I wish for the truth to be known, whatever that truth happens to be. I have acted (informally) as an advisor to nonlinear (without pay) - all that means, though, is that every so often, team members there will reach out to me to ask for my advice on things.
Note: I've updated this comment a few times to try to make my position clearer, to add some additional context, and to fix grammatical mistakes.
Replies from: habryka4, DanielFilan, xarkn↑ comment by habryka (habryka4) · 2023-09-07T08:17:30.647Z · LW(p) · GW(p)
I don't have all the context of Ben's investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don't feel super sympathetic to requests to delay publication:
In this case, it seems to me that there is a large and substantial threat of retaliation. My guess is Ben's sources were worried about Emerson hiring stalkers, calling their family, trying to get them fired from their job, or threatening legal action. Having things be out in the public can provide a defense because it is much easier to ask for help if the conflict happens in the open.
As a concrete example, Emerson has just sent me an email saying:
Given the irreversible damage that would occur by publishing, it simply is inexcusable to not give us a bit of time to correct the libelous falsehoods in this document, and if published as is we intend to pursue legal action for libel against Ben Pace personally and Lightcone for the maximum damages permitted by law. The legal case is unambiguous and publishing it now would both be unethical and gross negligence, causing irreversible damage.
For the record, the threat of libel suit and use of statements like "maximum damages permitted by law" seem to me to be attempts at intimidation. Also, as someone who has looked quite a lot into libel law (having been threatened with libel suits many times over the years), describing the legal case as "unambiguous" seems inaccurate and a further attempt at intimidation.
My guess is Ben's sources have also received dozens of calls (as I have received many in the last few hours), and I wouldn't be surprised to hear that Emerson called up my board, or would otherwise try to find some other piece of leverage against Lightcone, Ben, or Ben's sources if he had more time. While I am not that worried about Emerson, I think many other people are in a much more vulnerable position, and I can really resonate with not wanting to give someone an opportunity to gather their forces (and in that case I think it's reasonable to force the conflict out in the open, which is far from an ideal arena, but does provide protection against many types of threats and adversarial action).
Separately, the time investment for things like this is really quite enormous and I have found it extremely hard to do work of this type in parallel to other kinds of work, especially towards the end of a project like this, when the information is ready for sharing, and lots of people have strong opinions and try to pressure you in various ways. Delaying by "just a week" probably translates into roughly 40 hours of productive time lost, even if there isn't much to do, because it's so hard to focus on other things. That's just a lot of additional time, and so it's not actually a very cheap ask.
Lastly, I have also found that the standard way that abuse in the extended EA community has been successfully prevented from being discovered is by forcing everyone who wants to publicize or share any information about it to jump through a large number of hoops. Calls for "just wait a week" and "just run your posts by the party you are criticizing" might sound reasonable in isolation, but very quickly multiply the cost of any information sharing, and have huge chilling effects that prevent the publishing of most information and accusations. Asking the other party to just keep doing a lot of due diligence is easy and successful and keeps most people away from doing investigations like this.
As I have written about before, I myself ended up being intimidated by this for the case of FTX and chose not to share my concerns about FTX more widely, which I continue to consider one of the worst mistakes of my career.
My current guess is that if it is indeed the case that Emerson and Kat have clear proof that a lot of the information in this post is false, then I think they should share that information publicly. Maybe on their own blog, or maybe here on LessWrong or on the EA Forum. It is also the case that rumors about people having had very bad experiences working with Nonlinear are already circulating around the community and this is already having a large effect on Nonlinear, and as such, having clear accusations to respond to should help them clear their name, if the accusations are indeed false.
I agree that this kind of post can be costly, and I don't want to ignore the potential costs of false accusations, but at least to me it seems like I want an equilibrium of substantially more information sharing, where we put more trust in people's ability to update their models of what is going on, and put less weight on the paternalistic "people are incapable of updating if we present proof that the accusations are false", especially given what happened with FTX and the costs we have observed from failing to share observations like this.
A final point that feels a bit harder to communicate is that in my experience, some people are just really good at manipulation, throwing you off-balance, and distorting your view of reality, and this is a strong reason to not commit to run everything by the people you are sharing information on. A common theme that I remember hearing from people who had concerns about SBF is that people intended to warn other people, or share information, then they talked to SBF, and somehow during that conversation he disarmed them, without really responding to the essence of their concerns. This can take the form of threats and intimidation, or the form of just being really charismatic and making you forget what your concerns were, or more deeply ripping away your grounding and making you think that your concerns aren't real, and that actually everyone is doing the thing that seems wrong to you, and you are going to out yourself as naive and gullible by sharing your perspective.
[Edit: The closest post we have to setting norms on when to share information with orgs you are criticizing is Jeff Kaufman's post [EA · GW] on the matter. While I don't fully agree with the reasoning within it, in there he says:
Sometimes orgs will respond with requests for changes, or try to engage you in private back-and-forth. While you’re welcome to make edits in response to what you learn from them, you don’t have an obligation to: it’s fine to just say “I’m planning to publish this as-is, and I’d be happy to discuss your concerns publicly in the comments.”
[EDIT: I’m not advocating this for cases where you’re worried that the org will retaliate or otherwise behave badly if you give them advance warning, or for cases where you’ve had a bad experience with an org and don’t want any further interaction. For example, I expect Curzi didn’t give Leverage an opportunity to prepare a response to My Experience with Leverage Research, and that’s fine.]
This case seems to me to be fairly clearly covered by the second paragraph, and also, Nonlinear's response to "I am happy to discuss your concerns publicly in the comments" was to respond with "I will sue you if you publish these concerns", to which IMO the reasonable response is to just go ahead and publish before things escalate further. Separately, my sense is Ben's sources really didn't want any further interaction and really preferred having this over with, which I resonate with, and is also explicitly covered by Jeff's post.
So in as much as you are trying to enforce some kind of existing norm that demands running posts like this by the org, I don't think that norm currently has widespread buy-in, as the most popular and widely-quoted post on the topic does not demand that standard (I separately think the post is still slightly too much in favor of running posts by the organizations they are criticizing, but that's for a different debate).]
Replies from: jkaufman, EmersonSpartz↑ comment by jefftk (jkaufman) · 2023-09-07T17:11:32.813Z · LW(p) · GW(p)
This case seems to me to be fairly clearly covered by the second paragraph, and also, Nonlinear's response to "I am happy to discuss your concerns publicly in the comments" was to respond with "I will sue you if you publish these concerns"
agreed
Replies from: EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T21:23:52.343Z · LW(p) · GW(p)
In case it wasn't clear, we didn't say 'don't publish', we said 'don't publish until we've had a week to gather and share the evidence we have':
↑ comment by jefftk (jkaufman) · 2023-09-07T23:26:29.277Z · LW(p) · GW(p)
I'm trying to support two complementary points:
- The norm I've been pushing of sharing things with EA organizations ahead of time is only intended for cases where you have a neutral or better relationship with the organization, and not situations like this one where there are allegations of mistreatment, or you don't trust them to behave cooperatively.
- A threat to sue if changes are not made to the text of the post is not cooperative.
↑ comment by Rebecca (bec-hawk) · 2023-09-07T22:03:41.626Z · LW(p) · GW(p)
You say "if published as is", not "if published now". Is what you're saying in the comment that, if Ben had waited a week and then published the same post, unedited, you would not want to sue? That is not what is conveyed in the email.
Replies from: EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T22:42:36.283Z · LW(p) · GW(p)
Yes, that is what I intended to communicate here, and I was worried people might think I was trying to suppress the article so I bolded this request to ensure people didn't misunderstand:
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2023-09-07T23:19:08.640Z · LW(p) · GW(p)
For what it's worth, I also interpreted the "if published as is" as "if you do not edit the post to no longer be libelous" and not "if you do not give us a week to prepare a contemporaneous rebuttal".
I think if you wanted to reliably communicate that you were not asking for changes to the text of the post, you would have needed to be explicit about that?
↑ comment by Martin Randall (martin-randall) · 2023-09-10T02:28:06.266Z · LW(p) · GW(p)
Please don't post screenshots of comments that include screenshots of comments. It is harder to read and to search and to reply. You can just quote the text, like habryka did above.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-10T07:34:16.918Z · LW(p) · GW(p)
Consider that making it harder to search for the text may be the whole point of posting a screenshot.
↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T09:05:43.191Z · LW(p) · GW(p)
There is a reason courtrooms give both sides equal chances to make their case before they ask the jury to decide.
It is very difficult for people to change their minds later, and most people assume that if you’re on trial, you must be guilty, which is why judges remind juries about “innocent before proven guilty”.
This is one of the foundations of our legal system, something we learned over thousands of years of trying to get better at justice. You’re just assuming I’m guilty and saying that justifies not giving me a chance to present my evidence.
Also, if we post another comment thread a week later, who will see it? EAF/LW don’t have sufficient ways to resurface old but important content.
Re: “my guess is Ben’s sources have received dozens of calls” - well, your guess is wrong, and you can ask them to confirm this.
You also took my email strategically out of context to fit the Emerson-is-a-horned-CEO-villain narrative. Here’s the full one:
↑ comment by lberglund (brglnd) · 2023-09-07T09:10:27.700Z · LW(p) · GW(p)
Also, if we post another comment thread a week later, who will see it? EAF/LW don’t have sufficient ways to resurface old but important content.
This doesn't seem like an issue. You could instead write a separate post a week later which has a chance of gaining traction.
Replies from: Viliam, ea247↑ comment by Viliam · 2023-09-07T10:24:10.792Z · LW(p) · GW(p)
Yep. Posts critical of Less Wrong are often highly upvoted on Less Wrong, so I'd say a good defense (one containing factual statements, not just "this is 100% wrong and I will sue you") has like 80% chance to get 100 or more karma.
I didn't understand the part about "resurfacing old content", but one can simply link the old article from the new one, and ask moderators to link the new article from the old one. (The fact that the new article will be on the front page but the old one will no longer be there, seems to work in favor of the new article.) Even if moderators for some mysterious reason refused to make the link, a comment under the old article saying "there is a response from Nonlinear" with a link would probably be highly upvoted.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-07T18:20:04.463Z · LW(p) · GW(p)
Oli's comment is a good summary of my relevant concerns! And I'm definitely happy to link prominently to any response by Nonlinear, and make edits if things are shown to be false.
As well as a bunch of other reasons already mentioned (and some not), another one is that most of the things they proposed to show me didn't seem that cruxy to me? Maybe a few of the stories are wrong, but I believe the people were really very hurt by their time at Nonlinear, and I believe both were quite credibly intimidated, and I'm pretty sure a lot of folks in the relevant ecosystems would like to know if I believe that. When we talked Nonlinear mostly wanted to say that Alice told lies about things like why she quit being vegan, but even if that's true tons of my evidence doesn't come from Alice or from her specific stories, so the delay request didn't seem like it would likely change my mind. Maybe it will, but I think it's more important to say when I believe that terrible behavior has occurred, so I didn't feel beholden to delay for them.
↑ comment by KatWoods (ea247) · 2023-09-07T10:24:40.371Z · LW(p) · GW(p)
Yes, we intend to. But given that our comments just asking for people to withhold judgment are getting downvoted, that doesn’t bode well for future posts getting enough upvotes to be seen.
It's going to take us at least a week to gather all the evidence, then it will take a decent amount of time to write up.
In the meantime, people have heard terrible things about us and nobody's a perfect rationalist who will simply update. Once you've made up your mind about somebody, it can be really hard to change.
Additionally, once things are on the internet, they're usually there for good. Now it might be that the first thing people find when looking up Nonlinear is this post, even if we do disprove the claims.
A post that would most likely have been substantially different if he'd seen all of our evidence first. He already made multiple updates to the post based on the things we shared, and he would have made far more if he had given us the chance to actually present our evidence.
Not to mention that now that he's published this and sent them money, it's psychologically difficult for him to update.
Replies from: AprilSR, bec-hawk↑ comment by Rebecca (bec-hawk) · 2023-09-07T10:40:47.534Z · LW(p) · GW(p)
You could possibly do a more incremental version of this, e.g. link to a Google Drive where you upload the pieces of evidence as you find them? That way people could start updating right away rather than waiting until everything's been put together. And then you could add a comment linking to the write-up when it's done.
↑ comment by TekhneMakre · 2023-09-07T19:10:55.027Z · LW(p) · GW(p)
I want to note a specific pattern that I've noticed. I am not commenting on this particular matter overall; the events with Nonlinear may or may not be an instance of the pattern. It goes like this:
- Fred does something unethical / immoral.
- People start talking about how Fred did something bad.
- Fred complains that people should not be talking the way they are talking, and Fred specifically invokes the standard of the court system, saying stuff like "there's a reason courts presume innocence / allow the accused to face the accuser / give a right to a defense attorney / have discovery / have the right to remain silent / right to avoid incriminating oneself / etc. etc.".
Fred's implication is that people shouldn't be talking the way they're talking because it's unjust.
... Of course, this pattern could also happen when step 1 is Fred not doing something bad; and either way, maybe Fred is right... But I suspect that in reality, Fred uses this as a way of making isolated demands for rigor.
↑ comment by Adam Zerner (adamzerner) · 2023-09-09T04:20:08.822Z · LW(p) · GW(p)
You also took my email strategically out of context to fit the Emerson-is-a-horned-CEO-villain narrative. Here’s the full one:
I don't get that impression. Nothing in the full one stands out to me as important context that would meaningfully change anything.
↑ comment by Adam Zerner (adamzerner) · 2023-09-09T04:33:55.255Z · LW(p) · GW(p)
You seem to be disregarding other considerations at play here.
Zooming out, if we forget about the specifics of this situation and instead think about the more general question of whether or not one should honor requests to delay such publications, one consideration is wanting to avoid unjustifiably harming someone's reputation (in this case yours, Kat's, and Nonlinear's).
But I think habryka lists some other important considerations too in his comment [LW(p) · GW(p)]:
- Guarding against retaliation
- Guarding against lost productivity
- Guarding against reality-distortion fields
Personally, I don't have strong feelings about where the equilibrium should be here. However, I do feel strongly that the discussion needs to look at the considerations on both sides.
Also, I raise my eyebrow a fair bit at those who do have strong feelings about where the equilibrium should be. At least if they haven't thought about it for many hours. It strikes me as a genuinely difficult task to enumerate and weigh the considerations at play.
Replies from: spencerg↑ comment by spencerg · 2023-09-09T10:45:21.913Z · LW(p) · GW(p)
If we want to look at general principles rather than specific cases: had the original post not contained a bunch of serious misinformation (according to evidence that I have access to), I would have been much more sympathetic to not delaying.
But the combination of serious misinformation + being unwilling to delay a short period to get the rest of the evidence I find to be a very bad combination.
I also don’t think the retaliation point is a very good one, as refusing to delay doesn’t actually prevent retaliation.
I don’t find the lost productivity point particularly strong, given that this was already a major investigation involving something like 150 hours of work. In that context, another 20 hours carefully reviewing evidence seems minimal (if it’s worth ~150 hours to investigate, it’s presumably worth 170 to ensure it’s accurate).
Guarding against reality distortion fields is an interesting point I hadn’t thought of until Oliver brought it up. However, it doesn’t seem (correct me if I’m wrong) that Ben felt swayed away from posting after talking to Nonlinear for 3 hours; if that’s true, then it doesn’t seem like much of a concern here. I also think pre-committing to a release date helps a bit with that.
↑ comment by DanielFilan · 2023-09-07T18:25:24.906Z · LW(p) · GW(p)
The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious
Could you share examples of these inaccuracies?
Replies from: DanielFilan↑ comment by DanielFilan · 2023-09-07T20:37:31.124Z · LW(p) · GW(p)
Spencer responded [EA(p) · GW(p)] to a similar request in the EA forum. Copy-pasting the response here in quotes, but for further replies etc. I encourage readers to follow the link:
Yes, here are two examples; sorry I can’t provide more detail:
- there were claims in the post made about Emerson that were not actually about Emerson at all (they were about his former company years after he left). I pointed this out to Ben hours before publication and he rushed to correct it (in my view it’s a pretty serious mistake to make false accusations about a person, I see this as pretty significant)!
- there was also a very disparaging claim made in the piece (I unfortunately can’t share the details for privacy reasons; but I assume nonlinear will later) that was quite strongly contradicted by a text message exchange I have
↑ comment by xarkn · 2023-09-07T13:16:21.688Z · LW(p) · GW(p)
You are not directly vouching for anyone here, but as a general point I'd like to argue that friendship is a poor predictor of ethical behavior.
It may be tempting to consider positive social experiences and friendship as evidence that someone behaves generally ethically and with high standards, but when dealing with more capable people, it's not. Maintaining ethical behavior and building trust in a low-stakes setting like friendship, where there are few temptations to exploit anyone for profit, is trivially easy. Especially if you are socially skilled and capable of higher-level power games and manipulation. The cutthroat moves are saved exclusively for situations where the profits are large enough.
(And a skilled manipulator will rarely engage in obviously cutthroat moves anyway, because the cost of being outed as an unethical cutthroat is high enough to outweigh the potential profit of most situations.)
Because you're someone with influence in the community, anyone with a manipulative bent and any smarts will absolutely give you their best impression. You have more value as an ally, and probably provide few opportunities for direct profit otherwise.
↑ comment by Viliam · 2023-09-07T20:05:26.385Z · LW(p) · GW(p)
Following this tangent, I would say that judging other people is a skill. Some people are better at it, some are worse, and the Dunning–Kruger effect very likely applies. Learning this skill is both explicit (what to notice) and implicit (you get burned -- you learn what to fear).
Examples of explicit lessons:
- Notice how the person treats people other than you -- very likely, they will treat you the same in the future, when they no longer need to impress you. Similarly, if the person tells you to treat other people badly, in the future they will probably do the same to you, or tell other people to do it.
- Sometimes there are good excuses for seemingly bad behavior, but you should make a factual list of what the person actually did (not what they said; not what other people did) and seriously consider the hypothesis that this is what they actually are, and everything else is just bullshit you want to believe.
I also think that manipulators are often repetitive and use relatively simple strategies. (No disrespect meant here; a flawless execution of a simple strategy is a powerful weapon.) For example, they ask you what is the most important thing you want to achieve in your life, and later they keep saying "if you want {the thing you said}, you have to {do what I want now}". These strategies are probably taught somewhere; they also copy them from each other; and some natural talents may reinvent them on their own.
If you want to extract resources from people (money, work, etc.), it is often a numbers game. You do not need a 100% success rate. It is much easier to have a quick way to preselect vulnerable victims, then do something with a 10% success rate in the preselected set, and then approach 10 victims.
The idea of someone who behaves ethically for years, and then stabs you in the back at the optimal moment, sounds unlikely to me. How would a person achieve such high skill, if they never practice it? It seems more likely to me that someone would practice unethical behavior in low-stakes situations, and when they get reliably good, they increase the stakes (perhaps suddenly). To avoid a bad reputation, there are two basic strategies: either keep regularly moving to new places and meeting new people who don't know you and don't know anyone who knows you; or only choose victims you can successfully silence.
comment by jimrandomh · 2023-09-07T09:44:23.245Z · LW(p) · GW(p)
So, Nonlinear-affiliated people are here in the comments disagreeing, promising proof that important claims in the post are false. I fully expect that Nonlinear's response, and much of the discussion, will be predictably shoved down the throat of my attention, so I'm not too worried about missing the rebuttals, if rebuttals are in fact coming.
But there's a hard-won lesson I've learned by digging into conflicts like this one, which I want to highlight, which I think makes this post valuable even if some of the stories turn out to be importantly false:
If a story is false, the fact that the story was told, and who told it, is valuable information. Sometimes it's significantly more valuable than if the story was true. You can't untangle a web of lies by trying to prevent anyone from saying things that have falsehoods embedded in them. You can untangle a web of lies by promoting a norm of maximizing the available information, including indirect information like who said what.
Think of the game Werewolf, as an analogy. Some moves are Villager strategies, and some moves are Werewolf strategies, in the sense that, if you notice someone using the strategy, you should make a Bayesian update in the direction of thinking the person using that strategy is a Villager or is a Werewolf.
Replies from: Linch, frontier64, jaan↑ comment by Linch · 2023-09-09T02:42:54.091Z · LW(p) · GW(p)
As I mentioned to you before, I suspect werewolf/mafia/avalon is a pretty bad analogy for how to suss out the trustworthiness of people irl:
- in games, the number of werewolves etc is often fixed and known to all players ahead of time; irl a lot of the difficulty is figuring out whether (and how many) terminally bad actors exist, vs honest misunderstandings, vs generically suss people.
- random spurious accusations with zero factual backing are usually considered town/vanilla/arthurian moves in werewolf games; irl this breeds chaos and is a classic DARVO tactic.
- In games, the set of both disputed and uncontested facts are discrete and often small; this is much less the case irl.
- in games, bad guys have a heavy incentive to be uncorrelated (and especially to be seen as being uncorrelated) early on; irl there are very few worlds where regularly agreeing with the now-known-to-be-bad-actors is a positive update on your innocence.
- EDIT: Rereading this comment, I think it was unclear. Basically in games, if we know Alice and Bob seem in-sync, (eg they vote similarly, often go on the same missions), if we later learn that Alice is definitely evil, this is not always an update that Bob is evil. (and in some fairly common scenarios, it's actually a positive update on Bob's innocence).
- This almost never happens in real life.
- Similarly, if Alice repeatedly endorses Bob, and we later learn Alice is evil, irl we can often write off Alice's endorsements of Bob. In games, there are sometimes structural incentives such that Alice's endorsements of Bob are more trustworthy when Alice is evil (Good guys are often innocent/clueless, bad guys know a lot of information, bad guys usually don't want to be paired with other bad guys).
- In games, the set of actions available to both good and bad actors are well-defined and often known in advance; irl does not have this luxury.
- etc
All these points, but especially the second one, mean that people should be very hesitant to generalize hard-won lessons about macrolevel social dynamics from social deception games to real life.
Replies from: EliasSchmied, johnswentworth↑ comment by Elias Schmied (EliasSchmied) · 2023-09-10T17:33:11.765Z · LW(p) · GW(p)
random spurious accusations with zero factual backing are usually considered town/vanilla/arthurian moves in werewolf games; irl this breeds chaos and is a classic DARVO tactic.
In my experience this is only true for beginner play (where werewolves are often too shy to say anything), and in advanced play it is a bad guy tactic for the same reasons as IRL. Eg I think in advanced Among Us lobbies it's an important skill to subtly push an unproductive thread of conversation without making it obvious that you were the one who distracted everybody.
It's not clear/concrete to me in what ways points 3 and 5 are supposed to invalidate the analogy.
in games, bad guys have a heavy incentive to be uncorrelated (and especially to be seen as being uncorrelated); irl there are very few worlds where regularly agreeing with the now-known-to-be-bad-actors is a positive update on your innocence.
I don't understand this - it reads to me like you're saying a similar thing is true for the game and real life? But that goes against your position.
Replies from: Linch, Linch↑ comment by Linch · 2023-09-11T19:10:18.273Z · LW(p) · GW(p)
Eg I think in advanced Among Us lobbies it's an important skill to subtly push an unproductive thread of conversation without making it obvious that you were the one who distracted everybody.
I'm not much of an avid Among Us player, but I suspect this only works in Among Us because of the (much) heavier-than-usual time pressures. In the other social deception games I'm aware of, the structural incentives continue to point in the other direction, so the main reason for bad guys to make spurious accusations is anti-inductive (if everybody knows that spurious accusations are a vanilla tactic, then obviously spurious accusation becomes a good "bad guy" play to fake being good).
↑ comment by Linch · 2023-09-11T00:24:04.801Z · LW(p) · GW(p)
I don't understand this - it reads to me like you're saying a similar thing is true for the game and real life? But that goes against your position.
Sorry that was awkwardly worded. Here's a simplified rephrase:
In games, bad guys want to act and look unlike each other. In real life, if you often agree with known bad actors, most people will think you're bad too.
Put in a different way, because of the structure of games like Avalon (it's ~impossible for all the bad guys to not be found out, minions know who each other are, all minions just want their "team" to win so having sacrificial lambs makes sense, etc), there are often equilibria where in even slightly advanced play, minions (bad guys) want to be seen as disagreeing with other minions earlier on. So if you find someone disagreeing with minions a lot (in voting history etc), especially in non-decision-relevant ways, this is not much evidence one way or another (and in some cases might even be negative evidence on their goodness). Similarly, if Mildred constantly speaks highly of you, and we later realize that Mildred is a minion, this shouldn't be a negative update on you (and in some cases is a positive), because minions often have structural reasons to praise/bribe good guys. At higher levels obviously people become aware of this dynamic so there's some anti-inductive play going on, but still. Frequently the structural incentives prevail.
In real life there's a bit of this dynamic but the level one model ("birds of a feather flock together") is more accurate, more of the time.
↑ comment by johnswentworth · 2023-09-12T01:57:33.066Z · LW(p) · GW(p)
This is very tangential, but: if that's your experience with e.g. one night ultimate werewolf, then I strongly recommend changing the mix of roles so that the numbers on each side are random and the werewolf side ends up in the majority a nontrivial fraction of the time. Makes the game a lot more fun/interesting IMO, and negates some of the points you list about divergence between such games and real life.
Replies from: Linch↑ comment by frontier64 · 2023-09-08T20:03:42.993Z · LW(p) · GW(p)
The game theory behind Werewolf goes deeper than that. Werewolf is an iterated game: if you play it at least once on a Friday, you're probably playing at least four more times in succession. A good way to pick up whether someone is a Villager or a Baddie is to notice how their behavior during the game correlates with their revealed role at the end of the game.
Alice is a noob player and is always quiet when she's a Baddie and talkative and open when she's a Villager. She's giving off easy tells that an observant player like Bob picks up on. He can then notice these tells while in the middle of a game and exploit them to win more against Alice.
Declan is a more skilled but somewhat new player. He is open and talkative regardless of his role. This makes it very easy for him to play Villager but he struggles to win when a Baddie because his open behavior leads to him often being caught out on provable lies.
Carol is a sophisticated Werewolf player. Each game she is maximizing not just to win that game, but to also win future games against the same players. Carol knows that she is the most sophisticated player in her group. When she's a Villager she can figure out which other players are Baddies much more often than the other Villagers. Her best plan as Villager then is to convince the other Villagers that her reads and analysis are correct without regard to the truthfulness of her persuasive strategies. Some people notice that she's not being 100% truthful and call it out as Werewolf behavior, but everyone at the table acknowledges that this is just how Carol plays and sometimes she lies even as a Villager. This serves her well in her next game as a Baddie where she uses the same tactics and doesn't give away any tells. Carol is no more suspicious or less open about her own info on average as a Baddie than as a Villager.
Replies from: Linch, Viliam↑ comment by Linch · 2023-09-09T18:38:37.961Z · LW(p) · GW(p)
Errol is a Logical Decision Theorist. Whenever he's playing a game of Werewolf, he's trying to not just win that game, but to maximize his probability of winning across all versions of the game, assuming he's predictable to other players. Errol firmly commits to reporting whether he's a werewolf whenever he gets handed that role, reasoning that behind the veil of ignorance, he's much more likely to land as villager than as werewolf, and that villager team always having a known villager greatly increases his overall odds of winning. Errol follows through with his commitments. Errol is not very fun to play with and has since been banned from his gaming group.
↑ comment by Viliam · 2023-09-09T20:25:46.914Z · LW(p) · GW(p)
Each game she is maximizing not just to win that game, but to also win future games against the same players.
This sounded really wrong to me. Like, what is the analogy in real life? I am a good guy today, but I predict that I may become a criminal tomorrow, so I am already optimizing to make it difficult to figure out?
But I suppose, in real life, circumstances also change, so things that are not criminal today may become criminal tomorrow, so you can be a good guy today and also optimize to make yourself safe when the circumstances change, even if your values won't.
↑ comment by jaan · 2023-09-08T10:18:22.350Z · LW(p) · GW(p)
the werewolf vs villager strategy heuristic is brilliant. thank you!
Replies from: jimrandomh↑ comment by jimrandomh · 2023-09-08T17:51:39.904Z · LW(p) · GW(p)
Credit to Benquo's writing [LW · GW] for giving me the idea.
comment by orthonormal · 2023-09-07T20:00:04.267Z · LW(p) · GW(p)
I'm surprised (unless I've missed it) that nobody has explicitly pointed out the most obvious reason to take the responses of the form "Kat/Emerson/Drew have been really good to me personally" as very weak evidence at best.
The allegations imply that in the present situation, Kat/Emerson/Drew would immediately tell anyone in their orbit to come and post positive testimonials of them under promises of reward or threat of retaliation (precisely as the quoted Glassdoor review says).
P(generic positive testimonials | accusation true) ≈ P(generic positive testimonials | accusation false).
The only thing that would be strong evidence against the claims here would be direct counterevidence to the claims in the post. Everything else so far is a smokescreen.
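In odds form (a rough sketch; here $T$ is shorthand for "generic positive testimonials appear", and true/false refer to whether the accusations hold):

$$\underbrace{\frac{P(\text{true}\mid T)}{P(\text{false}\mid T)}}_{\text{posterior odds}} \;=\; \underbrace{\frac{P(T\mid \text{true})}{P(T\mid \text{false})}}_{\text{likelihood ratio}\;\approx\;1} \times \underbrace{\frac{P(\text{true})}{P(\text{false})}}_{\text{prior odds}}$$

With a likelihood ratio near 1, the posterior odds stay roughly equal to the prior odds, so the testimonials shift beliefs very little in either direction.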
Replies from: david-mears, Zack_M_Davis↑ comment by David Mears (david-mears) · 2023-09-08T00:24:03.052Z · LW(p) · GW(p)
The currently top comment on the EA Forum copy of this post says that at least one person who wrote a positive testimonial was asked to leave a comment by Nonlinear (but they didn’t say it had to be positive) https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=kqQK2So3L5NJKEcYE [EA(p) · GW(p)]
Replies from: AprilSR↑ comment by Zack_M_Davis · 2023-09-07T20:18:42.144Z · LW(p) · GW(p)
Kat/Emerson/Drew would immediately tell anyone in their orbit to come and post positive testimonials of them under promises of reward or threat of retaliation
A loyal friend would also post positive testimonials, without promises or threats. (But I agree that this is weak evidence about allegations regarding behavior towards other people.)
Replies from: orthonormal↑ comment by orthonormal · 2023-09-07T20:22:38.923Z · LW(p) · GW(p)
Which is why I said that the probabilities are similar, rather than claiming the left side exceeds the right side.
comment by Elizabeth (pktechgirl) · 2023-09-07T18:19:06.874Z · LW(p) · GW(p)
I generally got a sense from speaking with many parties that Emerson Spartz and Kat Woods respectively have very adversarial and very lax attitudes toward legalities and bureaucracies, with the former trying to do as little as possible that is asked of him
Could you give more detail here? I feel like "viewing bureaucracies as obstacles to be maneuvered around" is not particularly uncommon in EA and rationality, including at Lightcone, so I assume you mean something more than that.
comment by geoffreymiller · 2023-09-07T19:08:15.005Z · LW(p) · GW(p)
A brief note on defamation law:
The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations -- especially negative things that would stick in the readers'/listeners' minds in ways that would be very hard for subsequent corrections or clarifications to counteract.
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), to nudge the author (e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line. If we never use defamation law for its intended purpose, we're being very naive about the profound costs of libel and slander to those who might be falsely accused.
EA Forum is a very active public forum, where accusations can have very high stakes for those who have devoted their lives to EA. We should not expect that EA Forum should be completely insulated from defamation law, or that posts here should be immune to libel suits. Again, the whole point of libel suits is to encourage very high epistemic standards when people are making career-ruining and organization-ruining claims.
(Note: I've also cross-posted this to EA Forum here [EA · GW] )
Replies from: habryka4, jimrandomh, RobbBB, Viliam, aphyer↑ comment by habryka (habryka4) · 2023-09-07T23:04:23.715Z · LW(p) · GW(p)
(Copying my response from the EA Forum)
I agree there are some circumstances under which libel suits are justified, but the net effect of the availability of libel suits strikes me as extremely negative for communities like ours, and I think it's very reasonable to have very strong norms against threatening or going through with these kinds of suits. Just because an option is legally available doesn't mean that a community has to be fine with that option being pursued.
That is the whole point and function of defamation law: to promote especially high standards of research, accuracy, and care when making severe negative comments. This helps promote better epistemics, when reputations are on the line.
This, in-particular, strikes me as completely unsupported. The law does not strike me as particularly well-calibrated about what promotes good communal epistemics, and I do not see how preventing negative evidence from being spread, which is usually the most undersupplied type of evidence already, helps "promote better epistemics". Naively the prior should be that when you suppress information, you worsen the accuracy of people's models of the world.
As a concrete illustration of this, libel laws in the U.S. and the U.K. function very differently. It seems to me that U.S. law has much better effects on public discourse, by being substantially harder to actually make happen. It is also very hard to sue someone in a foreign court for libel (e.g. a US citizen suing a German citizen is very hard).
This means we can't have a norm that generically permits libel suits, since U.K. libel suits follow a very different standard than U.S. ones, and we have to decide for ourselves where our standards for information control like this lie.
IMO, both U.S. and UK libel suits should be very strongly discouraged, since I know of dozens of cases where organizations and individuals have successfully used them to prevent highly important information from being propagated, and I can think of approximately no case where they did something good (instead organizations that frequently have to deal with libel suits mostly just leverage loopholes in libel law that give them approximate immunity, even when making very strong and false accusations, usually with the clarity of the arguments and the transparency of the evidence taking a large hit).
Replies from: Randaly↑ comment by Randaly · 2023-09-08T05:24:30.799Z · LW(p) · GW(p)
IMO, both U.S. and UK libel suits should be very strongly discouraged, since I know of dozens of cases where organizations and individuals have successfully used them to prevent highly important information from being propagated, and I can think of approximately no case where they did something good (instead organizations that frequently have to deal with libel suits mostly just leverage loopholes in libel law that give them approximate immunity, even when making very strong and false accusations, usually with the clarity of the arguments and the transparency of the evidence taking a large hit).
(Unavoidably political, as lawsuits often are)
A central example of the court system broadly, and libel lawsuits narrowly, promoting better epistemics is the handling of the allegations that the 2020 election was fraudulent.
It is certainly not true that there are always loopholes that give immunity; see e.g. Fox News' very expensive settlement in Dominion v. Fox News.
More broadly: "Trump, his attorneys, and his supporters falsely asserted widespread election fraud in public statements, but few such assertions were made in court." The false allegations of fraud were dependent on things like hearsay, false claims that opponents weren't given a chance to respond to, and vague or unsupported claims; virtually all discussion on the internet, and this post in particular, features all three; the court system explicitly bans these. (Note that people who can't support their case under legal standards of evidence often just settle or don't bring a case in the first place.)
↑ comment by jimrandomh · 2023-09-07T21:22:14.361Z · LW(p) · GW(p)
The whole point of having laws against defamation, whether libel (written defamation) or slander (spoken defamation), is to hold people to higher epistemic standards when they communicate very negative things about people or organizations
This might be true of some other country's laws against defamation, but it is not true of defamation law in the US. Under US law, merely being wrong, sloppy, and bad at reasoning would not be sufficient to make something count as defamation; it only counts if the writer had actual knowledge that the claims were false, or was completely indifferent to whether they were true or false.
Replies from: geoffreymiller, RobbBB↑ comment by geoffreymiller · 2023-09-08T16:02:36.972Z · LW(p) · GW(p)
Jim - I didn't claim that libel law solves all problems in holding people to higher epistemic standards.
Often, it can be helpful just to incentivize avoiding the most egregious forms of lying and bias -- e.g. punishing situations when 'the writer had actual knowledge that the claims were false, or was completely indifferent to whether they were true or false'.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-07T22:27:18.129Z · LW(p) · GW(p)
Jim's point here is compatible with "US libel laws are a force for good epistemics", since a law can be aimed at lying+bullshitting and still disincentivize bad reasoning (to some degree) as a side-effect.
But I do think Jim's point strongly suggests that we should have a norm against suing someone merely for reasoning poorly or getting the wrong answer. That would be moving from "lawsuits are good for norm enforcement" to "frivolous lawsuits are good for norm enforcement", which is way less plausible.
↑ comment by Rob Bensinger (RobbBB) · 2023-09-07T21:16:28.791Z · LW(p) · GW(p)
Without making any comment about the accuracy or inaccuracy of this post, I would just point out that nobody in EA should be shocked that an organization (e.g. Nonlinear) that is being libeled (in its view) would threaten a libel suit to deter the false accusations (as they see them), to nudge the author(e.g. Ben Pace) towards making sure that their negative claims are factually correct and contextually fair.
Wikipedia claims: "The 1964 case New York Times Co. v. Sullivan, however, radically changed the nature of libel law in the United States by establishing that public officials could win a suit for libel only when they could prove the media outlet in question knew either that the information was wholly and patently false or that it was published 'with reckless disregard of whether it was false or not'."
Spartz isn't a "public official", so maybe the standard is laxer here?
If not, then it seems clear to me that Spartz wouldn't win in a fair trial, because whether or not Ben got tricked by Alice/Chloe and accidentally signal-boosted others' lies, it's very obvious that Ben is neither deliberately asserting falsehoods, nor publishing "with reckless disregard".
(Ben says he spent "100-200 hours" researching this post, which is way beyond the level of thoroughness we should require for criticizing an organization on LessWrong or the EA Forum!)
I think there should be a strong norm against threatening people with libel merely for saying a falsehood; the standard should at minimum be that you have good reason to think the person is deliberately lying or bullshitting.
(I think the standard should be way higher than that, too, given the chilling effect of litigiousness; but I won't argue that here.)
Replies from: jimrandomh, frontier64, geoffreymiller, kave↑ comment by jimrandomh · 2023-09-07T23:22:21.484Z · LW(p) · GW(p)
Spartz isn't a "public official", so maybe the standard is laxer here?
The relevant category (from newer case law than New York Times Co. v. Sullivan) is public figure, not public official, which is further distinguished into general-purpose and limited-purpose public figures. I haven't looked for case law on it, but I suspect that being the cofounder of a 501(c)(3) is probably sufficient by itself to make someone a limited-purpose public figure with respect to discussion of professional conduct within that 501(c)(3).
Also, the cases specifically call out asymmetric access to media as a reason for their distinctions, and it seems to me that in this case, no such asymmetry exists. The people discussed in the post are equally able to post on LessWrong and the EA Forum (both as replies and as a top-level post), and, to my knowledge, neither Ben nor anyone else has restricted or threatened to restrict that.
↑ comment by frontier64 · 2023-09-08T20:40:28.980Z · LW(p) · GW(p)
Wikipedia claims: "The 1964 case New York Times Co. v. Sullivan, however, radically changed the nature of libel law in the United States by establishing that public officials could win a suit for libel only when they could prove the media outlet in question knew either that the information was wholly and patently false or that it was published 'with reckless disregard of whether it was false or not'."
This is typically referred to as showing "actual malice." But as you correctly surmised, this case law is irrelevant. Sullivan has been extended to cover public figures as well, but Spartz is not a public figure.[1]
I am not a California attorney, but the caselaw says that the elements of a libelous statement are that it is:
- false,
- defamatory,
- unprivileged,
- and has a natural tendency to injure or causes special damage.
Libel only applies to statements of fact or mixed statements of fact and opinion, but not to pure statements of opinion.[2] This post clearly has many direct statements of fact.[3] Many of these statements of fact have a natural tendency to injure Spartz's and Nonlinear's reputation. I'm certain that their being published has already caused Spartz and Nonlinear a substantial amount of damages. So if they are false and Spartz decides to bring a case against Ben, then I would bet that Ben would be found liable for libel.
Public figures are typically those who have general fame or notoriety in the community. Marilyn Monroe, Bill Clinton, Kim Kardashian, Riley Reid? all public figures. Your high school teacher, your maid, or the boss of a medium sized company? not public figures. ↩︎
i.e. Fact: "Johnny cheated on his wife with Jessica," Opinion: "Johnny is a terrible person," Mixed: "I hate how much Johnny cheats on his wife." ↩︎
"Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free” " ↩︎
↑ comment by gwern · 2023-09-08T22:52:41.492Z · LW(p) · GW(p)
Public figures are typically those who have general fame or notoriety in the community.
He very obviously is one. As habryka points out, he has a WP entry backed by quite a few sources about him, specifically. He has an entire 5400-word New Yorker profile about him, which is just one of several you can grab from the WP entry (eg. Bloomberg). For comparison, I don't think even Eliezer has gotten an entire New Yorker profile yet! If this is not a 'public figure', please do explain what you think it would take. Does he need a New York Times profile as well? (I regret to report that he only has 1 or 2 paragraphs thus far.)
Now, I am no particular fan of decreeing people 'public figures' who have not particularly sought out fame (and would not appreciate becoming a 'public figure' myself); however, most people would say that by the time you have been giving speeches to universities or agreeing to let a New Yorker journalist trail you around for a few months for a profile to boost your fame even further, it is safe to say that you have probably long since crossed whatever nebulous line divides 'private' from 'public figure'.
↑ comment by habryka (habryka4) · 2023-09-08T20:48:28.849Z · LW(p) · GW(p)
Emerson Spartz has a Wikipedia article, and the critique is highly relevant to him in-particular. My best understanding is that Emerson is a public figure for the purpose of this article (though not necessarily for the purpose of all articles), but it doesn't seem super clear cut to me.
↑ comment by geoffreymiller · 2023-09-08T15:29:31.970Z · LW(p) · GW(p)
Rob - you claim 'it's very obvious that Ben is neither deliberately asserting falsehoods, nor publishing "with reckless disregard'.
Why do you think that's obvious? We don't know the facts of the matter. We don't know what information he gathered. We don't know the contents of the interviews he did. As far as we can tell, there was no independent editing, fact-checking, or oversight in this writing process. He's just a guy who hasn't been trained as an investigative journalist, who did some investigative journalism-type research, and wrote it up.
Number of hours invested in research does not necessarily correlate with objectivity of research -- quite the opposite, if someone has any kind of hidden agenda.
I think it's likely that Ben was researching and writing in good faith, and did not have a hidden agenda. But that's based on almost nothing other than my heuristic that 'he seems to be respected in EA/LessWrong circles, and EAs generally seem to act in good faith'.
But I'd never heard of him until yesterday. He has no established track record as an investigative journalist. And I have no idea what kind of hidden agendas he might have.
So, until we know a lot more about this case, I'll withhold judgment about who might or might not be deliberately asserting falsehoods.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T17:33:31.365Z · LW(p) · GW(p)
Why do you think that's obvious?
I know Ben, I've conversed with him a number of times in the past and seen lots of his LW comments, and I have a very strong and confident sense of his priorities and values. I also read the post, which "shows its work" to such a degree that Ben would need to be unusually evil and deceptive in order for this post to be an act of deception.
I don't have any private knowledge about Nonlinear or about Ben's investigation, but I'm happy to vouch for Ben, such that if he turns out to have been lying, I ought to take a credibility hit too.
He's just a guy who hasn't been trained as an investigative journalist
If he were a random non-LW investigative journalist, I'd be a lot less confident in the post's honesty.
Number of hours invested in research does not necessarily correlate with objectivity of research
"Number of hours invested" doesn't prove Ben isn't a lying sociopath (heck, if you think that you can just posit that he's lying about the hours spent), but if he isn't a lying sociopath, it's strong evidence against negligence.
So, until we know a lot more about this case, I'll withhold judgment about who might or might not be deliberately asserting falsehoods.
That's totally fine, since as you say, you'd never heard of Ben until yesterday. (FWIW, I think he's one of the best rationalists out there, and he's a well-established Berkeley-rat community member who co-runs LessWrong and who tons of other veteran LWers can vouch for.)
My claim isn't "Geoffrey should be confident that Ben is being honest" (that maybe depends on how much stock you put in my vouching and meta-vouching here), but rather:
- I'm pretty sure Emerson doesn't have strong reason to think Ben isn't being honest here.
- If Emerson lacks strong reason to think Ben is being dishonest, then he definitely shouldn't have threatened to sue Ben.
E.g., I'm claiming here that you shouldn't sue someone for libel if you feel highly uncertain about whether they're being honest or dishonest. It's ethically necessary (though IMO not sufficient) that you feel pretty sure the other person is being super dishonest. And I'd be very surprised if Emerson has rationally reached that epistemic state (because I know Ben, and I expect he conducted himself in his interactions with Nonlinear the same way he normally conducts himself).
Replies from: tracingwoodgrains, geoffreymiller, RobbBB↑ comment by TracingWoodgrains (tracingwoodgrains) · 2023-12-16T06:51:56.771Z · LW(p) · GW(p)
Reading these comments three months later, I want to note that I am downgrading your credibility as well, and I think it's worth saying so specifically, because while it seems abundantly clear your intentions are good and you do not participate in bad faith, the series of extremely harsh comments I've been directing towards Ben's work in the update thread [LW · GW] apply to your analysis of his work as well. I think you treated number of hours as a reason to assign credibility without considering the balance in those hours, and failed to consider the ways in which refusing to look at contrary evidence that was credibly promised to be available soon suggests reckless disregard for truth.
↑ comment by geoffreymiller · 2023-09-08T20:04:55.288Z · LW(p) · GW(p)
Fair enough. Thanks for replying. It's helpful to have a little more background on Ben. (I might write more, but I'm busy with a newborn baby here...)
↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T17:37:42.050Z · LW(p) · GW(p)
(But insofar as you continue to be unsure about Ben, yes, you should be open to the possibility that Emerson has hidden information that justifies Emerson thinking Ben is being super dishonest. My confidence re "no hidden information like that" is downstream of my beliefs about Ben's character.)
↑ comment by Viliam · 2023-09-07T21:28:41.548Z · LW(p) · GW(p)
What you described was perhaps the intent behind the law, but that's not necessarily how it is used in practice. You can use the law to intimidate people who have less money than you, simply by giving the money to a lawyer... and then the other side needs to spend about the same money on their lawyer... or risk losing the case. "The process is the punishment."
(I have recently contributed money to a defense fund of a woman who exposed a certain criminal organization in my country. The organization was disbanded, a few members were convicted, one of them ended up in prison, but the boss is politically well-connected and keeps avoiding punishment. In turn, the boss filed five lawsuits against her for "damaging a good reputation of a legal entity". He already lost one of the lawsuits, and is likely to lose all of them, but he has lots of money so he probably doesn't care. Meanwhile, the legal expenses have almost ruined the woman, so she needs to ask people for contributions. non-English link)
comment by katriel (katriel-friedman) · 2023-09-07T13:53:59.760Z · LW(p) · GW(p)
(Crossposted from EA Forum)
On an earlier discussion [EA(p) · GW(p)] of Nonlinear's practices, I wrote:
I worked closely with Kat for a year or so (2018-2019) when I was working at (and later leading) Charity Science Health. She's now a good friend.
I considered Kat a good and ethical leader. I personally learned a lot from working with her. In her spending and life choices, she has shown considerable moral courage: paying herself only $12K/year, dropping out of college because she didn't think it passed an impact cost-benefit test. Obviously that doesn't preclude the possibility that she has willfully done harmful things, but I think willfully bad behavior by Kat Woods is quite unlikely, a priori.
I would also like to share my experience negotiating my salary with Kat when I first joined Charity Science Health, i.e., before we were friends. It was extremely positive. She was very committed to frugality, and she initially offered me the position of Associate Director at a salary of $25K/year, the bottom end of the advertised salary range. We exchanged several long emails discussing the tradeoffs in a higher or lower salary (team morale, risk of value drift, resources available for the core work, counterfactual use of funds, etc.). The correspondence felt like a genuine, collaborative search for the truth. I had concluded that I needed to make at least $45K/year to feel confident I was saving the minimum I would need in retirement, and in the end we agreed on $45K. Subsequently Kat sent me a contract for $50K, which I perceived as a goodwill gesture. My positive experience seems very different from what is reported here.
Replies from: martin-randall↑ comment by Martin Randall (martin-randall) · 2023-09-10T14:11:43.114Z · LW(p) · GW(p)
For me your comment is a red flag.
It implies at least a 2x multiplier on salaries for equivalent work. This practice is linked with gender pay gaps, favoritism, and a culture of pay secrecy. It implies that other similar matters, such as expenses, promotions, work hours, and time-off, may be similarly unequal. And yes, there is a risk to team morale.
It risks discriminating against people on characteristics that are, or should be, protected from discrimination. My risk of value drift is influenced by my political and religious views. My need for retirement savings is influenced by my age. My baseline for frugal living is influenced by my children and my spouse and my health.
It shows poor employer-employee boundaries. I would be concerned that if I were to ask for time off from my employer, the answer would depend on management's opinion of what I was planning to do with the time, rather than on company policy and objective factors.
In general, if some employees are having extremely positive experiences, and other employees are having extremely negative experiences, that is not reassuring. Still, I am glad you had a good experience.
comment by Linda Linsefors · 2023-09-10T14:07:39.816Z · LW(p) · GW(p)
Thanks for writing this post.
I had heard enough bad stuff about Nonlinear before this that I was seriously concerned about them. But I did not know what to do. Especially since part of their bad reputation is about attacking critics, and I don't feel well positioned to take that fight.
I'm happy some of these accusations are now out in the open. If it's all wrong and Nonlinear is blame-free, then this is their chance to clear their reputation.
I can't say that I will withhold judgment until more evidence comes in, since I already made a preliminary judgment even before this post. But I can promise to be open to changing my mind.
comment by lc · 2023-09-07T18:49:35.585Z · LW(p) · GW(p)
Note that during our conversation, Emerson brought up HPMOR and the Quirrell similarity, not me.
Began laughing hysterically here.
Replies from: PoignardAzur↑ comment by PoignardAzur · 2023-12-03T21:55:37.184Z · LW(p) · GW(p)
Yeah, stumbling on this after the fact, I'm a bit surprised that among the 300+ comments barely anybody is explicitly pointing this out:
I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell[8] toughens up the students in HPMOR through teaching them Defense Against the Dark Arts, to deal with real monsters in the world.
I mean... that's a huge, obvious red flag, right? People shouldn't claim Voldemort as a role model unless they're a massive edgelord. Quirrell/Voldemort in that story is "toughening up" the students to exploit them; he teaches them to be footsoldiers, not freedom fighters or critical thinkers (Harry is the one who does that) because he's grooming them to be the army of his future fascist government. This is not subtext, it's in the text.
HPMOR's Quirrell might be the EA's Rick Sanchez.
comment by NeroWolfe · 2023-09-19T14:11:11.210Z · LW(p) · GW(p)
The "give us a week" message appears either misleading or overly optimistic. Unless there have been replies from Nonlinear in a separate thread, I don't think they have explained anything beyond their initial explanation of getting food. Coupled with the fact that it's hard to imagine a valid context or explanation for some of the things they confirm to have happened (drug smuggling, driving without a license or much experience), I have to conclude that they're not likely to change my mind at this point. I realize that probably doesn't matter to them since I'm just a random person on the internet, but it's disappointing that they haven't made a better effort to explain or atone.
Thanks, @Ben Pace, [LW · GW] for doing the initial work on this. I agree with your other message stating that you're done with this; you don't need to get sucked further down this hole than you already are.
comment by Alexandra Bos (AlexandraB) · 2023-09-07T14:26:00.531Z · LW(p) · GW(p)
To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I would encourage us all to put off updating until we also get the second side of the story - that generally seems like good practice to me whenever it is possible.
(also posted this comment on the EA forum)
Replies from: adamzerner, philh, adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-10T20:59:31.501Z · LW(p) · GW(p)
I strongly disagree with the premise that we haven't gotten the second side of the story.
I actually believe that we have quite strong Bayesian evidence about what the second side of the story is.
- As Ben explains [LW(p) · GW(p)], Nonlinear was given three hours to provide their side of the story. I would strongly expect there to be a Pareto Principle thing that applies here. In the first hour, I'd expect that -- let's just make up numbers -- 70% of the thrust (ie. the big idea, not necessarily every little detail) of the "second side of the story" would be provided. Then in the next two hours, 90% of the thrust would have been provided. And from there, there continue to be diminishing returns.
- Emerson did say that Ben's paraphrasing [LW · GW] was a "Good summary!". There are caveats and more details discussed here [LW(p) · GW(p)], but even after taking those caveats and details into account, I still believe that Nonlinear's response to the paraphrase would have been very different if there were in fact important things about it that were wrong or omitted.
- Similarly, I would expect [LW · GW] that, if there were important things here that were wrong or omitted, Nonlinear would write a comment expressing this within a day or two. As ElliotJDavies says [LW(p) · GW(p)], the accusations have happened for over a year, and so you'd think that Nonlinear would be able to provide the thrust of their response relatively quickly.
- Note: In her post [LW · GW], Kat does discuss the point that even a relatively easy to dispute claim took them hours to refute (tracking down conversations and stuff). However, a simple "Here are the top five important and cruxy things that we believe are wrong or omitted. It will take some time to collect all of the evidence, but here is a quick description of the main things that we anticipate providing." probably shouldn't take more than a few hours.
There is a bunch more Bayesian evidence than this, but I think these three bullet points get the point across and are a good starting point.
I suppose people might dispute what "second side of the story" really means. My thoughts on this are that something along the lines of "received strong Bayesian evidence for the second side of the story" is the right place to draw the boundaries [? · GW] around what it means to have "gotten" the second side of the story.
Suppose that your friend Alice tells you about an argument she had with her partner Bob, and how Bob was being very contentious or something. It depends on the context of course, but I could imagine Alice's telling you this not being very strong Bayesian evidence in favor of Bob actually acting very contentiously. In which case, I think it would make sense to say that you haven't "gotten the second side of the story". I don't think that's the type of thing that is happening with Nonlinear though.
↑ comment by philh · 2023-09-07T15:58:35.621Z · LW(p) · GW(p)
So there's a danger of: "I read the accusation, the response comes out and for whatever reason I don't see it or I put it on my to-read list and forget, and I come out believing the false accusation".
There's also a danger of: "I don't read the accusation, or read it and withhold judgment, pending the response. Then the response doesn't come out when it was promised, and I think oh, these things sometimes take longer than expected, it'll be here soon. And at some point I just forget that it never came out at all." Or: "Then when the response comes out, it's been long enough since I read the original that I don't notice it's actually a pretty weak response to it."
So I dunno what good policy is in general.
Replies from: ElliotJDavies, EmersonSpartz↑ comment by ElliotJDavies · 2023-09-07T16:37:57.663Z · LW(p) · GW(p)
This is also my concern. Especially considering Nonlinear have been aware of these accusations for over a year now, and don't have a ready response.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-07T18:22:00.844Z · LW(p) · GW(p)
Yeah, I think I'd be like "the situation seems to me like they really hurt some people and had basically successfully intimidated them into silence", so to me it seems great to move the accusations into public view.
↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T17:38:21.352Z · LW(p) · GW(p)
Personally, I think it's correct to update somewhat, but in situations like this where only one side has shared their perspective, I'm much more likely to overupdate ("those monsters!") so I have to guard against that.
Replies from: Benito, philh↑ comment by Ben Pace (Benito) · 2023-09-07T18:28:08.570Z · LW(p) · GW(p)
I did hear your side for 3 hours, and you changed my mind very little; you admitted to a bunch of the dynamics ("our intention wasn't just to have employees, but also to have members of our family unit") and said my summary was pretty good. You mostly laughed at every single accusation I brought up, IMO took nothing morally seriously, and the only ex ante mistake you admitted to was "not firing Alice earlier". You didn't seem to understand the gravity of my accusations, or at least had no space for honestly considering that you'd seriously hurt and intimidated some people.
I think I would have been much more sympathetic to you if you had told me that you'd been actively letting people know about how terrible an experience your former employees had, and had encouraged people to speak with them, and if you at literally any point had explicitly considered the notion that you were morally culpable for their experiences.
Replies from: sharmake-farah↑ comment by Noosphere89 (sharmake-farah) · 2023-09-11T17:23:01.766Z · LW(p) · GW(p)
and IMO took nothing morally seriously
Are there any good examples of this? This would be pretty important for us to know.
↑ comment by philh · 2023-09-07T22:10:28.154Z · LW(p) · GW(p)
Thinking about "situations like this" does sound like it could be helpful. Some come to mind, with the caveat that it's hard to remember how I felt at different points in time:
- Case one: if the accused ever gave their perspective, I don't remember it.
- Case two: the accused sharing their perspective initially made me more sympathetic to them, but that was a mistake on my part because it turned out to be full of lies.
- Case three: the accused sharing their perspective made me less sympathetic to them.
- Case four: I dismissed the accusations offhand and think I was right to do so.
So this is weak evidence, but I don't feel like I personally have a history of overupdating in the direction of "those monsters".
↑ comment by Adam Zerner (adamzerner) · 2023-09-11T21:18:27.556Z · LW(p) · GW(p)
To share some anecdotal data: I personally have had positive experiences doing regular coaching calls with Kat this year and feel that her input has been very helpful.
I'm not sure if it was intended as such, but I see this as very weak evidence for, for lack of a better phrase, the "correct judgement" being "in favor of" rather than "against" Nonlinear.
I say this because people who are manipulative and unethical (and narcissistic, and sociopathic...) tend to also be capable of being very charming, likable, and helpful. So I think it is very possible that Kat both is "significantly in the wrong" about various things while also having lots of positive interactions with others, in such a way that would make you think "surely someone so nice and friendly would never do all of these other unethical things".
comment by Adam Zerner (adamzerner) · 2023-09-08T04:08:56.964Z · LW(p) · GW(p)
It's probably too late to do this for the OP, but in the future, I propose having two separate posts in situations like these.
1. One discussing any general thoughts on things like communication cultures [? · GW] and community norms.
2. A second discussing anything specific to the particular incident, which in this case would be discussions about Kat, Emerson, Alice and Chloe.
Why? Because as discussed here [LW(p) · GW(p)], I think most people shouldn't spend more than a few minutes paying attention to (2). On the other hand, (1) seems like a perfectly good conversation for most people to spend time on.
And as a bonus, I pretty strongly suspect that firmly factoring out (1) from (2) would be quite helpful in making progress on (2).
comment by frontier64 · 2023-09-07T16:09:23.996Z · LW(p) · GW(p)
This is honestly really weird, and typical of what I expect from people who spend their time as business-side community members in EA.
I (using Lightcone funds) am going to pay them each $5,000 after publishing this post.
I don't think you understand just what this means. You're paying your sources to contribute to muckraking.
Nonlinear seems like the standard rationalist org that does weird stuff commercially, hires weird people, and has weird rules about social and sexual stuff. The disgruntled, former friend-employee was sleeping with one of the bosses. Like, why should I care that one of the other bosses told stupid stories about what a badass negotiator he is? Once your workplace devolves into the employees sleeping with the bosses, the regular standards of workplace decorum are out the window.
I think in general, the sense I get from this post is just that you're applying a regular standard of workplace decorum to a clearly unusual and non-standard workplace. Like what's really weird to me is how little play the whole "intern is sleeping with the head boss's brother and the boss's girlfriend is maybe trying to sleep with the same intern" situation gets from this post. It's very clearly, like, totally insane, and anyone normal hearing about a workplace like this would instantly recognize that there are going to be a million other weird and BAD things going on.
Yes if I was a working professional who worked in financial regulation compliance at some $10k/month office building with my own office and my boss asked me to clean up his cereal it would be fucking weird. I would get out of there. But I'm not. I'm some person who's willingly working for next to no pay, traveling around the world with my boss, living in the same home as them, doing illegal drugs with them, sleeping with them, etc. Being asked to: clean up their cereal/drive without a license/not hang out with low value people/eat meat is just one more thing that is consistent with the office environment. Then someone else hears about all the stupid shit that went on in my workplace and decides it's worth $5,000 and a blog post.
Good on the employees for leaving, maybe they were un-corrupted pure youth who were swayed by the mighty lies and persuasive ability of Woods and the Emersons.
Replies from: DanielFilan, bec-hawk, orthonormal↑ comment by DanielFilan · 2023-09-07T18:32:39.222Z · LW(p) · GW(p)
Being asked to... not hang out with low value people... is just one more thing that is consistent with the office environment.
Maybe I'm naive, but I don't think there's approximately any normal relationship in which it's considered acceptable to ask someone to not associate with ~anyone other than current employees. The closest example I can think of is monasticism, but in that context (a) that expectation is clear and (b) at least in the Catholic church there's a higher internal authority who can adjudicate abuse claims.
Replies from: EmersonSpartz, frontier64↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-08T07:10:08.891Z · LW(p) · GW(p)
Just FYI, the original claim is a wild distortion of the truth. We'll be providing evidence in our upcoming post.
↑ comment by frontier64 · 2023-09-07T18:57:05.825Z · LW(p) · GW(p)
This is within the context of me saying that the office environment is incredibly weird and atypical.
Replies from: orthonormal, DanielFilan↑ comment by orthonormal · 2023-09-07T19:37:35.485Z · LW(p) · GW(p)
Plenty of "weird and atypical" things aren't red flags; this one, however, is a well-known predictor of abusive environments.
↑ comment by DanielFilan · 2023-09-07T19:59:01.359Z · LW(p) · GW(p)
Sorry, I was using "normal" to mean "not abusive". Even in weird and atypical environments, I find it hard to think of situations where "don't hang out with your family" is an acceptable ask (with the one exception listed in my comment).
Replies from: frontier64↑ comment by frontier64 · 2023-09-07T21:13:34.452Z · LW(p) · GW(p)
Is your point that "being asked to not hang out with low value people" is inherently abusive in a way worse than everything else going on in that list? Like maybe it's terrible, but I don't put it in its own separate category apart from "sleeping with my boss." That's kind of my general point: none of the stuff said in this post is unusual for an environment where the employee lives and sleeps with their boss.
Replies from: DanielFilan↑ comment by DanielFilan · 2023-09-07T21:19:03.834Z · LW(p) · GW(p)
Is your point that "being asked to not hang out with low value people" is inherently abusive in a way worse than everything else going on in that list?
Yes
↑ comment by Rebecca (bec-hawk) · 2023-09-07T16:32:55.623Z · LW(p) · GW(p)
According to the post, the employees actively wanted to live somewhere else and were in a practical sense prevented from doing so. They also weren't willing to work for next to no pay - that is again specifically one of the issues that was raised. It's also plausible to me that the romantic attraction component was endogenous to the weirdness they were objecting to. It seems like the gist of your argument is 'weird things they were happy to do' >= 'weird things they say they weren't happy to do', but a significant proportion of the components on the LHS should actually be on the RHS. That doesn't mean that any of it is true, but your argument seems like a misreading of the post.
I agree that the payment does create some suboptimal incentives, but I'm operating under the assumption that Ben decided on giving the sources money after hearing about the bulk of what happened, and that they didn't predict he would do so, rather than something like (to make it more extreme) 'if you tell me enough crazy stuff to make this worth a forum post, I'll reimburse you for your trouble'.
Replies from: Benito, frontier64↑ comment by Ben Pace (Benito) · 2023-09-07T18:34:52.389Z · LW(p) · GW(p)
I indeed only brought up that I would like to compensate them after they had spent many, many hours processing their experiences, explaining them, writing long docs about the hurt they had experienced, and expressing a great deal of fear/intimidation.
Replies from: DanielFilan↑ comment by DanielFilan · 2023-09-07T18:53:05.223Z · LW(p) · GW(p)
Sure, but wasn't there some previous occasion where Lightcone made a grant to people after they shared negative stories about a former employer (maybe to Zoe Curzi? but I can't find that atm)? If so, then presumably at some point you get a reputation for doing so.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-07T21:57:19.502Z · LW(p) · GW(p)
Yep, Oli gave Zoe Curzi $15k [LW(p) · GW(p)]. I do think the reputation for it is relevant, and will probably change the dynamics the next time that someone comes to me/Lightcone with reports of terrible behavior, but in this case Alice and Chloe (and others) spent the majority of the time I'm referring to talking to CEA, who has no such reputation.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T02:00:27.355Z · LW(p) · GW(p)
Notably, one way to offset the reputational issue is to sometimes give people money for saying novel positive things about an org. The issue is less "people receive money for updating us" and more "people receive money only if they updated us in a certain direction", or even worse "people receive money only if they updated us in a way that fits a specific narrative (e.g., This Org Is Culty And Abusive)".
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-08T02:13:44.033Z · LW(p) · GW(p)
I'm especially excited about giving money to people who have been credibly silenced and intimidated. I think this is good, but it will systematically spread info about wrongdoing.
If it's money for "credible signs of intimidation" maybe that's less gameable.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T17:20:09.073Z · LW(p) · GW(p)
Actually, I do know of an example of y'all offering money to someone for defending an org you disliked and were suspicious of. @habryka [LW · GW], did that money get accepted?
(The incentive effects are basically the same whether it was accepted or not, as long as it's public knowledge that the money was offered; so it seems good to make this public if possible.)
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-09-08T17:49:54.137Z · LW(p) · GW(p)
That summary is inaccurate, so I don't think there is any org for which that is true. I offered money to both Zoe Curzi and to Cathleen for doing info-gathering on Leverage stuff, but that was explicitly for both positive and negative information (and happens to have been offered in roughly equal measure, with Zoe writing a quite negative piece and Cathleen writing a relatively positive piece).
↑ comment by frontier64 · 2023-09-07T17:39:11.945Z · LW(p) · GW(p)
According to the post, the employees actively wanted to live somewhere else and were in a practical sense prevented from doing so
No, not really; they weren't prevented from living where they chose. To me, living in fun, exotic locations but having to live with your boss sounds simply like a trade-off that the employees were willing to make. I don't see anything in the post to suggest that they were prevented from doing otherwise. Just that to do otherwise they would probably have had to pick a different job!
They also weren't willing to work for next to no pay - that is again specifically one of the issues that was raised
Like, why did they do it then? Were they forced to? Was someone making them take this job? I don't see allegations of this nature in the post. Are you saying that Kat and Emerson have some obligation to accede to their employees' requests for higher pay? I can see that the employees wanted higher pay, but the fact remains that they worked for Kat and Emerson and earned next to no pay.
What I see is that the bosses were making an offer: "come work for us and we'll pay for your expenses and let you live with us rent free." But they weren't making the offer: "come work for us and we'll pay you a salary." Yes, employees often prefer to get paid more and to get paid in different ways. This doesn't mean an employer who offers them a worse deal is preventing them from taking the better deal. Your response suggests that if Bob presents Charles options A and B, Charles doesn't really have a free choice if he prefers unoffered option C. If the employees thought they could get a job that pays more somewhere else they could have taken that other job.[1]
'weird things they were happy to do' >= 'weird things they say they weren't happy to do'
I'm not saying "happy to do" I'm saying "chose to do freely and willingly without any undue coercion."
but a significant proportion of the components on the LHS should actually be on the RHS
This seems wrong on its face to me from the body of the post. Ben says:
I do have a strong heuristic that says consenting adults can agree to all sorts of things that eventually hurt them (i.e. in accepting these jobs), even if I paternalistically might think I could have prevented them from hurting themselves. That said, I see clear reasons to think that Kat, Emerson and Drew intimidated these people into accepting some of the actions or dynamics that hurt them, so some parts do not seem obviously consensual to me.
And I honestly don't see any of the clear reasons Ben suggests. I see intimidation designed to prevent the employees from badmouthing Kat and Emerson, but not any intimidation to keep working for them. Ben just cites Emerson's comment that "he gets mad at his employees who leave his company for other jobs that are equally good or less good." Which sounds weird to me, but doesn't suggest retaliation or intimidation.
To me, the clearly consensual[2] LHS 'having sex with the boss' suggests that most everything is LHS. If someone can freely leave a job, and is having sex with their boss totally freely, I don't think their complaints about other, smaller workplace troubles have much validity.
Something I have experience with! ↩︎
If it turns out that this wasn't consensual my opinion on the whole situation changes significantly. But I have seen 0 allegations suggesting the intern-boss relationship wasn't wholly consensual (besides the whole intern-boss thing) so I'm not going to read those allegations in on my own. ↩︎
↑ comment by Ruby · 2023-09-07T20:20:33.463Z · LW(p) · GW(p)
I don't think the post fully conveyed it, but I think the employees were quite afraid of leaving and expected this to get them a lot of backlash or consequences. A particularly salient concern for people early in EA careers is what kind of reference they'll get.
Think about the situation of leaving your first EA job after a few months. Option 1: say nothing about why you left, have no explanation for leaving early, don't really get a reference. Option 2: explain why the conditions were bad, risk the ire of Nonlinear (who are willing to say things like "your career could be over in a couple of DMs"). It's that kind of bind that gets people to keep persisting, hoping it'll get better.
↑ comment by Rebecca (bec-hawk) · 2023-09-07T21:56:15.646Z · LW(p) · GW(p)
The agreement was $75k, which is very much not next to nothing, and regardless of the split of expenses/cash, it doesn't seem like they added up to close to that?
Replies from: EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T23:01:39.721Z · LW(p) · GW(p)
Just to clear up a few things:
- It was $70k in approximate/expected total compensation. The $1k a month was just a small part of the total compensation package.
- Despite false claims to the contrary, it wasn't just verbally agreed, we have written records.
- Despite false claims to the contrary, we were roughly on track to spend that much. This is another thing we will show evidence for ASAP, but there is a lot of accounting/record keeping etc to do to organize all the spending information, etc.
↑ comment by orthonormal · 2023-09-07T17:44:06.388Z · LW(p) · GW(p)
I believe that a commitment to transparently reward whistleblowers, in cases where you conclude they are running a risk of retaliation, is a very good policy when it comes to incentivizing true whistleblowing.
comment by Emerson Spartz (EmersonSpartz) · 2023-09-11T10:30:26.612Z · LW(p) · GW(p)
@Ben Pace [LW · GW] Can you please add at the top of the post "Nonlinear disputes at least 85 of the claims in this post and intends to publish a detailed point-by-point response.
They also published this short update [LW · GW] giving an example of the kind of evidence they plan to demonstrate."
We keep hearing from people who don't know this. Our comments get buried, so they think your summary at the bottom contains the entirety of our response, though it is just the tip of the iceberg. As a result, they think your post marks the end of the story, and not the opening chapter.
↑ comment by Ben Pace (Benito) · 2023-09-11T17:03:31.203Z · LW(p) · GW(p)
I've left an edit at the top.
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T00:26:07.336Z · LW(p) · GW(p)
I think it would be helpful [LW(p) · GW(p)] to mention some sort of rough estimate of how many of those claims Nonlinear believes to be important and cruxy. 70 of them? 20? 5?
Separately, I think it would be helpful to focus on the ones that are important and cruxy. As Kat mentioned, it can take many hours to dispute any one claim. It seems wise to focus on the important ones rather than getting lost in the weeds debating and digging through the evidence for the unimportant ones.
comment by Max H (Maxc) · 2023-09-08T00:56:56.958Z · LW(p) · GW(p)
I think in almost any functioning professional ecosystem, there should be some general principles like:
- If you employ someone, after they work for you, unless they've done something egregiously wrong or unethical, they should be comfortable continuing to work and participate in this professional ecosystem.
- If you employ someone, after they work for you, they should feel comfortable talking openly about their experience working with you to others in this professional ecosystem.
Any breaking of the first rule is very costly, and any breaking of the second rule is by-default a red-line for me not being willing to work with you.
I agree these are good principles, but I want to point out that fairly stringent non-disparagement and non-disclosure provisions are pretty common in ordinary (non-EA) business contexts, including employment contracts and severance agreements.
My own view is that such norms are often bad, and EAs and rationalists should strive to do better in our own dealings. But NDAs are not a red-line for me personally, and not even a particularly big negative signal, given how common they are.
(I may be interpreting you overly literally or missing context; I haven't read most of the rest of this piece.)
↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T02:06:48.968Z · LW(p) · GW(p)
But NDAs are not a red-line for me personally
An NDA to keep the organization's IP private seems fine to me; an NDA to prevent people from publicly criticizing their former workplace seems line-crossing to me.
Replies from: jkaufman, Maxc↑ comment by jefftk (jkaufman) · 2023-09-08T14:15:55.912Z · LW(p) · GW(p)
an NDA to prevent people from publicly criticizing their former workplace seems line-crossing to me.
I don't like these, but they are (were) depressingly common. I know at least one org that's generally well regarded by EAs that used them.
Replies from: habryka4, nathaniel-monson, zerker2000↑ comment by habryka (habryka4) · 2023-09-08T17:17:39.153Z · LW(p) · GW(p)
I know at least one org that's generally well regarded by EAs that used them.
Oh, wow, please tell me the name of that organization. That seems very important to model, and I would definitely relate very differently to any organization that routinely does this (as well as likely advocate for that organization to no longer be well-regarded).
Replies from: lincolnquirk, jkaufman↑ comment by lincolnquirk · 2023-09-11T23:24:06.447Z · LW(p) · GW(p)
Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but I am reading this thread and it does make me think about whether some changes should be made.
I also would re-emphasize something Jeff said - that these things are quite common - if you just google for severance package standard terms, you'll find non-disparagement clauses in them. As far as I am aware, we don't ask current employees or employees who are quitting without severance to not talk about their experience at Wave.
Replies from: habryka4, pktechgirl↑ comment by habryka (habryka4) · 2023-09-12T00:31:37.816Z · LW(p) · GW(p)
Wow, I see that as a pretty major breach of trust, especially if the existence of the non-disparagement clause is itself covered by the NDA, which I know is relatively common, and seems likely the case based on Jeff's uncertainty about whether he can mention the organization.
I... don't know how to feel about this. I was excited about you being a board member of EV, but now honestly would pretty strongly vote against that and would have likely advocated against that if I had known this a few weeks earlier. I currently think I consider this a major lapse of judgement and integrity, unless there was some culture in which it was clear that it was OK for people to criticize you anyways (though from what you are saying the non-disparagement clause was intentionally trying to cover this).
I... really don't know what to say. Wave has been at the top of my list of projects that I've had good feelings about for years in EA, but now I think that is actually quite likely in substantial part because of information control on your part. I've recommended that people go work for you, and I've mentioned your organization many times in the past few years as a place that seems like it's done pretty clearly good stuff, while having a culture that seems to get stuff done. I do think I right now regret those recommendations.
I might change my mind on this after reflecting more, but this does really seem like a huge deal to me. I don't know how I could have found out about this, and I have talked to people for dozens of hours about Wave over the years, and this very meaningfully changed my actions over the years in ways that I now feel quite betrayed about.
Replies from: lincolnquirk, elityre, GuySrinivasan, adamzerner↑ comment by lincolnquirk · 2023-09-12T03:41:49.732Z · LW(p) · GW(p)
I'm sorry you feel that way. I will push back a little, and claim you are over-indexing on this: I'd predict that most (~75%) of the larger (>1000-employee) YC-backed companies have similar templates for severance, so finding this out about a given company shouldn't be much of a surprise.
I did a bit of research to check my intuitions, and it does seem like non-disparagement is at least widely advised (for severance specifically and not general employment); e.g., I found two separate posts on the YC internal forums regarding non-disparagement within severance agreements:
"For the major silicon valley law firms (Cooley, Fenwick, OMM, etc) non disparagement is not in the confidentiality and invention assignment agreement [employment agreement], and usually is in the separation and release [severance] template."
(^ this person also noted that it would be a red flag to find non-disparagement in the employment agreement.)
"One thing I’ve learned - even when someone has been terminated with cause, a separation agreement [which includes non-disparagement] w a severance can go a long way."
Replies from: habryka4, austin-chen, adamzerner
↑ comment by habryka (habryka4) · 2023-09-12T06:19:31.897Z · LW(p) · GW(p)
I mean, yeah, sometimes there are pretty widespread deceptive or immoral practices, but I wouldn't consider their being widespread that great of an excuse to do them anyways (I think it's somewhat of an excuse, but not a huge one; it also matters to me whether employees are informed that their severance is conditional on signing a non-disparagement clause when they leave, and whether anyone has ever complained about these, such that you had the opportunity to reflect on your practices here).
I feel like the setup of a combined non-disclosure and non-disparagement agreement should have obviously raised huge flags for you, independently of its precedent in Silicon Valley.
I think a non-disparagement clause can make sense in some circumstances, but I find really very little excuse to combine that with a non-disclosure clause. This is directly asking the other person to engage in a deceptive relationship with anyone who wants to have an accurate model of what it's like to work for you. They are basically forced to lie when asked about their takes on the organization, since answering with "I cannot answer that" is now no longer an option due to revealing the non-disparagement agreement. And because of the non-disparagement clause they are only allowed to answer positively. This just seems like a crazy combination to me.
I think this combination is really not a reasonable thing to ask of people in a community like ours, where people put huge amounts of effort into sharing information on the impact of different organizations, and where people freely share information about past employers, their flaws, their advantages, and where people (like me) have invested years of their life into building out talent pipelines and trying to cooperate on helping people find the most impactful places for them to work.
Like, I don't know what you mean by over-indexing. De facto, I recommended that people work for Wave, on the basis of information that you filtered for me, and most importantly, you contractually paid people off to keep that filtering hidden from me. How am I supposed to react with anything but betrayal? Like, yeah, it sounds to me like you paid at least tens (and maybe hundreds) of thousands of dollars explicitly so that I and other people like me would walk away with this kind of skewed impression. What does it mean to over-index on this?
I don't generally engage in high-trust relationships with random companies in Silicon Valley, so the costs for me there are much lower. I also generally don't recommend that people work there in the same way that I did for Wave, and didn't spend years of my life helping build a community that feeds into companies like Wave.
Replies from: jkaufman, Larks, Vaniver, Maxc, adamzerner↑ comment by jefftk (jkaufman) · 2023-09-12T11:19:56.192Z · LW(p) · GW(p)
They are basically forced to lie when asked about their takes on the organization, since answering with "I cannot answer that" is now no longer an option due to revealing the non-disparagement agreement. And because of the non-disparagement clause they are only allowed to answer positively. This just seems like a crazy combination to me.
I agree this is very awkward.
If people asked about my time at Wave I would just not talk about it; I wouldn't selectively say positive things.
↑ comment by Larks · 2023-09-13T13:34:51.016Z · LW(p) · GW(p)
If most firms have these clauses, one firm doesn't, and most people don't understand this, it seems possible that most people would end up with a less accurate impression of their relative merits than if all firms had been subject to equivalent evidence filtering effects.
In particular, it seems like this might matter for Wave if most of their hiring is from non-EA/LW people who are comparing them against random other normal companies.
↑ comment by Vaniver · 2023-09-12T23:10:20.618Z · LW(p) · GW(p)
So I agree that I wish fewer organizations would ask for non-disparagement clauses, especially for employees that are leaving. I don't yet agree that this is the 'obvious standard' / non-disparagement is haram instead of makruh.
A related thing that's coming to mind is that I have mediated a handful of disputes under conditions of secrecy. I currently don't view this as a betrayal of you (that I've accepted information that I cannot share with you) but do you view it as me betraying you somehow?
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-09-13T00:42:14.718Z · LW(p) · GW(p)
A related thing that's coming to mind is that I have mediated a handful of disputes under conditions of secrecy. I currently don't view this as a betrayal of you (that I've accepted information that I cannot share with you) but do you view it as me betraying you somehow?
I think if, during those disputes, you committed to only say positive things about either party (in pretty broad generality, as non-disparagement clauses tend to do), and that you promised to keep that commitment of yours secret, and if because of that I ended up with a mistaken impression on reasonably high-stakes decisions, then yeah, I would feel betrayed by that.
I think accepting confidentiality is totally fine. It's costly, but I don't see a way around it in many circumstances. The NDA situation feels quite different to me, where it's really a quite direct commitment to providing filtered evidence, combined with a promise to keep that filtering secret, which seems very different from normal confidentiality to me.
↑ comment by Max H (Maxc) · 2023-09-16T05:12:13.704Z · LW(p) · GW(p)
I can understand the sentiment here, but contracts are generally voluntary agreements. It feels like at least some part of your feelings should be directed at the other party in these agreements. Probably not in anywhere close to equal measure, given the power dynamics between the signing parties, and your own relationship with and trust level in each.
But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientiousness / wisdom / quick-thinking to do so. (Except, apparently, Elizabeth [LW(p) · GW(p)]. Bravo, @Elizabeth [LW · GW]!)
Some signers may have really needed the severance money, which makes things trickier, but not unsolvable. For the future, you might want to announce to your friends now that, if they forgo signing a severance agreement in order to share information with you, you'll reimburse them (though please think through the particulars and how this could be exploited or go wrong, first).
Also, another thing you might be missing:
though my sense is this only happened because Jeff did something slightly risky under his NDA, by leaking some relevant information (there are not that many places Jeff worked, so him saying he knew about one organization, and having to check for permission, was leaking some decent number of bits, possibly enough to risk a suit if Lincoln wanted to),
If I were @jefftk [LW · GW], I would probably have been more worried about the risk of violating a contract I had knowingly and willingly agreed to than about getting sued! That's a risk to honor and reputation, and a serious deontological line to cross for a lot of people[1], even if the contract is unfair in some ways, or a betrayal of a third party's trust.
(To be clear, I agree with Jeff's own assessment that he didn't really take much of a risk of any kind here. I'm not actually questioning his honor; just using it as an example to illustrate the point. Though I do predict Jeff's initial concern was more about damaging a relationship through a breach of trust, than of getting sued per se.)
- ^
In some parts of the multiverse, breaching a contract can get you sent to Abaddon...
↑ comment by Elizabeth (pktechgirl) · 2023-09-16T21:44:52.352Z · LW(p) · GW(p)
But my guess is that most of the people you sent to Wave were capable of understanding what they were signing and thinking through the implications of what they were agreeing to, even if they didn't actually have the conscientiousness / wisdom / quick-thinking to do so. (Except, apparently, Elizabeth [LW(p) · GW(p)]. Bravo, @Elizabeth [LW · GW]!)
I appreciate the kudos here, but feel like I should give more context.
I think some of what led me to renegotiate was a stubborn streak and righteousness about truth. I mostly hear when those traits annoy people, so it's really nice to have them recognized in a good light here. But that righteous streak was greatly enabled by the fact that my mom is a lawyer who modeled reading legal documents before signing (even when it's embarrassing your kids who just want to join their friends at the rock-climbing birthday party), and that I could afford to forgo severance. Obviously I really wanted the money, and I couldn't afford to take this kind of stand every week. But I believe there were people who couldn't even afford to add a few extra days, and so almost had to cave.
To the extent people in that second group were unvirtuous, I think the lack of virtue occurred when they didn't create enough financial slack to even have the time to negotiate. By the time they were laid off without a cushion, it was too late. And that's not available to everyone: Wave paid well, but emergencies happen, and any one of them could have a really good reason their emergency fund was empty.
So the main thing I want to pitch here is that “getting yourself into a position where virtue is cheap” is an underrated strategy.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2023-09-17T01:23:21.560Z · LW(p) · GW(p)
Rereading my emails, it looks like I noticed the provision and pushed back on it, and was told I needed to follow up with a different person. I can't find any record of having done that, and don't remember any of this well. Looking at timestamps, though, my guess at what happened is that I was intending to follow up but ran out of time and needed to accept the offer as-is.
(We did have enough of a financial cushion that we could have waived severance without risk to our family, but it was also enough money that I didn't want to risk it.)
Replies from: pktechgirl↑ comment by Elizabeth (pktechgirl) · 2023-09-17T01:50:03.226Z · LW(p) · GW(p)
I forget how long they gave us at first (my deadline got extended). I do think that companies should give people long deadlines for this, and short deadlines are maybe the most antisocial part of this? People are predictably stressed out and have a lot to deal with (because they've been laid off or fired), and now they have to read complicated paperwork, find a lawyer, and negotiate with a company? That's a lot.
Non-disparagement and non-disclosure feel complicated to me and I can see how strong blanket statements became the norm, but using tight deadlines to pressure people on significant legal and financial decisions seems quite bad.
↑ comment by jefftk (jkaufman) · 2023-09-16T21:19:57.321Z · LW(p) · GW(p)
For the future, you might want to announce to your friends now that, if they forgo signing a severance agreement in order to share information with you, you'll reimburse them
Uh, this could be quite expensive. For example, if someone with a salary of $250k is given 16 weeks plus two weeks for every year of service, that could easily be $100k+.
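(As a rough sanity check on that figure, here's a tiny back-of-the-envelope calculation; the four years of tenure is an assumption picked purely for illustration, not something Jeff stated.)

```python
# Back-of-the-envelope severance cost, using Jeff's hypothetical figures
# plus an assumed four years of service.
salary = 250_000              # annual salary in dollars
weeks = 16 + 2 * 4            # 16 weeks base + 2 weeks per year of service
severance = salary / 52 * weeks
print(f"~${severance:,.0f}")  # prints ~$115,385, i.e. comfortably over $100k
```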
Replies from: Maxc↑ comment by Max H (Maxc) · 2023-09-16T23:56:30.465Z · LW(p) · GW(p)
Well, either the information is worth that much or more (to someone), in which case the true value of the option Wave is offering is ~0, or it's not, in which case the package deal is worth some non-zero amount that might still be significantly less than the headline value of the unencumbered financial benefits.
By successfully executing these agreements, Wave and their terminated employees managed to capture some value for themselves, at the cost of imposing negative externalities on parties not directly involved (Oli, other prospective Wave employees, would-be startup employees more generally).
And I'm saying (a) we should probably assign some blame to all parties involved in creating this externality and (b) Oli himself might be in a position to do something unilaterally to disincentivize others from creating or benefiting from it in the future.
Even a limited monetary offer might be a way to add force / credibility / publicity to the approach that Ben and Oli appear to already be taking, of making it well-known that they consider making these kinds of offers to be harmful and norm-violating. So it seemed worth throwing out there as a suggestion, even if it's unrealistic or unworkable at scale.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2023-09-17T01:14:24.869Z · LW(p) · GW(p)
Well, either the information is worth that much or more (to someone), in which case the true value of the option Wave is offering is ~0, or it's not, in which case the package deal is worth some non-zero amount that might still be significantly less than the headline value of the unencumbered financial benefits.
What do you think the altruistic value in 2017 (ex-ante) was of negotiating releasing one laid off Wave employee from a non-disparagement+non-disclosure? (When the alternative is that they stay quiet about their time at Wave, not say selectively positive things.)
Replies from: Maxc↑ comment by Max H (Maxc) · 2023-09-17T02:36:44.610Z · LW(p) · GW(p)
Pretty low! I know nothing about the specifics, but I personally would probably not have predicted that the information gained from such a release would be worth much to anyone. One reason is that I predict (retrodict?) that if there were a lot of value in this information, at least one of the laid-off employees would have declined the severance agreement or negotiated for better terms.
Also, in my model, a lot of the value isn't exactly altruistic. In a lot of possible worlds, most of the value would accrue in the form of a better working life for well-off people who in principle have the resources and selfish interest to pay for such benefits, even if there's no mechanism for them to actually do so. The counterfactual EA who learns that e.g. Lincoln Quirk is a terrible boss (but everything else about Wave is otherwise as it appears), instead goes off to work in some equally high-paying and high-impact role, but is personally happier during their working hours.
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T11:38:52.055Z · LW(p) · GW(p)
I really appreciate this as a push towards holding people/companies to a higher moral standard, and as an expectation that you think about such questions yourself rather than falling back to "well everyone else is doing it".
↑ comment by Austin Chen (austin-chen) · 2023-09-12T04:12:14.903Z · LW(p) · GW(p)
Yeah fwiw I wanted to echo that Oli's statement seems like an overreaction? My sense is that such NDAs are standard issue in tech (I've signed one before myself), and that having one at Wave is not evidence of a lapse in integrity; it's the kind of thing that's very easy to just defer to legal counsel on. Though the opposite (dropping the NDA) would be evidence of high integrity, imo!
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-12T05:52:47.522Z · LW(p) · GW(p)
Most people in the world lie from time to time, and are aware that their friends lie. Nonetheless I don't think that Lincoln would lie to me. As a result, I trust his word.
Most CEOs get people who work for them to sign contracts agreeing that they won't share negative/critical information about the company. Nonetheless I didn't think that Lincoln would get people he works with to sign contracts not to share negative/critical information about Wave. As a result, I trusted the general perception I had of Wave.
I currently feel a bit tricked, not dissimilar to if I found out Lincoln had intentionally lied to me on some minor matter. While it is common for people to lie, it's not the relationship I thought I had here.
Replies from: austin-chen↑ comment by Austin Chen (austin-chen) · 2023-09-12T14:56:14.746Z · LW(p) · GW(p)
I definitely feel like "intentionally lying" is still a much, much stronger norm violation than what happened here. There's like a million decisions that you have to make as a CEO and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?" I would be a bit surprised if "should we include an NDA & non-disclosure" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history, as opposed to eg getting boilerplate legal contracts from their lawyers/an online form and then copying that for each severance agreement thereafter.
Replies from: Viliam, jkaufman, adamzerner↑ comment by Viliam · 2023-09-19T08:46:27.187Z · LW(p) · GW(p)
There's like a million decisions that you have to make as a CEO and you don't typically want to spend your decisionmaking time/innovation budget on random minutiae like "what terms are included inside our severance agreements?"
Technically true, but also somewhat reminds me of this [LW · GW].
↑ comment by jefftk (jkaufman) · 2023-09-12T17:57:00.926Z · LW(p) · GW(p)
I would be a bit surprised if "should we include an NDA & non-disclosure" had even risen to the level of a conscious decision of Lincoln's at any point throughout Wave's history
I think it's pretty likely that at least one departing employee would have pushed back on it some, so I wouldn't be surprised?
Replies from: austin-chen↑ comment by Austin Chen (austin-chen) · 2023-09-13T20:58:21.261Z · LW(p) · GW(p)
Yeah, I guess that's fair -- you have much more insight into the number of and viewpoints of Wave's departing employees than I do. Maybe "would be a bit surprised" would have cashed out to "<40% Lincoln ever spent 5+ min thinking about this, before this week", which I'd update a bit upwards to 50/50 based on your comment.
For context, I don't think I pushed back on (or even substantively noticed) the NDA in my own severance agreement, whereas I did push back quite heavily on the standard "assignment of inventions" thing they asked me to sign when I joined. That said, I was pretty happy with my time and trusted my boss enough to not expect for the NDA terms to matter.
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2023-09-13T21:39:56.373Z · LW(p) · GW(p)
Below [LW(p) · GW(p)] you can see Elizabeth writing about how she successfully pushed back and got it removed from her agreement, so it does seem like my guess was correct! [EDIT: except nothing in her post mentions Lincoln, so probably not]
(I didn't know about Elizabeth's situation before her post)
Replies from: pktechgirl↑ comment by Elizabeth (pktechgirl) · 2023-09-13T23:12:21.164Z · LW(p) · GW(p)
It's been a while but I think I remember who I negotiated with and it wasn't Lincoln (or Drew, the other co-founder). I find it pretty plausible that person had the authority to make changes to my agreement without running them by the founders, but would not have had the authority to change the default. So it's entirely possible multiple people pushed back but it never reached the conscious attention of the founders.
And it may not have even come up that often. I think I am several sigmas out in my willingness to read legal paperwork, push back, and walk away from severance payments, so you'd need a large sample to have it come up frequently. Wave probably hasn't laid off or fired that many people with severance, and presumably the founders were less likely to hear about pushback as the company grew.
So it just seems really likely to me that Wave didn't invest its limited energy in writing its own severance agreement, and the situation didn't have enough feedback loops to make people with decision-making power question that.
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T18:11:19.315Z · LW(p) · GW(p)
Epistemic status: Thinking out loud. Overall I'm rather confused about what to think here.
Yeah. And there is a Chesterton's Fence element here too. Like as CEO, if you really want to go with a non-standard legal thing, you probably would want to make sure you understand why the standard thing is what it is.
Which, well, I guess you can just pay someone a few hundred dollars to tell you. Which I'd expect someone with the right kind of moral integrity to do. And I'd expect the answer to be something along the lines of:
If you actually treat people well, it only offers a pretty small degree of protection. And standard thinking only accounts for selfish company interests, not actual altruistic concern for employees or the norms you do or don't endorse. So if you do care about the latter and intend to treat people well, it would probably make sense to get rid of it.
Although, perhaps it'd take a special lawyer to actually be frank with you and acknowledge all of that. And you'd probably want to get a second and third and fourth opinion too. But still, seeking that out seems like a somewhat obvious thing to do for someone with moral integrity. And if you do in fact get the response I described above, ditching the non-disparagement seems like a somewhat obvious way to respond.
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T04:13:49.831Z · LW(p) · GW(p)
Hm, I wonder how this evidence should cause us to shift our beliefs.
At first I was thinking that it shifts towards non-disparagement not being too bad. I don't think it's intuitively an obviously terrible thing. And thinking about YC, I get the sense that they actually do want to Be Good. And that, if true, they wouldn't really stand for so many YC-backed companies having non-disparagement stuff.
But then I remembered to be a little cynical. Over the years, I feel like I've seen YC companies do a bunch of unethical things. In such a way that I just don't think YC is policing its companies and pushing very hard against it. Although, I do think that people like Paul Graham do actually want the companies to Be Good. But anyway, I think that regardless of how YC feels about it, they wouldn't really police it, and so the observation that tons of YC-backed companies have this clause doesn't really shift my beliefs very much.
Replies from: Davidmanheim↑ comment by Davidmanheim · 2023-09-12T13:47:01.737Z · LW(p) · GW(p)
A very general point about how we are supposed to update in a complex system:
Evidence that a company you trust uses these should cause you to update BOTH slightly more towards "this isn't too bad," and slightly more towards "YC companies, and this company in particular, are unethical."
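(For concreteness, here is a minimal numeric sketch of that two-way update; every prior and likelihood below is a made-up illustrative number, not a claim about Wave or YC.)

```python
# Toy joint Bayes update: observing "a company I trust uses these clauses"
# shifts belief both toward "the clause isn't too bad" and toward
# "this company is less ethical than I thought". Numbers are illustrative.

priors = {  # P(clause-badness, company-ethics), assumed independent
    ("bad", "unethical"): 0.5 * 0.1,
    ("bad", "ethical"):   0.5 * 0.9,
    ("ok",  "unethical"): 0.5 * 0.1,
    ("ok",  "ethical"):   0.5 * 0.9,
}
likelihood = {  # P(company uses the clause | hypothesis)
    ("bad", "unethical"): 0.9,  # unethical firms use it regardless
    ("bad", "ethical"):   0.1,  # ethical firms rarely use a genuinely bad clause
    ("ok",  "unethical"): 0.9,
    ("ok",  "ethical"):   0.7,  # benign boilerplate gets used widely
}

evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

p_bad = posterior[("bad", "unethical")] + posterior[("bad", "ethical")]
p_unethical = posterior[("bad", "unethical")] + posterior[("ok", "unethical")]
print(f"P(clause is a bad practice):  0.50 -> {p_bad:.2f}")        # drops to 0.20
print(f"P(this company is unethical): 0.10 -> {p_unethical:.2f}")  # rises to 0.20
```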
↑ comment by Rob Bensinger (RobbBB) · 2023-09-13T01:15:08.227Z · LW(p) · GW(p)
This is formally correct.
(Though one of those updates might be a lot smaller than the other, if you've e.g. already thought about one of those topics a lot and reached a confident conclusion.)
↑ comment by Eli Tyre (elityre) · 2023-09-12T03:18:07.709Z · LW(p) · GW(p)
How much does it make a difference that Lincoln just came out and volunteered that information? The non-disparagement contracts are not themselves hidden.
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-09-12T06:07:59.325Z · LW(p) · GW(p)
They were hidden up until this very moment, from me, presumably with a clause in the NDA that contractually committed everyone who signed them to keep them hidden from me.
I am pretty sure many past Wave employees would have brought them up to me had they not been asked to sign an NDA in order to get their severance package. I agree it's worth something that Lincoln just said it straightforwardly, though my sense is this only happened because Jeff did something slightly risky under his NDA, by leaking some relevant information (there are not that many places Jeff worked, so him saying he knew about one organization, and having to check for permission, was leaking some decent number of bits, possibly enough to risk a suit if Lincoln wanted to), and me finding this out was sheer luck, and in most worlds I would have never found out.
Replies from: pktechgirl, jkaufman, Linch↑ comment by Elizabeth (pktechgirl) · 2023-09-12T18:51:54.736Z · LW(p) · GW(p)
FWIW: I have an NDA from Wave. I negotiated at the time to be able to mention the existence of the NDA, and that it didn't restrict private conversation, just public statements. You and I have probably talked about Wave, and I guess it never occurred to me to mention the NDA because I knew it was standard and it wasn't restricting my private speech. I wasn't keeping it secret, I've talked about it with people when it has come up, but I didn't make a point of doing so.
So I don't think it's obvious you'd know about the NDA if it weren't self-protecting.
It's possible I should have disclosed the NDA every time I said something positive about Wave in public. I think that would have occurred to me if I'd ever been talking about Wave qua Wave, but it was always as an example in posts that were focused on something else, so that feels like a lot of overhead.
Edit: I guess I should say I think the ban on disclosing the existence of the agreement is very bad, and that's why I negotiated to change it (and would have walked if they hadn't, despite not having anything I was burning to say). But I had that right and still didn't mention it to habryka.
Replies from: Raemon↑ comment by Raemon · 2023-09-12T20:38:01.851Z · LW(p) · GW(p)
Aside: can we taboo "NDA" in this discussion? It seems pretty fucked that it means both non-disparagement-agreement and non-disclosure-agreement and it's annoying to track which one people are referring to.
Replies from: pktechgirl↑ comment by Elizabeth (pktechgirl) · 2023-09-13T23:32:55.130Z · LW(p) · GW(p)
Oh man, it's worse than that. My original paperwork had both a non-disparagement clause and a non-disclosure clause relating to the agreement itself. The latter was removed in my agreement but presumably not others'.
While I have the emails open, I want to note that the lawyer described the agreement as pretty standard.
↑ comment by jefftk (jkaufman) · 2023-09-12T11:23:26.877Z · LW(p) · GW(p)
There are enough EA orgs that I know something about (and that other people know I know something about) that I think the number of bits I was leaking here was pretty low?
Another thing that's not visible is that I sent Lincoln an email linking to this thread, which I expect is why he jumped in with more context. I really appreciate him doing so, and don't want [EA · GW] him and Wave to end up worse off than in a world in which he'd stayed quiet.
↑ comment by Linch · 2023-09-12T17:44:29.104Z · LW(p) · GW(p)
possibly enough to risk a suit if Lincoln wanted to
Would be pretty tough to do given the legal dubiousness re: enforceability of non-disparagement agreements in the US (note: the judgement applies retroactively)
↑ comment by SarahSrinivasan (GuySrinivasan) · 2023-09-12T15:02:19.464Z · LW(p) · GW(p)
Did you previously know that
these things are quite common - if you just google for severance package standard terms, you'll find non-disparagement clauses in them
? I mean I agree(d, for a long time prior to any of all this) that these clauses are terrible for the ecosystem. But it feels like this should be like a vegan learning their associate eats meat and has just noticed that maybe that's problematic?
I think this is how your mind should have changed:
- large update that companies in general are antagonists on a personal level (if you didn't already know this)
- small update that Wave is bad to work with, insofar as it's a company, mostly screened off by other info you have about it
- very small update that Lincoln is bad to work with
- with a huge update that they are incredibly good to work with on this specific dimension if "does make me think about whether some changes should be made" results in changes way before the wider ecosystem implements them
- moderate update that Lincoln isn't actively prioritizing noticing and rooting out all bad epistemic practice, among the many things they could be prioritizing, when it goes against "common wisdom" and feels costly, which means if you know of other common wisdom things you think are bad, maybe they implement those
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T01:35:41.879Z · LW(p) · GW(p)
I disagree. I see it as a bad thing, but moreso a minor bad thing than a major one.
From a first-order consequentialist perspective, I strongly suspect that Wave treats people quite well and that this policy isn't silencing anything to a non-trivial degree.
Looking at the nth-order effects of this policy, or from a more "virtues as heuristics" perspective, I think it probably has some sort of small negative consequences. Like marginally normalizing an unfair and unhealthy norm. And also normalizing the idea of doing sketchy things in the name of the greater good. But I'm pretty confident that overall, the negative consequences here aren't large.
Furthermore, I think that Working With Monsters [LW · GW] is important. Well, there's some threshold. I'm not sure where that threshold is. I'm extremely confident that Nonlinear has crossed that threshold by a large margin, for what that's worth. But in general I feel like the threshold should be on the high side. It's just too hard to coordinate to get anything done if you get hung up on these sorts of things. Especially if you have shorter timelines. And with that said, I suspect quite strongly that Wave is way below the threshold and that it'd make sense to continue being strong "allies" with them.
Replies from: pktechgirl, cata↑ comment by Elizabeth (pktechgirl) · 2023-09-12T01:47:55.765Z · LW(p) · GW(p)
I strongly suspect that Wave treats people quite well and that this policy isn't silencing anything to a non-trivial degree
What are you basing this on?[1]
- ^
I'm a former employee of Wave, so I want to make it clear that this question is not driven by private information. I would have asked that question in response to that sentence no matter what the proper noun was. I have been on about "it's impossible to make a utilitarian argument for lying[2] because truth is necessary to calculate utils" for months.
- ^
Except when you are actively at war with someone and are considering other usually-banned actions like murder and property destruction.
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T02:00:02.227Z · LW(p) · GW(p)
Hm. Something along these lines I think:
- A prior that most organizations don't have moderately-sized (or larger) issues that really need to be silenced. Which is vaguely informed by my own experiences working for various companies, chatting with friends and acquaintances about their experiences, etc.
- A prior that rationalists and rationalist-adjacent people are a good deal above average in terms of how well they treat people.
- I've read a bunch of benkuhn's [LW · GW] writing and Dan Luu's writing. From this writing, I'm very confident that both of them are really awesome people. And they're associated with Wave. And I remember Ben writing Wave-specific things that made me feel good about Wave. I see all of this as, I dunno, weak-to-moderate evidence of Wave being "good".
- I see now that lincolnquirk [LW · GW] is a cofounder of Wave. I don't remember anything specific about him, but the name rings a bell of "I have above-average opinions of you compared to other rationalists". And I have pretty good opinions about the average rationalist.
↑ comment by Elizabeth (pktechgirl) · 2023-09-12T03:17:09.224Z · LW(p) · GW(p)
How does this differ from what you’d expect to see if an organization had substantial downsides, but suppressed negative information?
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-12T03:47:07.925Z · LW(p) · GW(p)
I think what I'd expect to see in terms of stories of people being mistreated would be roughly the same. Because if they are mistreating people, evidence of that would likely be suppressed.
So I think where I'm coming from is moreso that various things, IMO, point towards the prior probability[1] of mistreatment being low in the first place.
(This is a fun opportunity to work on some Bayesian reasoning. Not to be insensitive about the context that it's in. Please let me know if you/anyone has comments or advice. Maybe I'm missing something here.)
- ^
In the sense of, before opening your eyes and looking at what stories about Wave are out there, what would I expect the probability of there being bad things to be. Or something like that.
As an analogy, suppose you tell me that Alice is a manager at Widget Corp. Then you tell me that she is a rationalist. Then you show me her blog, I read it, and I get good vibes. We can ask at this point what I think the probability is of her mistreating employees and so on. And given what I know, I'd say that it's very low. From there, you can say, "Ok, now go out and google stuff about Alice and Widget Corp. How do the results of googling shift your beliefs?" I think they probably wouldn't shift my beliefs much, since regardless of whether she does bad stuff, if the information is being suppressed, I'm unlikely to observe it. But I can still think that the probability of bad stuff is low, despite the suppression.
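To make the filtered-evidence point above concrete, here is a minimal Bayesian sketch with made-up numbers (the prior and the "how likely are negative stories to surface" rates are illustrative assumptions, not estimates about any real organization):

```python
# Minimal Bayes sketch of the filtered-evidence point, with made-up numbers.
# All probabilities below are illustrative assumptions, not real estimates.

prior_mistreat = 0.05        # prior that the org mistreats employees
p_story_if_mistreat = 0.10   # chance negative stories surface despite suppression
p_story_if_fine = 0.02       # chance of (unfair) negative stories even if things are fine

# Observation: we googled and found no negative stories.
p_no_story_if_mistreat = 1 - p_story_if_mistreat
p_no_story_if_fine = 1 - p_story_if_fine

posterior_mistreat = (prior_mistreat * p_no_story_if_mistreat) / (
    prior_mistreat * p_no_story_if_mistreat
    + (1 - prior_mistreat) * p_no_story_if_fine
)

print(f"prior:     {prior_mistreat:.3f}")      # 0.050
print(f"posterior: {posterior_mistreat:.3f}")  # ~0.046: barely below the prior
```

Under these assumed numbers, seeing no stories is weak evidence precisely because suppression makes that observation nearly equally likely under both hypotheses; almost all the work is done by the prior.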
↑ comment by cata · 2023-09-12T21:33:16.309Z · LW(p) · GW(p)
I apologize for derailing the N(D|D)A discussion, but it's kind of crazy to me that you think that Nonlinear (based on the content of this post?) has crossed a line such that you wouldn't work with them, by a large margin? Why not? That post you linked is about working with murderers, not working with business owners who seemingly took advantage of their employees for a few months, or who made a trigger-happy legal threat!
Compared to (for example) any random YC company with no reputation to speak of, I didn't see anything in this post that made it look like working with them would either be more likely to be regrettable for you, or more likely to be harmful to others, so what's the problem?
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-12T21:59:28.038Z · LW(p) · GW(p)
That is a very fair question to ask. However, it's not something that I'm interested in diving into. Sorry.
I will say that Scientific Evidence, Legal Evidence, Rational Evidence [LW · GW] comes to mind. A lot of the evidence we have probably wouldn't be admissible as legal evidence, and perhaps some not even as scientific evidence. But IMO, there is in fact a very large amount of Bayesian evidence that Nonlinear has crossed the line (hard to articulate where exactly the line is) by a very large margin.
Faster Than Science [LW · GW] also comes to mind.
The Sin of Underconfidence [LW · GW] also comes to mind.
As does the idea of being anchored to common sense, and resistant to reason as memetic immune disorder [LW · GW]. Like if you described this story to a bunch of friends at a bar, I think the obvious, intuitive, "normie" conclusion would be that Nonlinear crossed the line by a wide margin (a handful of normie friends I mentioned this to felt this way).
I'll also point out that gut instincts can certainly count [? · GW] as Bayesian evidence, and I'm non-trivially incorporating mine here.
If there was a way to bet on it, I'd be eager to. If anyone wants to, I'd probably be down to bet up to a few hundred dollars. I'd trust a lot of random people here (above 100 karma, let's say) to approach the bet in an honorable way and I am not concerned about the possibility that I end up feeling unhappy with how things turn out (worst case it's a few hundred bucks, oh well).
↑ comment by Elizabeth (pktechgirl) · 2023-09-12T00:59:02.508Z · LW(p) · GW(p)
Without saying anything about Wave in particular, I do think the prevalence of NDAs biases the information people know about start-ups in general. The prevalence of early excitement vs. the hard parts makes them too optimistic, and they get into situations they could have known would be bad for them. It's extra hard because the difficulties at big tech companies are much discussed.
So I think the right thing to weigh against the averted slander is "the harm to employees who joined, who wouldn't have if criticisms had been more public". Maybe there are other stakeholders here, but employees seem like the biggest.
↑ comment by jefftk (jkaufman) · 2023-09-10T13:02:50.927Z · LW(p) · GW(p)
I'm working on figuring out what I can say, sorry!
↑ comment by Nathaniel Monson (nathaniel-monson) · 2023-09-08T15:23:17.998Z · LW(p) · GW(p)
Can you name the organization?
Replies from: jkaufman↑ comment by zerker2000 · 2023-09-11T03:40:46.144Z · LW(p) · GW(p)
To be clear, are we talking about non-disclosure agreements, or non-*disparagement* agreements?
Replies from: jkaufman↑ comment by jefftk (jkaufman) · 2023-09-11T11:10:14.865Z · LW(p) · GW(p)
The latter, often coupled with the former to prevent disclosure of the existence of the agreement.
↑ comment by Max H (Maxc) · 2023-09-08T03:19:44.408Z · LW(p) · GW(p)
The NLRB agrees with you, but those are exactly the kind of NDAs that are (were?) common.
Also, to clarify, by "not a red-line for me", I don't mean that I would actually accept such terms, especially not without negotiation. I just don't consider merely offering them to me to be a deal-breaker or even particularly strong evidence of anything bad.
"Offering" them as a condition of employment for lower-paid or more junior roles, to people who aren't in a position to negotiate or even understand what they are signing is a different matter, and the proliferation of such practices is pretty sad and alarming.
↑ comment by Adam Zerner (adamzerner) · 2023-09-11T06:23:18.925Z · LW(p) · GW(p)
I can provide a non-EA data point.
My first job out of college was working as a web developer for Mobiquity. I was fired after about 11 months. I suspect that the biggest reason why I was fired was an illegal one.
In firing me, they offered me a severance agreement. I read it carefully. It gave me however many months of pay, but it also required that I not discuss (including criticize) stuff that happened when I worked there. I talked to the HR guy about this and expressed to him that it seemed weird and that I don't want to commit to such a restriction. He said it is non-negotiable and an industry-standard thing to have in severance agreements. I chose to forgo the thousands of dollars and not sign the severance agreement.
<ramble>
Here's some context for what led to my being fired.
- I had written a blog post about my experiences learning to code and finding a job. In this post, I mentioned that I was currently making $60k/year. My intent was to help readers get a feel for what type of salary they could expect.
- Mobiquity encouraged us to write and share personal blog posts. So I shared it (I had written it months prior without Mobiquity in mind).
- A manager saw that I had mentioned my salary and told/asked(?) me to take the salary out of the post.
- I wasn't sure what to do. So I discussed it with some friends, and while those discussions were ongoing, took the salary out of the post.
- One friend in particular explained to me that it is illegal for an employer to ask me to do that, that there are laws protecting employees and giving them the right to discuss salary, and that the purpose of these laws is to try to even the playing field in terms of salary negotiation and leverage.
- In talking to other friends -- web developers at Mobiquity who were hired at the same time as me -- I learned that (iirc) all but one were making less than me. Mostly $30-45k/year, iirc (2015 in Gainesville, FL). Some were struggling financially and really could have used more money. I asked how they felt about discussing salary. They were into it, appreciative that I started the conversation, and frustrated that they were making so much less than me (I was straight out of a coding bootcamp, they were people with like 5-15 years of experience freelancing).
- So, I decided to add the salary back.
- I don't remember what happened next, but I think things fizzled out for a few months.
- Then on Slack the HR guy posted that they're hiring and are offering a referral bonus.
- Someone on Slack asked what the salary range for the position is.
- HR Guy said they aren't going to provide that information.
- People got a little upset and discussed it on Slack, saying things like "How am I supposed to tell my friends to apply if I don't know what the pay range is?".
- HR Guy, iirc, said there's pros and cons and he's not trying to be a bad guy, but the cons of causing animosity outweigh the pros.
- Perhaps due to my being a naive 22 year old... I posted something along the lines of "@channel - HR Guy says he thinks the cons outweigh the pros. This is based on the assumption that talking about salary is something that will cause animosity amongst us and that we don't want it to be a thing that is openly discussed. Instead of assuming this, it seems like it'd make sense to discuss it. How do you guys feel about it?"
- HR Guy called me into his office, was fuming, and brought in upper management to yell at me.
- I was fired two weeks later.
- There were a few other tension points too.
- For example, maybe a month or two before getting fired, our project (like pretty much every project at that company) was behind schedule.
- Some VP flew in to our office, called an engineers-only meeting, said she wanted to hear it from the engineers and skip the managers: how plausible is it that we could actually meet the deadline?
- I spoke up and said that it is highly unlikely, even with working overtime, that we meet the deadline and recommended that she inform the client and begin the process of damage control. I explained that if you look at our team's historic velocity and the number of story points left, we'd basically need to double our velocity to meet the deadline. Which, even with overtime, is implausible. She mentioned bringing other people into the project to speed things up. I explained Mythical Man Month.
- My manager called me in and was upset with me. He said she wasn't actually asking for the truth and wanted to be told that we'd get it done. I basically said that's not my concern.
- My performance wasn't the best, but it also wasn't the worst. IMO. I actually forget if they named performance as the reason I was fired -- I just remember it being something that sounded generic and fake -- but I had never been warned about my performance or been put on a performance improvement plan. Thinking about it now, there was a different engineer who was fired for performance reasons months before, and he had been put on a performance improvement plan. I think this points pretty strongly towards performance not being the main reason I was fired.
- Aside: I was young and naive. I certainly knew that my actions would ruffle people's feathers. I didn't think they'd be able to actually do anything about it though since I was "in the right" and if they did want to fire or demote me or something, they'd have to "explain themselves", and in doing so it'd be clear that they are "in the wrong". Now I understand that this very much is not the case.
- Still, I understood that there was some chance I was wrong and I get fired. I think I underestimated the probability by a good amount, but I at least was correct about the magnitude of how bad it would be if I did actually get fired. My assessment was that it wouldn't be very bad at all since, at least towards the end, I had been applying to other jobs.
- Funny thing: before being fired, I was in our offices fantasy football league. After being fired, I ended up winning. HR Guy was the commissioner of the fantasy football league. It wasn't one of those leagues where the company is nice and offers a prize without requiring employees to use their own money to join the league. So, after being fired, I had to coordinate with HR Guy to receive my winnings from the fantasy football league.
</ramble>
↑ comment by Adam Zerner (adamzerner) · 2023-09-12T11:31:37.999Z · LW(p) · GW(p)
I just googled for non-disparagement stuff. I found this. Looks like there's been a push in early 2023 to consider them unlawful.
What's happening: Overly broad non-disparagement clauses — which some companies require workers to sign in order to receive severance benefits — were recently ruled unlawful by the National Labor Relations Board.
...
Why it matters: The ruling and guidance could free workers to speak up about what happened inside their companies before they lost their jobs, and help each other navigate the layoff process, among other things.
...
The bottom line: This is the most pro-labor NLRB and general counsel in recent memory and they're pushing to strengthen worker rights.
comment by geoffreymiller · 2023-09-07T21:10:41.222Z · LW(p) · GW(p)
(Note: this was cross-posted to EA Forum here [EA · GW]; I've corrected a couple of minor typos, and swapped out 'EA Forum' for 'LessWrong' where appropriate)
A note on LessWrong posts as (amateur) investigative journalism:
When passions are running high, it can be helpful to take a step back and assess what's going on here a little more objectively.
There are all different kinds of LessWrong posts that we evaluate using different criteria. Some posts announce new funding opportunities; we evaluate these in terms of brevity, clarity, relevance, and useful links for applicants. Some posts introduce a new potential EA cause area; we evaluate them in terms of whether they make a good empirical case for the cause area being large-scope, neglected, and tractable. Some posts raise theoretical issues in moral philosophy; we evaluate those in terms of technical philosophical criteria such as logical coherence.
This post by Ben Pace is very unusual, in that it's basically investigative journalism, reporting the alleged problems with one particular organization and two of its leaders. The author doesn't explicitly frame it this way, but in his discussion of how many people he talked to, how much time he spent working on it, and how important he believes the alleged problems are, it's clearly a sort of investigative journalism.
So, let's assess the post by the usual standards of investigative journalism. I don't offer any answers to the questions below, but I'd like to raise some issues that might help us evaluate how good the post is, if taken seriously as a work of investigative journalism.
Does the author have any training, experience, or accountability as an investigative journalist, so they can avoid the most common pitfalls, in terms of journalist ethics, due diligence, appropriate degrees of skepticism about what sources say, etc?
Did the author have any appropriate oversight, in terms of an editor ensuring that they were fair and balanced, or a fact-checking team that reached out independently to verify empirical claims, quotes, and background context? Did they 'run it by legal', in terms of checking for potential libel issues?
Does the author have any personal relationship to any of their key sources? Any personal or professional conflicts of interest? Any personal agenda? Was their payment of money to anonymous sources appropriate and ethical?
Were the anonymous sources credible? Did they have any personal or professional incentives to make false allegations? Are they mentally healthy, stable, and responsible? Does the author have significant experience judging the relative merits of contradictory claims by different sources with different degrees of credibility and conflicts of interest?
Did the author give the key targets of their negative coverage sufficient time and opportunity to respond to their allegations, and were their responses fully incorporated into the resulting piece, such that the overall content and tone of the coverage was fair and balanced?
Does the piece offer a coherent narrative that's clearly organized according to a timeline of events, interactions, claims, counter-claims, and outcomes? Does the piece show 'scope-sensitivity' in accurately judging the relative badness of different actions by different people and organizations, in terms of which things are actually trivial, which may have been unethical but not illegal, and which would be prosecutable in a court of law?
Does the piece conform to accepted journalistic standards in terms of truth, balance, open-mindedness, context-sensitivity, newsworthiness, credibility of sources, and avoidance of libel? (Or is it a biased article that presupposed its negative conclusions, aka a 'hit piece', 'takedown', or 'hatchet job'?)
Would this post meet the standards of investigative journalism that's typically published in mainstream news outlets such as the New York Times, the Washington Post, or the Economist?
I don't know the answers to some of these, although I have personal hunches about others. But that's not what's important here.
What's important is that if we publish amateur investigative journalism on LessWrong, especially when there are very high stakes for the reputations of individuals and organizations, we should try to adhere, as closely as possible, to the standards of professional investigative journalism. Why? Because professional journalists have learned, from centuries of copious, bitter, hard-won experience, that it's very hard to maintain good epistemic standards when writing these kinds of pieces, it's very tempting to buy into the narratives of certain sources and informants, it's very hard to course-correct when contradictory information comes to light, and it's very important to be professionally accountable for truth and balance.
comment by cata · 2023-09-07T19:37:24.656Z · LW(p) · GW(p)
Relevant: https://www.lesswrong.com/posts/NCefvet6X3Sd4wrPc/uncritical-supercriticality [LW · GW]
And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.
Replies from: elityre
↑ comment by Eli Tyre (elityre) · 2023-09-08T00:48:25.560Z · LW(p) · GW(p)
I'm unclear on why you posted this comment. Is this a reminder not to resort to violence? Who are you reminding?
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T00:57:29.394Z · LW(p) · GW(p)
I dunno why cata posted it, but I almost quoted this myself to explain why I dislike the proposed "bad argument gets lawsuit" norm.
Replies from: cata↑ comment by cata · 2023-09-08T01:45:55.725Z · LW(p) · GW(p)
Yes, that's what I was thinking. To me the lawsuit threat is totally beyond the pale.
Replies from: elityre↑ comment by Eli Tyre (elityre) · 2023-09-08T03:28:44.716Z · LW(p) · GW(p)
Lawsuits are of an importantly different category than violence. Lawsuits are one of the several mechanisms that society uses to settle disputes without needing to resort to violence.
They may be inappropriate here, but I want to reject the equivocation between suing (or threatening to sue) someone and shooting (or threatening to shoot) them.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2023-09-08T07:30:17.739Z · LW(p) · GW(p)
As I think of it, the heart of the "bad argument gets counterargument" notion is "respond to arguments using reasoning, not coercion", rather than "literal physical violence is a unique category of thing that is never OK". Both strike me as good norms, but the former seems deeper and more novel to me, closer to the heart of things. I'm a fan of Scott's gloss (and am happy to cite it instead, if we want to construe Eliezer's version of the thing as something narrower):
[...] What is the “spirit of the First Amendment”? Eliezer Yudkowsky writes:
"There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
Why is this a rationality injunction instead of a legal injunction? Because the point is protecting “the marketplace of ideas” where arguments succeed based on the evidence supporting or opposing them and not based on the relative firepower of their proponents and detractors. [...]
What does “bullet” mean in the quote above? Are other projectiles covered? Arrows? Boulders launched from catapults? What about melee weapons like swords or maces? Where exactly do we draw the line for “inappropriate responses to an argument”?
A good response to an argument is one that addresses an idea; a bad argument is one that silences it. If you try to address an idea, your success depends on how good the idea is; if you try to silence it, your success depends on how powerful you are and how many pitchforks and torches you can provide on short notice.
Shooting bullets is a good way to silence an idea without addressing it. So is firing stones from catapults, or slicing people open with swords, or gathering a pitchfork-wielding mob.
But trying to get someone fired for holding an idea is also a way of silencing an idea without addressing it. I’m sick of talking about Phil Robertson, so let’s talk about the Alabama woman who was fired for having a Kerry-Edwards bumper sticker on her car (her boss supported Bush). Could be an easy way to quiet support for a candidate you don’t like. Oh, there are more Bush voters than Kerry voters in this county? Let’s bombard her workplace with letters until they fire her! Now she’s broke and has to sit at home trying to scrape money together to afford food and ruing the day she ever dared to challenge our prejudices! And the next person to disagree with the rest of us will think twice before opening their mouth!
The e-version of this practice is “doxxing”, where you hunt down an online commenter’s personally identifiable information including address. Then you either harass people they know personally, spam their place of employment with angry comments, or post it on the Internet for everyone to see, probably with a message like “I would never threaten this person at their home address myself, but if one of my followers wants to, I guess I can’t stop them.” This was the Jezebel strategy that Michael was most complaining about. Freethought Blogs is also particularly famous for this tactic and often devolves into sagas that would make MsScribe herself proud.
A lot of people would argue that doxxing holds people “accountable” for what they say online. But like most methods of silencing speech, its ability to punish people for saying the wrong things is entirely uncorrelated with whether the thing they said is actually wrong. It distributes power based on who controls the largest mob (hint: popular people) and who has the resources, job security, and physical security necessary to outlast a personal attack (hint: rich people). If you try to hold the Koch Brothers “accountable” for muddying the climate change waters, they will laugh in your face. If you try to hold closeted gay people “accountable” for promoting gay rights, it will be very easy and you will successfully ruin their lives. Do you really want to promote a policy that works this way?
There are even more subtle ways of silencing an idea than trying to get its proponents fired or real-life harassed. For example, you can always just harass them online. The stronger forms of this, like death threats and rape threats, are of course illegal. But that still leaves many opportunities for constant verbal abuse, crude sexual jokes, insults aimed at family members, and dozens of emails written in all capital letters about what sorts of colorful punishments you and the people close to you deserve. [...]
My answer to the “Doctrine Of The Preferred First Speaker” ought to be clear by now. The conflict isn’t always just between first speaker and second speaker, it can also be between someone who’s trying to debate versus someone who’s trying to silence. Telling a bounty hunter on the phone “I’ll pay you $10 million to kill Bob” is a form of speech, but its goal is to silence rather than to counterargue. So is commenting “YOU ARE A SLUT AND I HOPE YOUR FAMILY DIES” on a blog. And so is orchestrating a letter-writing campaign demanding a business fire someone who vocally supports John Kerry.
Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.
comment by MondSemmel · 2023-09-07T14:28:01.220Z · LW(p) · GW(p)
Meta: Here is a link to the crosspost on the EA Forum [EA · GW].
comment by Catherine Low (catherine-low-1) · 2023-09-10T10:30:58.793Z · LW(p) · GW(p)
(Also shared on the EA Forum)
I’m one of the Community Liaisons for CEA’s Community Health and Special Projects team. The information shared in this post is very troubling. There is no room in our community for manipulative or intimidating behaviour.
We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation. We’re grateful to Ben for spending the time pursuing a more detailed picture, and grateful to those who supported Alice and Chloe during a very difficult time.
We talked to several people currently or formerly involved in Nonlinear about these issues, and took some actions as a result of what we heard. We plan to continue working on this situation.
From the comments on this post, I’m guessing that some readers are trying to work out whether Kat and Emerson’s intentions were bad. However, for some things, intentions might not be very decision-relevant. In my opinion, meta work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions.
I’m looking forward to seeing what information Nonlinear shares in the coming weeks.
Replies from: catherine-low-1↑ comment by Catherine Low (catherine-low-1) · 2023-09-11T17:24:13.945Z · LW(p) · GW(p)
To add more detail to "some actions", we can confirm that:
- Nonlinear has not been invited or permitted to run sessions or give talks relating to their work, or host a recruiting table at EAG and EAGx conferences this year. Kat ran a session on a personal topic, and Kat, Emerson, and Drew had a community office hour slot at EAG Bay Area 2023 in February. Since then we have not invited or permitted Kat or Emerson to run any type of session.
- We have been considering blocking them from attending future conferences since May, and were planning on making that decision if/when Kat or Emerson applied to attend a future conference
comment by aphyer · 2023-09-08T01:55:42.862Z · LW(p) · GW(p)
[Chloe was] paid the equivalent of $75k[1] [LW(p) · GW(p)] per year (only $1k/month, the rest via room and board)
So, it's not the most important thing in the post, but this sounds hella sketchy. Are you sure these are the numbers that were given?
$75k/yr minus $1k/mo leaves $63k/year in 'room and board'. The median household income in New York City is $70,663/yr per census.gov. Where were they boarding her, the Ritz Carlton?
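For concreteness, a minimal sketch of the arithmetic in the comment above (all figures are taken from that comment and the census.gov number it cites; nothing here is independently verified):

```python
# Implied value of "room and board" from the figures quoted above.
total_comp_per_year = 75_000           # claimed equivalent annual compensation
cash_per_year = 1_000 * 12             # $1k/month in cash
room_and_board = total_comp_per_year - cash_per_year
nyc_median_household_income = 70_663   # census.gov figure cited in the comment

print(room_and_board)                                  # 63000
print(room_and_board / nyc_median_household_income)    # ~0.89 of the NYC median household income
```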
Replies from: AprilSR, EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-08T07:21:34.213Z · LW(p) · GW(p)
This is more false info. The approximate/expected total compensation was $70k which included far more than room and board and $1k a month.
Chloe has also been falsely claiming we only had a verbal agreement but we have multiple written records.
We'll share specifics and evidence in our upcoming post.
comment by Viliam · 2023-09-08T15:41:50.404Z · LW(p) · GW(p)
By the way, this topic was already briefly discussed [EA(p) · GW(p)] on the EA forum 10 months ago.
The comments there feel like a copy of the comments here, so I guess the only new information for those who don't read the EA forum is the very fact that this was already publicly discussed 10 months ago.
comment by Algon · 2023-09-08T23:17:16.834Z · LW(p) · GW(p)
EDIT 2: To be clear, I was doing the exercise Ben recommended at the start of the post: predicting what I think the worst, credible info he could've found was. And this was a quickly written sketch of my off-the-cuff predictions. To be clear, the upper bounds I give are loose bounds and I lumped some really bad stuff with not-so-bad stuff. The me who wrote this comment a couple days ago would've been shocked if anyone in Non-linear had done the worst stuff in this list. But admittedly somewhat less shocked than the median LW user, I think. Milder forms of misconduct, like corruption and contract breaking, would've been most of the 1/100 chance I mention below.
A quick sketch of my estimates about how bad Ben's claims are:
I think there's like a ~1/1000-1/100 chance for each person amongst Non-linear to have been a rapist.
Maybe a 1/100-1/10 chance of other bad things having been done by people in Non-linear. Much/most of that probability mass is in miscellaneous forms of corruption. But I'd put maybe ~1/100 chance of any of corruption, blackmail, embezzlement, breaking major contracts, risking people's lives, beating someone up, and (admittedly very unlikely, but possible) seriously physically injuring/killing someone.
Minor corruption or failures in rationality happening somewhere in the organization as a whole seem quite likely?
Edit: To be clear, this isn't because I've heard anything wrong about Non-linear previously. I've just updated hard enough on evidence of malfeasance amongst rats/EAs that I think that rat orgs are pretty average in how uncorrupt they are, and rats are pretty average in how moral they are, given their demographics. It is hard to say whether the initial part of this post, and some tangential discussion of it, affected my estimates.
↑ comment by orthonormal · 2023-09-09T18:35:03.107Z · LW(p) · GW(p)
I think the downvotes are coming because people don't realize you're doing the exercise at the start of the post, and rather think that you're making these claims after having read the rest of the post. I don't think you should lose karma for that, so I'm upvoting; but you may want to state at the top that's what you're doing.
Replies from: Algon↑ comment by Algon · 2023-09-09T19:28:35.138Z · LW(p) · GW(p)
Ex ante, it is obvious I should've mentioned that. But I just saw a bunch of comments making these guesses, and I thought my edit made it clear that these estimates weren't based on the post. Also, I really should've been more clear about the low likelihood of the really bad stuff.
Replies from: bideup↑ comment by bideup · 2023-09-10T13:05:37.431Z · LW(p) · GW(p)
Even now I would like it if you added an edit at the start to make it clearer what you’re doing! Before reading the replying comment and realising the context, I was mildly shocked by such potentially inflammatory speculation and downvoted.
comment by Aryeh Englander (alenglander) · 2023-09-07T16:49:39.790Z · LW(p) · GW(p)
[Cross-commenting from the EA Forum.]
[Disclaimers: My wife Deena works with Kat as a business coach. I briefly met Kat and Emerson while visiting in Puerto Rico and had positive interactions with them. My personality is such that I have a very strong inclination to try to see the good in others, which I am aware can bias my views.]
A few random thoughts related to this post:
1. I appreciate the concerns over potential for personal retaliation, and the other factors mentioned by @Habryka [EA · GW] and others for why it might be good to not delay this kind of post. I think those concerns and factors are serious and should definitely not be ignored. That said, I want to point out that there's a different type of retaliation in the other direction that posting this kind of thing without waiting for a response can cause: Reputational damage. As others have pointed out, many people seem to update more strongly on negative reports that come first and less on subsequent follow up rebuttals. If it turned out that the accusations are demonstrably false in critically important ways, then even if that comes to light later the reputational damage to Kat, Emerson, and Drew may now be irrevocable.
Reputation is important almost everywhere, but in my anecdotal experience reputation seems to be even more important in EA than in many other spheres. Many people in EA seem to have a very strong in-group bias towards favoring other "EAs" and it has long seemed to me that (for example) getting a grant from an EA organization often feels to be even more about having strong EA personal connections than for other places. (This is not to say that personal connections aren't important for securing other types of grants or deals or the like, and it's definitely not to say that getting an EA grant is only or even mostly about having strong EA connections. But from my own personal experience and from talking to quite a few others both in and out of EA, this is definitely how it feels to me. Note that I have received multiple EA grants in the past, and I have helped other people apply to and receive substantial EA grants.) I really don't like this sort of dynamic and I've low-key complained about it for a long time - it feels unprofessional and raises all sorts of in-group bias flags. And I think a lot of EA orgs feel like they've gotten somewhat better about this over time. But I think it is still a factor.
Additionally, it sometimes feels to me that EA Forum dynamics tend to lead to very strongly upvoting posts and comments that are critical of people or organizations, especially if they're more "centrally connected" in EA, while ignoring or even downvoting posts and comments in the other direction. I am not sure why the dynamic feels like this, and maybe I'm wrong about it really being a thing at all. Regardless, I strongly suspect that any subsequent rebuttal by Nonlinear would receive significantly fewer views and upvotes, even if the rebuttal were actually very strong.
Because of all this, I think that the potential for reputational harm towards Kat, Emerson, and Drew may be even greater than if this were in the business world or some other community. Even if they somehow provide unambiguous evidence that refutes almost everything in this post, I would not be terribly surprised if their potential to get EA funding going forward or to collaborate with EA orgs was permanently ended. In other words, I wouldn't be terribly surprised if this post spelled the end of their "EA careers" even if the central claims all turned out to be false. My best guess is that this is not the most likely scenario, and that if they provide sufficiently good evidence then they'll be most likely "restored" in the EA community for the most part, but I think there's a significant chance (say 1%-10%) that this is basically the end of their EA careers regardless of the actual truth of the matter.
Does any of this outweigh the factors mentioned by @Habryka [EA · GW]? I don't know. But I just wanted to point out a possible factor in the other direction that we may want to consider, particularly if we want to set norms for how to deal with other such situations going forward.
2. I don't have any experience with libel law or anything of the sort, but my impression is that suing for slander over this kind of piece is very much within the range of normal responses in the business world, even if in the EA world it is basically unheard of. So if your frame of reference is the world outside of EA then suing seems at least like a reasonable response, while if your frame of reference is the EA community then maybe it doesn't. I'll let others weigh in on whether my impressions on this are correct, but I didn't notice others bring this up so I figured I'd mention it.
3. My general perspective on these kinds of things is that... well, people are complicated. We humans often seem to have this tendency to want our heroes to be perfect and our villains to be horrible. If we like someone we want to think they could never do anything really bad, and unless presented with extremely strong evidence to the contrary we'll look for excuses for their behavior so that it matches our pictures of them as "good people". And if we decide that they did do something bad, then we label them as "bad people" and retroactively reject everything about them. And if that's hard to do we suffer from cognitive dissonance. (Cf. halo effect.)
But the reality, at least in my opinion, is that things are more complicated. It's not just that there are shades of grey, it's that people can simultaneously be really good people in some ways and really bad people in other ways. Unfortunately, it's not at all a contradiction for someone to be a genuinely kind, caring, supportive, and absolutely wonderful person towards most of the people in their life, while simultaneously being a sexual predator or committing terrible crimes.
I'm not saying that any of the people mentioned in this post necessarily did anything wrong at all. My point here is mostly just to point out something that may be obvious to almost all of us, but which feels potentially relevant and probably bears repeating in any case. Personally I suspect that everybody involved was acting in what they perceived to be good faith and are / were genuinely trying to do the right thing, just that they're looking at the situation through lenses based on very different perspectives and experiences and so coming to very different conclusions. (But see my disclaimer at the beginning of this comment about my personality bias coloring my own perspective.)
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-07T18:38:36.846Z · LW(p) · GW(p)
Kat, Emerson, and Drew's reputation is not my concern.
One of their friends called me yesterday saying that me publishing it would probably be the end for Nonlinear, so I should delay and give them time to prepare a response. I assured them that I was not considering that when choosing to share this information.
Replies from: Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2023-09-07T18:53:55.877Z · LW(p) · GW(p)
Kat, Emerson, and Drew's reputation is not your concern insofar as you're basically certain that your post is basically true. If you thought there was a decent chance that your post was basically wrong and Nonlinear would find proof in the next week, publishing now would be inappropriate.
When destroying someone's reputation you have an extra obligation to make sure what you're saying is true. I think you did that in this case—just clarifying norms.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-10T22:59:23.902Z · LW(p) · GW(p)
I'm not sure I have the exact same tradeoff as you do here — I think I'm more likely to say "Hey, I currently assign 25% to <very terrible accusation>" if I have that probability assigned, rather than wait until it's at like 90% or something before saying my probability.
But yes, what I meant to convey here was roughly "that which can be destroyed by the truth should be", and you can see in the summary section my probabilities are actually quite high.
Replies from: Lukas_Gloor↑ comment by Lukas_Gloor · 2023-09-10T23:39:48.960Z · LW(p) · GW(p)
Yeah I agree with that perspective, but want to flag that I thought your original choice of words was unfortunate. It's very much a cost to be wrong when you voice strong criticism of someone's character or call their reputation into question in other ways (even if you flag uncertainty) – just that it's sometimes (often?) worse to do nothing when you're right.
There's some room to discuss exact percentages. IMO, placing a 25% probability on someone (or some group) being a malefactor [LW · GW]* is more than enough to start digging/gossip selectively with the intent of gathering more evidence, but not always enough to go public? Sure, it's usually the case that "malefactors" cause harm to lots of people around them or otherwise distort epistemics and derail things, so there's a sense in which 25% probability might seem like it's enough from a utilitarian perspective of justice.** At the same time, in practice, I'd guess it's almost always quite easy (if you're correct!) to go from 25% to >50% with some proactive, diligent gathering of evidence (which IMO you've done very well), so, in practice, it seems good to have a norm that requires something more like >50% confidence.
Of course, the people who write as though they want you to have >95% confidence before making serious accusations probably haven't thought this through very well, because it seems to provide terrible incentives and lets bad actors get away with things way too easily.
*It seems worth flagging that people can be malefactors in some social contexts but not others. For instance, someone could be a bad influence on their environment when they're gullibly backing up a charismatic narcissistic leader, but not when they're in a different social group or out on their own.
**In practice, I suspect that a norm where everyone airs serious accusations with only 25% confidence (and no further "hurdles to clear") would be worse than what we have currently, even on a utilitarian perspective of justice. I'd expect something like an autoimmune overreaction from the time sink issues of social drama and paranoia where people become too protective or insecure about their reputation (worsened by bad actors or malefactors using accusations as one of their weapons). So, the autoimmune reaction could become overall worse than what one is trying to protect the community from, if one is too trigger-happy.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-11T09:20:44.916Z · LW(p) · GW(p)
Punish transgressions; reward true accusations; punish false accusations. The probabilities will then attend to themselves.
Replies from: Davidmanheim, Lukas_Gloor↑ comment by Davidmanheim · 2023-09-12T13:57:57.077Z · LW(p) · GW(p)
...only when there are no externalities and utilities from accusations and inflicted damage are symmetric. Neither of these is the case.
Replies from: Vaniver↑ comment by Lukas_Gloor · 2023-09-12T14:23:50.435Z · LW(p) · GW(p)
Practically, third parties who learn about an accusation will often have significant uncertainty about its accuracy. So, as a third party seeing Ben (or anyone else) make a highly critical post, I guess I could remain agnostic until the truth comes out one way or another, and reward/punish Ben at that point. That's certainly an option. Or, I could try to have some kind of bar of "how reasonable/unreasonable does an accusation need to seem to be defensible, praiseworthy, or out of line?" It's a tough continuum and you'll have communities that are too susceptible to witch hunts but also ones where people tend to play things down/placate over disharmony.
comment by DaystarEld · 2023-09-09T13:29:27.896Z · LW(p) · GW(p)
Thanks for this writeup, still undergoing various updates based on the info above and responses from Nonlinear.
One thing I do want to comment on is this:
(Personal aside: Regarding the texts from Kat Woods shown above — I have to say, if you want to be allies with me, you must not write texts like these. A lot of bad behavior can be learned from, fixed, and forgiven, but if you take actions to prevent me from being able to learn that the bad behavior is even going on, then I have to always be worried that something far worse is happening that I’m not aware of, and indeed I have been quite shocked to discover how bad people’s experiences were working for Nonlinear.)
I agree that it was a bad message to send. I agree that people shouldn't make it hard for others who have a stake in something to learn about bad behavior from others involved.
But I think it's actually a bit more complex if you consider the 0 privacy norms that might naturally follow from that, and I can kind of understand where Kat is (potentially) coming from in that message. This doesn't really apply if Nonlinear was actually being abusive, of course, only if they did things that most people would consider reasonable but which felt unfair to the recipient.
What I mean is basically that it can be tough to know how to act around people who might start shit-talking your organization when them doing so would be defecting on a peace treaty at best, and abusing good-will at worst. And it's actually generally hard to know if they're cognizant of that, in my experience.
This is totally independent of who's "right" or "wrong," and I have 0 personal knowledge of the Nonlinear stuff. But there are some people who have been to summer camps that we've had the opportunity to put on blast about antisocial things they've done that got them removed from the ecosystem, but we try to be careful to only do that when it's *really* egregious, and so often chose not to because it would have felt like too much of an escalation for something that was contained and private...
...but if they were to shit-talk the camps or how they were treated, that would feel pretty bad from my end in the "Well, fuck, I guess this is what we get for being compassionate" sense.
Many people may think the world would be better if everyone's antisocial acts were immediately and widely publicized, but in reality what I think would result is a default stance of "All organizations try to ruin people's reputations if they believe they did something even slightly antisocial, so that those people can't harm their reputation by telling biased stories about them first," and I think most people would actually find themselves unhappy with that world. (I'm not actually sure about that, though it seems safer to err on the side of caution.)
It can sound sinister or be a bad power dynamic from an organization to an individual, but if an individual genuinely doesn't seem to realize that the thing holding the org back isn't primarily a mutual worry of negative reputation harm but something like compassion and general decency norms, it might feel necessary to make that explicit... though of course making it explicit comes off as a threat, which is worse in many ways even if it could have been implicitly understood that the threat of reputation harm existed just from the fact that the organization no longer wants you to work with them.
There are good reasons historically why public bias is in the favor of individuals speaking out against organizations, but I think most people who have worked in organizations know what a headache it can be to deal with the occasional incredibly unreasonable person (again, not saying that's the case here, just speaking in general), and how hard it is to determine how much to communicate to the outside world when you do encounter someone you think is worse than just a "bad fit." I think it's hard to set a policy for that which is fair to everyone, and am generally unsure about what the best thing to do in such cases is.
comment by Adam Zerner (adamzerner) · 2023-09-09T07:05:21.651Z · LW(p) · GW(p)
Once I started actively looking into things, much of my information in the post below came about by a search for negative information about the Nonlinear cofounders, not from a search to give a balanced picture of its overall costs and benefits.
This is confusing (edit: and concerning) to me. Why not search for a balanced picture instead? Was this intentional? Or was it an unintended slip up that the author is merely admitting to?
Replies from: orthonormal, adamzerner↑ comment by orthonormal · 2023-09-09T18:28:30.233Z · LW(p) · GW(p)
It's a very unusual disclaimer that speaks well of the post.
The default journalistic practice at many outlets is to do an asymmetric search once the journalist or editor decides which way the wind is blowing, but of course nobody says this in the finished piece.
Ben is explicitly telling the reader that he did not spend another hundred hours looking for positive information about Nonlinear, so that we understand that absence of exculpatory evidence in the post should not be treated as strong evidence of absence.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-10T21:10:15.143Z · LW(p) · GW(p)
The default journalistic practice at many outlets is to do an asymmetric search once the journalist or editor decides which way the wind is blowing, but of course nobody says this in the finished piece.
The goal of journalism is to sell newspapers (or whatever) though. On the other hand, the goal here is to arrive at the truth.
Replies from: nathaniel-monson↑ comment by Nathaniel Monson (nathaniel-monson) · 2023-09-10T21:41:05.608Z · LW(p) · GW(p)
This seems like kinda a nonsense double standard. The declared goal of journalism is usually not to sell newspapers; that is your observation of the incentive structure. And while the declared goal of LW is to arrive at truth (or something similar--hone the skills which will better allow people to arrive at truth, or something), there are comparable parallel incentive structures to journalism.
It seems better to compare declared purpose to declared purpose, or inferred goal to inferred goal, doesn't it?
↑ comment by Adam Zerner (adamzerner) · 2023-09-10T22:14:54.227Z · LW(p) · GW(p)
Yes, but in my judgement -- and I suspect if you averaged out the judgement of reasonable others (not limited to LessWrongers) -- LW has an actual goal that is much, much closer to arriving at the truth than journalism.
↑ comment by Adam Zerner (adamzerner) · 2023-09-10T21:13:34.335Z · LW(p) · GW(p)
Thinking about it more, I can imagine some good reasons for this and am not too concerned by it.
For example, certain instances of negative information can probably be considered "dealbreakers", in which case if you find it you don't have to look for more stuff in pursuit of a "balanced picture". And here I believe 1) Ben had good reason to suspect that he'd find such dealbreakers and 2) he did in fact find numerous dealbreakers after searching.
comment by KatWoods (ea247) · 2023-09-07T07:54:29.876Z · LW(p) · GW(p)
This is a short response while I write up something more substantial.
The true story is very different than the one you just read.
Ben Pace purposefully posted this without seeing our evidence first, which I believe is unethical and violates important epistemic norms.
He said “I don't believe I am beholden to give you time to prepare”
We told him we have incontrovertible proof that many of the important claims were false or extremely misleading. We told him that we were working full-time on gathering the evidence to send him.
We told him we needed a week to get it all together because there is a lot of it. Work contracts, receipts, chat histories, transcripts, etc.
Instead of waiting to see the evidence, he published. I feel like this indicates his lack of interest in truth.
He did this despite there being no time sensitivity to this question, and despite having worked on it for months. Despite him saying that he would look at the evidence.
I’m having to deal with one of the worst things that’s ever happened to me. Somebody who I used to care about is telling lies about me to my professional and social community that make me seem like a monster. And I have clear evidence to show that they’re lies.
Please, if you’re reading this, before signal boosting, I beg you to please reserve judgment until we have had a chance to present our evidence.
Replies from: Vaniver, elityre, sberens, NeroWolfe, thoth-hermes↑ comment by Vaniver · 2023-09-07T21:41:29.213Z · LW(p) · GW(p)
Ben Pace purposefully posted this without seeing our evidence first, which I believe is unethical and violates important epistemic norms.
For what it's worth, I do not view this post as unethical or violating important epistemic norms. [I do think repeating hearsay is unseemly--I would prefer the post written by Alice and Chloe--but I see why Ben is doing it in this case.]
A factor that seems somewhat important to me, and perhaps underlies a major disagreement here, is that I think reputation, while it is about you, is not for you. It's for the community you're a part of, so that other people can have accurate expectations of what you're like; both to help people who will appreciate interacting with you find you and help people who will regret interacting with you avoid you. Trying to manage your reputation is like trying to manage your bank balance: there are a small handful of ethical ways to do it and many unethical ways to do it.
And so the most concerning parts of the post (to me) are the parts where it sounds like you're trying to suppress negative evidence, and the response from Nonlinear in the comments so far feels like it supports that narrative instead of undermining it.
↑ comment by Eli Tyre (elityre) · 2023-09-11T00:34:47.470Z · LW(p) · GW(p)
Could we have a list of everything you think this post gets wrong, separately from the evidence that each of those points is wrong?
Maybe I'm missing something, but it seems like it should take less than an hour to read the post, make a note of every claim that's not true, and then post that list of false claims, even if it would take many days to collect all the evidence that shows those points are false.
I imagine that would be helpful for you, because readers are much more likely to reserve judgement if you listed which specific things are false.
Personally, I could look over that list and say "oh yeah, number 8 [or whatever] is cruxy for me. If that turns out not to be true, I think that substantially changes my sense of the situation.", and I would feel actively interested in what evidence you provide regarding that point later. And it would let you know which points to prioritize refuting, because you would know which things are cruxy for people reading.
In contrast, a generalized bid to reserve judgement because "many of the important claims were false or extremely misleading"...well, it just seems less credible, and so leaves me less willing to actually reserve judgement.
Indeed, deferring on producing such a list of claims-you-think-are-false suggests the possibility that you're trying to "get your story straight," i.e. that you're taking the time now to hurriedly go through and check which facts you and others will be able to prove or disprove, so that you know which things you can safely lie or exaggerate about, or what narrative paints you in the best light while still being consistent with the legible facts.
↑ comment by Adam Zerner (adamzerner) · 2023-09-11T04:50:03.566Z · LW(p) · GW(p)
Yup, I strongly agree. And I'd even go further [LW(p) · GW(p)] and say that there are pretty large diminishing returns at play here. It should take even less time to come up with a list of the most important and cruxy things. Ie. maybe tier 1 of the total list takes 20 minutes and provides 70% of the value, tier 2 takes an additional 40 minutes and provides an additional 20% of the value, etc.
↑ comment by Simon Berens (sberens) · 2023-09-07T08:19:43.824Z · LW(p) · GW(p)
I am confused how to square your claim of requesting extra time for incontrovertible proof, with Ben’s claim that he had a 3 hour call with you and sent the summary to Emerson, who then replied “good summary!”
Was Emerson’s full reply something like, “Good summary! We have incontrovertible proof disproving the claims made against us, please allow us one week to provide it?”
Replies from: ea247↑ comment by KatWoods (ea247) · 2023-09-07T08:50:18.364Z · LW(p) · GW(p)
Yes, Ben took Emerson’s full email out of context, implying that Emerson was fully satisfied when in actuality, Emerson was saying, no, there is more to discuss - so much that we’d need a week to organize it.
He got multiple extremely key things wrong in that summary and was also missing key points we discussed on the call, but we figured there would be no reason he wouldn’t give us a week to clear everything up. Especially since he had been working on it for months.
Replies from: habryka4↑ comment by habryka (habryka4) · 2023-09-07T09:33:46.707Z · LW(p) · GW(p)
(Just for the record, I would probably also have walked away from this email interaction thinking that, according to you, the summary did not "get multiple extremely key things wrong".
I feel kind of bad about summarizing it as just "good summary" without the "some points still require clarification" bit, but I do think that if you intended to communicate that the summary had major issues, you did fail at that, and indeed, it really seems to me like you said something that directly contradicted that)
Replies from: EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-07T10:40:21.889Z · LW(p) · GW(p)
We were very clear that we felt there were still major issues to address. Here’s another email in the thread a day later:
We also clearly told Ben and Robert in the call many times that there is a lot more to the story, and we have many more examples to share. This is why we suggested writing everything up, to be more precise and not say anything that was factually untrue. Since our former employees’ reputations are on the line as well, it makes sense to try to be very deliberate.
It's possible there was a miscommunication between you and Ben around how strongly we communicated the fact that there was a lot more here.
↑ comment by habryka (habryka4) · 2023-09-07T16:50:38.565Z · LW(p) · GW(p)
Wait, just so I understand, what I thought happened was that Ben sent you the summary before a call, to which you sent the first email (saying "good summary").
Then Ben said that he planned to publish this whole post and shared a draft with you, at which point you sent the email screenshotted in your most recent reply. The two messages are responding to totally different pieces of text.
I absolutely agree that you clearly communicated that you think the full post is full of inaccuracies. But we were talking about whether the specific summary Ben shared with you first (now included in this post as the "Paraphrasing Nonlinear" section) was something you communicated was inaccurate, and that does not seem true to me according to the emails you shared here.
↑ comment by Irenicon · 2023-09-08T02:34:14.620Z · LW(p) · GW(p)
Honestly, one of the reasons I don't find the Nonlinear narrative credible is the absolute 100% denial of any wrongdoing and 0% reflection. Clearly, Ben really looked into this and has various accounts from multiple people of really questionable behavior, which seems very credible, and coming against all of it with such force and conviction is a tactic of people who want to deny and distort the truth.
Replies from: Irenicon↑ comment by NeroWolfe · 2023-09-26T14:29:08.792Z · LW(p) · GW(p)
Given that it's been a while since @Kat Woods and @Emerson Spartz claimed they had "incontrovertible proof" that warranted a delay in publishing, I'm hoping it's coming out soon. If not, a simple "we goofed" response would seem appropriate.
↑ comment by Thoth Hermes (thoth-hermes) · 2023-09-11T01:12:23.958Z · LW(p) · GW(p)
I think it might actually be better if you just went ahead with a rebuttal, piece by piece, starting with whatever seems most pressing and you have an answer for.
I don't know if it is all that advantageous to put together a long mega-rebuttal post that counters everything at once.
Then you don't have that demand nagging at you for a week while you write the perfect presentation of your side of the story.
comment by Vlad Firoiu (vlad-firoiu) · 2023-10-24T09:38:05.784Z · LW(p) · GW(p)
A lot of people have been angry about these texts that Kat sent to Alice:
“Given your past behavior, your career in EA would be over in a few DMs, but we aren’t going to do that because we care about you”
“We’re saying nice things about you publicly and expect you will do the same moving forward”
This sounds like a threat, and it’s not how I would have worded it had I been in Kat’s shoes. However, I think it looks much more reasonable if you view it through the hypothesis that a) the bad things Alice is saying about Nonlinear are untrue and b) the bad things Kat has been holding off on saying about Alice are true. Basically, I think Kat’s position is that “If you [Alice] keep spreading lies about us, we will have to defend ourselves by countering with the truth, and unfortunately if these truths got out it would make you look bad (e.g. by painting you as dishonest). That’s why we’ve been trying to avoid going down this route, because we actually care about you and don’t want to hurt your reputation (so you can find jobs), so let’s both just say nice things about each other from now on and put this behind us.”

My sense is that Kat, out of fear that her reputation was being badly and unfairly damaged, emphasized the part where bad things happen to Alice in an attempt to get her to stop spreading misinformation. Again, while this isn’t how I’d have worded those messages, given this context I think it’s much more understandable than it might first seem.
Disclaimer: I'm friends with Kat and know some of her side of the story.
↑ comment by Ben Pace (Benito) · 2023-09-12T03:55:32.263Z · LW(p) · GW(p)
I had taken Dan Luu as implicitly endorsing the org by going there. I'm very unhappy that it turns out this is filtered evidence.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-09-12T04:46:54.261Z · LW(p) · GW(p)
The comment you replied to has been deleted. What was this about?
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-12T05:40:16.465Z · LW(p) · GW(p)
Lincoln said that Ben Kuhn was not under NDA and Dan Luu was.
comment by Adam Zerner (adamzerner) · 2023-09-09T09:03:53.003Z · LW(p) · GW(p)
I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell[8] [? · GW] toughens up the students in HPMOR through teaching them Defense Against the Dark Arts, to deal with real monsters in the world.
Professor Quirrell also teaches his students how to lose. I suspect that Emerson is severely lacking this skill and that this lack is costing him greatly.
Edit: I say this mainly to signal boost the importance of "learn how to lose" and to point to a helpful example of why it is important.
Replies from: EmersonSpartz↑ comment by Emerson Spartz (EmersonSpartz) · 2023-09-09T14:18:03.765Z · LW(p) · GW(p)
I'd like to kindly remind you that you are making a lot of judgments about my character based on a 10,000 word post written by someone who explicitly told you he was looking for negative information and only intended to share the worst information.
That is his one-paragraph paraphrase of a very complex situation, and I think it's fine as far as it goes, but it goes nowhere near far enough. We have a mega-post coming ASAP.
Ben has also been quietly fixing errors in the post, which I appreciate, but people are going around right now attacking us for things that Ben got wrong, because how would they know he quietly changed the post?
This is why every time newspapers get caught making a mistake they issue a public retraction the next day to let everyone know. I believe Ben should make these retractions more visible.
Replies from: Zach Stein-Perlman, bec-hawk, adamzerner↑ comment by Zach Stein-Perlman · 2023-09-09T17:35:44.646Z · LW(p) · GW(p)
Ben has also been quietly fixing errors in the post, which I appreciate, but people are going around right now attacking us for things that Ben got wrong, because how would they know he quietly changed the post?
This is why every time newspapers get caught making a mistake they issue a public retraction the next day to let everyone know. I believe Ben should make these retractions more visible.
I used a diff checker to find the differences between the current post and the original post. There seem to be two:
- "Alice worked there from November 2021 to June 2022" became "Alice travelled with Nonlinear from November 2021 to June 2022 and started working for the org from around February"
- "using Lightcone funds" became "using personal funds"
Possibly I made a mistake, or Ben made edits and you saw them and then Ben reverted them—if so, I encourage you/anyone to point to another specific edit, possibly on other archive.org versions.
Update: Kat guesses [LW(p) · GW(p)] she was thinking of changes from a near-final draft rather than changes from the first published version.
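For anyone who wants to replicate this kind of check, here is a minimal sketch using Python's standard-library difflib, assuming both versions of the post have been saved locally as plain text (the file names below are hypothetical):

```python
# Minimal sketch: compare an archived copy of a post against the current
# version and print only the lines that differ (plus a little context).
# The file names are hypothetical; save each version as plain text first.
import difflib

with open("post_archived.txt", encoding="utf-8") as f:
    archived = f.read().splitlines()
with open("post_current.txt", encoding="utf-8") as f:
    current = f.read().splitlines()

for line in difflib.unified_diff(
    archived, current, fromfile="archived", tofile="current", lineterm=""
):
    print(line)
```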
Replies from: ea247, Benito, adamzerner, DanielFilan↑ comment by KatWoods (ea247) · 2023-09-10T08:34:11.228Z · LW(p) · GW(p)
Ah, sorry. I think what happened is that I was remembering the post from the draft he sent us just before it went live. At least from the post on WebArchive, the things I remember having been changed happened last minute between the draft and it going live. Only one of the changes I remember happened between the web archive shot and now.
To be fair, I think that change is large and is causing a lot of problems (for example, burgergate, and people thinking she was working for us at the time rather than just being a friend). However, it does look like I was wrong about that, and I retract my statement.
I'll edit the comment where I said that. Sorry for the misunderstanding. Thanks for looking into it.
↑ comment by Ben Pace (Benito) · 2023-09-10T23:04:59.729Z · LW(p) · GW(p)
For the record, I agree that it would be helpful for situations like these for us to have a publicly accessible version history, and think it would be good if we built that feature for the site.
↑ comment by Adam Zerner (adamzerner) · 2023-09-09T23:18:26.754Z · LW(p) · GW(p)
This seems important. The differences mentioned above don't seem particularly important to me. If they are in fact the only differences, I wouldn't expect someone with good/honorable intentions to frame the "quietly fixing errors" comment the way Emerson did.
Replies from: Zach Stein-Perlman↑ comment by Zach Stein-Perlman · 2023-09-09T23:22:36.903Z · LW(p) · GW(p)
Yeah. That plus an even stronger similar claim from Kat [EA(p) · GW(p)] casts doubt on their reliability, especially given how they seem to almost never say "oops." [LW · GW]
Update: Kat said oops [LW(p) · GW(p)] and has a reasonable explanation, yay.
Replies from: adamzerner↑ comment by Adam Zerner (adamzerner) · 2023-09-09T23:24:45.263Z · LW(p) · GW(p)
Yeah. The lack of "oops" is something that caught my eye as well, and I think that it is noteworthy.
Replies from: ea247↑ comment by DanielFilan · 2023-09-10T04:52:29.307Z · LW(p) · GW(p)
Site admins, would it be possible to see the edit history of posts, perhaps in diff format (or at least make that a default that authors can opt out of)? Seems like something I want in a few cases:
- controversial posts like these
- sometimes mods edit my posts and I'd like to know what they edited
↑ comment by MondSemmel · 2023-09-10T07:06:34.919Z · LW(p) · GW(p)
I thought we did have that feature on LW some time ago, as an icon at the top of the page in the same section as the author byline, with a tooltip that said something like "this post has undergone multiple revisions". But I don't see it here. I don't know if I hallucinated that feature, or misremembered it, or I got confused because it's on a similar website, or if the feature was only temporarily available, or what.
Replies from: Raemon↑ comment by Raemon · 2023-09-10T07:39:21.095Z · LW(p) · GW(p)
The original version of that feature only appeared when a post had been updated with a "major edit" (a manual flag authors can set on post edits).
I do think it's pretty dumb to not just let people read all the previous edits though, so I'll look into fixing that soon hopefully.
Replies from: MondSemmel↑ comment by MondSemmel · 2023-09-10T07:50:42.090Z · LW(p) · GW(p)
I guess there's still a need to be able to hide or delete versions as an author, e.g. if one accidentally doxxed someone by posting personal information. But outside of rare exceptions like that, there would likely be no problem with keeping the edits public.
↑ comment by Rebecca (bec-hawk) · 2023-09-09T16:47:50.252Z · LW(p) · GW(p)
I definitely think Ben should be flagging anywhere in the post that he has made edits.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-09-10T23:05:43.206Z · LW(p) · GW(p)
Seems right that I should keep track of them somewhere publicly accessible.
↑ comment by Adam Zerner (adamzerner) · 2023-09-09T16:43:27.328Z · LW(p) · GW(p)
I'd like to kindly remind you that you are making a lot of judgments about my character based on a 10,000 word post written by someone who explicitly told you he was looking for negative information and only intended to share the worst information.
Here I am only making one judgement.
I agree that the evidence isn't perfect, but even after accounting for that, I still feel reasonably confident in my suspicion.
That is his one paragraph paraphrase of a very complex situation and I think it's fine as far as it goes but it goes nowhere near far enough. We have a mega post coming ASAP.
I am basing my judgement off of much more than that paragraph.
Ben has also been quietly fixing errors in the post, which I appreciate, but people are going around right now attacking us for things that Ben got wrong, because how would they know he quietly changed the post?
I don't think that saying "X lacks the skill of being able to lose" is an attack on X's character. Maybe slightly, but not substantially.
As discussed elsewhere [LW(p) · GW(p)], I don't think the fact that Nonlinear claims they have evidence of errors means that the conversation needs to be postponed. I think it simply means that we should update our beliefs when the new evidence becomes available. (Yes, humans are biased against doing this well.)
This is why every time newspapers get caught making a mistake they issue a public retraction the next day to let everyone know. I believe Ben should make these retractions more visible.
Strongly agreed.
comment by Adam Zerner (adamzerner) · 2023-09-09T09:20:03.578Z · LW(p) · GW(p)
I sense a strong "ends justify the means" mentality in Emerson, and to a lesser extent in Kat.
I think that everything we're seeing here more broadly is a great case study in how that sort of thinking can, and often does [LW · GW], go wrong.
In particular, if you are going to use that type of reasoning, you really need to make sure that you think beyond first-order effects. What about the second-, third-, and nth-order effects? Thinking about such things is often difficult and mistake-prone, and so something like virtue ethics is probably the approach that yields the most desirable consequences for most [LW · GW] people.
comment by Adam Zerner (adamzerner) · 2023-09-13T08:38:01.424Z · LW(p) · GW(p)
I wonder: is it appropriate to approach this situation from the perspective of gossip? As opposed to a perspective closer to formal legal systems?
I'm not sure. I suspect moderately strongly that a good amount of gossip is appropriate here, but that, at the same time, other parts of this should be approached from a more conservative and formal perspective. I worry that sticking one's chin up in the air at the thought of gossiping is a Valley of Bad Rationality [? · GW] and something a midwit would do.
Robin Hanson has written a lot about gossip. It seems that social scientists see it as something that certainly has its place. From Scientific American's The Science of Gossip: Why We Can't Stop Ourselves:
Is Gossip Always Bad?
The aspect of gossip that is most troubling is that in its rawest form it is a strategy used by individuals to further their own reputations and selfish interests at the expense of others. This nasty side of gossip usually overshadows the more benign ways in which it functions in society. After all, sharing gossip with another person is a sign of deep trust because you are clearly signaling that you believe that this person will not use this sensitive information in a way that will have negative consequences for you; shared secrets also have a way of bonding people together. An individual who is not included in the office gossip network is obviously an outsider who is not trusted or accepted by the group.

There is ample evidence that when it is controlled, gossip can indeed be a positive force in the life of a group. In a review of the literature published in 2004, Roy F. Baumeister of Florida State University and his colleagues concluded that gossip can be a way of learning the unwritten rules of social groups and cultures by resolving ambiguity about group norms. Gossip is also an efficient way of reminding group members about the importance of the group’s norms and values; it can be a deterrent to deviance and a tool for punishing those who transgress. Rutgers University evolutionary biologist Robert Trivers has discussed the evolutionary importance of detecting “gross cheaters” (those who fail to reciprocate altruistic acts) and “subtle cheaters” (those who reciprocate but give much less than they get). [For more on altruism and related behavior, see “The Samaritan Paradox,” by Ernst Fehr and Suzann-Viola Renninger; Scientific American Mind, Premier Issue 2004.]
Gossip can be an effective means of uncovering such information about others and an especially useful way of controlling these “free riders” who may be tempted to violate group norms of reciprocity by taking more from the group than they give in return. Studies in real-life groups such as California cattle ranchers, Maine lobster fishers and college rowing teams confirm that gossip is used in these quite different settings to enforce group norms when an individual fails to live up to the group’s expectations. In all these groups, individuals who violated expectations about sharing resources and meeting responsibilities became frequent targets of gossip and ostracism, which applied pressure on them to become better citizens. Anthropological studies of hunter-gatherer groups have typically revealed a similar social control function for gossip in these societies.
Anthropologist Christopher Boehm of the University of Southern California has proposed in his book Hierarchy in the Forest: The Evolution of Egalitarian Behavior (Harvard University Press, 1999) that gossip evolved as a “leveling mechanism” for neutralizing the dominance tendencies of others. Boehm believes that small-scale foraging societies such as those typical during human prehistory emphasized an egalitarianism that suppressed internal competition and promoted consensus seeking in a way that made the success of one’s group extremely important to one’s own fitness. These social pressures discouraged free riders and cheaters and encouraged altruists. In such societies, the manipulation of public opinion through gossip, ridicule and ostracism became a key way of keeping potentially dominant group members in check.
comment by Deena Englander (deena-englander) · 2023-09-07T14:49:09.245Z · LW(p) · GW(p)
There are so many things wrong with this post that I'm not entirely sure where to start. Here are a few key thoughts on this:
- EA preaches rationalism. As part of rationalism, to understand something truly, you need to investigate both sides of the argument. Yet the author specifically decided to only look at one side of the argument. How can that possibly be a rationalist approach to truth-seeking? If you're going to write a defamation article about someone, especially in EA, please make sure to go about it with the logical rigor you would give any issue.
- I've been working with Kat and Nonlinear for years now, and I heard about the hiring process, the employment issues, and the nasty separation. I can guarantee you from my perspective as a coach that a good number of the items mentioned here are abjectly false. I think the worst mistake Kat made was to not have a contract in writing with both of her employees (Chloe's agreement was in writing) detailing the terms of their work engagement.
- I'm not seeing information collected from other Nonlinear employees, which makes me wonder why there's a biased data sample here. Again, if you're spending the amount of time and effort that was put into this post to defame someone, choose an appropriate data sample.
- Have you ever been through or seen people go through a divorce? Nasty splits happen all the time, and the anger can cloud retrospective judgment. Yet when we hear someone complain about how bad their ex was, we take it with a grain of salt and assume that personal prejudice is clouding their impression of the person (which is usually true). Why isn't that factor taken into account?
- In general, I think it's not a good idea to live with the people you work with. It destroys relationships. So it probably wasn't a good position to start with. I'm not surprised it went sour - how often do people not have great relationships with their roommates? And when you compound that with a built-in hierarchy of employee and boss, it can make things more challenging. It's possible Alice and Chloe didn't know what they were getting themselves into. But that brings me back to Kat's mistake of not getting it in writing for both of them. Their mistake does not give them an excuse for libel.
Honestly, I'm very disappointed in the author for writing a non-rigorous, slanderous accusation of an organization that does a whole lot of good, especially when I know firsthand that it's false. It makes me lose faith in the integrity of the rationalist community.
↑ comment by DanielFilan · 2023-09-07T18:34:37.038Z · LW(p) · GW(p)
I can guarantee you from my perspective as a coach that a good number of the items mentioned here are abjectly false.
What's an example of something that's false?
comment by RomanS · 2023-09-12T12:52:39.922Z · LW(p) · GW(p)
I think of myself as playing the role of a wise old mentor who has had lots of experience, telling stories to the young adventurers, trying to toughen them up, somewhat similar to how Prof Quirrell[8] toughens up the students in HPMOR
Speaking about taking inspiration from fiction...
Several novels by Robert A. Heinlein feature Jubal Harshaw, a fictional wealthy rationalist polymath who is living and working together with 3 sexy female secretaries: a blonde, a brunette, and a redhead (e.g. in "Stranger in a Strange Land").
I wonder if, by a pure coincidence, the 3 women involved in the Nonlinear situation are a blonde, a brunette, and a redhead?
I'm not implying anything, and I see no problem with such a setup at all, as long as everything is done with consent. But if there is indeed such a coincidence, that would make me update about Nonlinear in several ways.