From zizians.info:
All four of the people arrested as part of Ziz's protest were transgender women (the fifth was let go without charges). This is far from a coincidence, as Ziz seems to go out of her way to target transgender people. In terms of cult indoctrination such folks are an excellent fit. They're often:
- Financially vulnerable.
  - Newly out transgender people are especially likely to already be estranged from friends or family.
  - It is common for them to lack stable housing.
  - Many traditional social services (illegally) reject them for cultural or religious reasons (e.g., Christian homeless shelters).
  - Intolerant attitudes among the underclass hit twice: they can't rely on strangers for help, and being transgender often makes them a target for violence, making them outcasts even among outcasts.
- Already creating a new identity.
  - During transition people change their name. This creates an opportunity for Ziz to insert herself into a recruit's ongoing transition. By showing them their "double personhood" as they're abandoning an old identity, it's possible to convince recruits to adopt a Zizian name (e.g., left hemisphere / right hemisphere) as their new social identity.
  - As the name implies, transition is a time of transition; old patterns and habits tend to fall away. People who have spent years repressing important parts of themselves suddenly have the opportunity to completely change their social presentation, which does not always mean playing the same role as before but as a different gender. With the radical changes that can accompany transition come strong opportunities for radicalization.
All of these factors combine to make Ziz, herself a transgender woman, more credible to recruits than she might otherwise be. A privileged cis person with close family and stable housing might reject boat housing out of hand: "I don't know, that sounds iffy to me." For someone facing mortal danger after their rude ejection into the underclass, it's an easier pill to swallow: "It can't be worse than sleeping on the street, right?"
Another important concept Ziz uses to manipulate people is the idea of being "bigender". Ziz claims that each hemisphere has a gender, and that fairly often people have opposing gender identities between hemispheres. This provides a convenient basis for her to undermine the identity of people she's recruiting: if the target is cis, tell them their other half is trans; if the target is trans, tell them their other half is cis. It's a similar disorienting trick to the idea of single and double good: if the target identifies as good, tell them their other half is irredeemably evil; if they identify as amoral, insist that half of them is a saint. The pattern is to take aspects of folks' identities that they're invested in and disrupt them by creating a domain of self which Ziz (and only Ziz) has knowledge about, so the target is forced to trust her interpretation.
I don't really want to go through sinceriously.fyi at this point, but it's at least implicit in her attacks on CFAR as "transphobic" for not accepting her belief system.
In the largest LW survey, 10.5% of users were transgender. The rate also increases the deeper into the community you are: 18% restricting to those who are either "sometimes" or "all the time" in the community, and 21% restricting to those who are "all the time" in the community.
(not OP) High base rates of transgenderism in LW-rationalism, particularly in the sections that would be the most receptive to the tenets of Ziz's ideology (high interest in the technical aspects of mathematical decision theory, animal rights, radical politics), combined with being on average more socially vulnerable; and Ziz herself apparently believed that trans women were inherently more capable of accepting her "truth" for more mystical g/acc-ish reasons (though I can't find first-hand confirmation rn).
You are referring to Pasek with male pronouns despite the consensus of all sources provided in the OP. Considering you claim to have known Pasek, I would like you to confirm that you're doing so because you have first-hand information not known to any of the writers of the sources in the OP, and that I'm just getting the opposite impression because your last posts on the forum were about how doing genetics studies in medicine is "DEI".
TBF, it is fairly striking, reading about early Soviet history, how many of the Old Bolshevik intelligentsia would have fit right into this community, but the whole "Putin is a secret cosmist" crowd is... unhinged.
@PhilGoetz's Reason as memetic immune disorder seems relevant here. It has been noted many times that engineers are disproportionately involved in terrorism, in ways that the mere usefulness of their engineering skills can't explain.
As documented in the 2023 Medium article, Ziz has been threatening to murder rationalists for a while, and I'm aware prominent rationalists have been paranoid about possible attempts on their lives by Zizians for the past few years. Aella has also recently stated on Twitter that she wouldn't accept an interview on the subject without an upgraded security system on her house.
Surveilling whose activities?
Core Zizians (and, in general, any group determined to be a cult or terror threat to the community), as the US doesn't really have an equivalent of, say, the French MIVILUDES to do that job (else US society would be fairly different). Potential recruits are addressed in the next clause.
TBF, Torres denies using it to mean this, instead claiming it refers to some obscure 2010 article by Ben Goertzel alone. This doesn't seem a very credible excuse, and it has been largely understood by proponents of the theory (like Dave Troy or Céline Keller) to mean Russian cosmism (and consequently that "TESCREAL" is actually a plot by Russian intelligence to re-establish the Soviet Union).
People who use the term TESCREAL generally don't realize that science fiction authors often take the futures they write about seriously (if not literally). They will talk about "TESCREALists taking sci-fi books too seriously" without knowing that Marvin Minsky, the AI pioneer whose "AI tasked to solve the Riemann hypothesis" thought experiment is effectively the origin of the paperclip-maximizer thought experiment, was the technical consultant for 2001: A Space Odyssey and was considered by Isaac Asimov to be one of the two smartest people he ever met (alongside cosmist Carl Sagan).
Or is this all just bad luck... that if you run a workshop, and a future murderer decides to go there, and they decide to use some of your keywords in their later manifesto... then it doesn't really matter what you do; even if you tell them to fuck off and call the cops on them, you will forever be connected to them, and it's up to journalists whether they decide to spin it as: the murderer is just an example of everything that is wrong with this community.
I think this is a strange description of the mainstream media coverage when most of the articles talking about Zizian ideology are nearly entirely sourced from interviews with rationalists and talk about their conflict with mainline rationalists at length.
The lesson I can glean is probably that, considering the high rate of cult creation in the community and the flashing warning signs about them, rationalists should have been far more proactive in adopting normal procedures of cult and terrorism prevention (e.g. surveilling their activities, preventing the isolation of potential recruits, and looking for anyone who has gone suspiciously missing).
Violence by radical vegans and left-anarchists has historically not been extremely rare. Nothing in the Zizians' actions strikes me as particularly different (in kind if not in competency) from, say, the Belle Époque illegalists like the Bonnot Gang, or the Years of Lead-era leftist groups like the Red Army Faction or the Weather Underground.
Silver and Ivory is Suri Dao; the info was put in the Detailed timeline of events linked in the OP.
I think there are a lot of people out there who will be willing to tell the Ziz sympathetic side of the story. (I mean, I would if asked, though "X did little wrong" seems pretty insane for most people involved and especially for Ziz). Like, I think there's a certain sort of left anarchismish person who is just, going to be very inclined to take the broke crazy trans women's side as much as it's possible to do so. It doesn't seem possible or even necessarily desirable to track every person with a take like that... whereas with people very very into Zizianism, it seems like important information.
I think that describes quite a few people in Rationalist Tumblr, and you could find them reblogging the accounts of the mainliner-Zizian conflict by Somni or pseudonymous pro-Zizian accounts like @aflowerbynoothername and @donttrythisathome (which I don't think have ever been identified, and I suspect based on style those may be maintained by Ziz and Gwen while in hiding).
(There is also a specific blogger I won't name out of respect (but whom anyone on Rationalist Tumblr will be familiar with) who was a friend of many of the Zizians, including Emma and Ophelia, and was/is heavily involved in their legal defense after the violent clash with Curtis Lind.)
I would, however, caution against overcorrecting: some of the more recent Zizian recruits, like Suri Dao/Silver and Ivory (formerly a Rationalist Tumblr mainstay) and Ophelia, both ultimately implicated in violence, seem to have started out telling the Ziz-sympathetic side of the story without initially endorsing her ideology. So there is still a slippery slope to watch for, but I don't think it's a good idea to do it publicly.
My impression is that (without even delving into any meta-level IR theory debates) Democrats are more hawkish on Russia while Republicans are more hawkish on China. So while obviously neither party is kum-ba-yah and both ultimately represent US interests, it still makes sense to expect each party to be less receptive to the idea of ending any potential arms race against the country it considers an existential threat to US interests if left unchecked. The party that is more hawkish on a primarily military superpower would thus be worse on nuclear x-risk, and the party that is more hawkish on a primarily economic superpower would be worse on AI x-risk and environmental x-risk. (Negotiating arms control agreements with the enemy superpower right during its period of liberalization and collapse, or facilitating a deal between multiple US allies with the clear goal of serving as a counterweight to the purported enemy superpower, seems entirely irrelevant here.)
Fortunately, the existential risks posed by AI are recognized by many close to President-elect Donald Trump. His daughter Ivanka seems to see the urgency of the problem. Elon Musk, a critical Trump backer, has been outspoken about the civilizational risks for many years, and recently supported California’s legislative push to safety-test AI. Even the right-wing Tucker Carlson provided common-sense commentary when he said: “So I don’t know why we’re sitting back and allowing this to happen, if we really believe it will extinguish the human race or enslave the human race. Like, how can that be good?” For his part, Trump has expressed concern about the risks posed by AI, too.
This is a strange contrast with the rest of the article, considering both Donald and Ivanka Trump's positions are largely informed by the "situational awareness" position that the US should develop AGI before China to ensure US victory – which is explicitly the position Tegmark and Leahy argue against (and consider existentially harmful) when they call to stop work on AGI and instead work on international co-operation to restrict it and develop tool AI.
I still see this kind of confusion between the two positions a fair bit, and it is extremely strange. It's as if, back in the original Cold War, people couldn't tell the difference between anti-communist hawks and the Bulletin of the Atomic Scientists (let alone anti-war hippies) because technically they both considered the nuclear arms race to be very important for the future of humanity.
(Defining Tool AI as a program that would evaluate the answer to a question given available data, without seeking to obtain any new data, and then shut down after having found the answer.) While those arguments (if successful) show that it's harder to program a Tool AI than it might look at first, so AI alignment research is still something that should be actively pursued (and I doubt Tegmark thinks AI alignment research is useless), they don't really address the point that making aligned Tool AIs is still in some sense "inherently safer" than making Friendly AGI, because the lack of a singleton scenario means you don't need to solve all of moral and political philosophy from first principles in your garage in 5 years and hope you "get it right" the first time.
The bottom 55% of the world population own ~1% of capital, the bottom 88% own ~15%, and the bottom 99% own ~54%. That last figure is a majority, but the top 1% are the millionaires (not even the multi-millionaires or billionaires), who likely own wealth more vitally important to the economy than personal property and bank accounts; empirically, they seem to be doing fine dominating the economy already, without the neoclassical catechism about comparative advantage preventing them from doing so. However you massage the data, it seems highly implausible that driving the value of labor (the non-capital factor of production) to zero wouldn't be a global catastrophic risk, as well as a value drift risk/s-risk.
Wiping out 99% of the world population is a global catastrophic risk, and likely a value drift risk and s-risk.
Thanks for writing this; it's something I have thought about before when trying to convince people who are more worried about "short-term" issues to take the "long-term" risks seriously. Essentially, one can think of two major "short-term" AI risk scenarios (or at least "medium-term" ones that "short-term"ists might take seriously), corresponding to the prospects of automating the two factors of production:
- Mass technological unemployment causing large swathes of workers to become superfluous and then starved out by the now AI-enabled corporations (what you worry about in this post)
- AI increasingly replacing "fallible" human decision-makers in corporations, if not in government, pushed by the necessity of maximizing profits to be unfettered by any moral or legal norm (even more so than human executives are already incentivized to be; what Scott worries about here)
But if 1 and 2 happen at the same time, you've got your more traditional scenario: AI taking over the world and killing all humans because they have become superfluous. This doesn't provide a full-blown case for the more Orthodox AI-go-FOOM scenario (you would need ), but it at least serves as a case that Reform AI Alignment is a pressing issue, and those who are convinced of that will ultimately be more likely to take the AI-go-FOOM scenario seriously, or at least to operationalize their differences with its believers as purely object-level disagreements about intelligence-explosion macroeconomics, how powerful intelligence is as a "cognitive superpower", etc., as opposed to the tribalized meta-level disagreements that define the current "AI ethics" v. "AI alignment" discourse.
(Admittedly, AI will probably progress simultaneously with robots, which will hit people who do more hands-on work too.)
This looks increasingly unlikely to me. It seems to me (from an outsider's perspective) that the current bottleneck in robotics is the low dexterity of existing hardware, far more than the software to animate robot arms or even the physics simulation software to test it. And on the flip side, current proto-AGI research makes the embodied cognition thesis seem very unlikely.
At least under standard microeconomic assumptions of property ownership, you would presumably still have positive productivity of your capital (like your land).
Well, we're not talking about microeconomics, are we? Unemployment is a macroeconomic phenomenon, and we are precisely talking about people who have little to no capital, need to work to live, and therefore need their labor to have economic value to live.