Harper's Magazine article on LW/MIRI/CFAR and Ethereum
post by gwern · 2014-12-12T20:34:45.244Z · LW · GW · Legacy · 154 comments
Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pg26-36 (~8500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:
…Blake Masters - the name was too perfect - had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism - that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.
…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different - on different time-scales - ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . . I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine - “rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal - it’s the ‘Hey! Reason works!’ - that matters. . . . It’s not really about medicine.” Our whole society was sick - root, branch, and memeplex - and rationality was the only cure. …I asked Vassar about his friend Yudkowsky. “He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.
One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes - foom! is Yudkowsky’s preferred expression - orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe - and so we, and everything else, end up as office supplies. Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI - and, incidentally, the problem of a universal human ethics - before an indifferent, unfriendly AI escapes into the wild.
Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished.
Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me. “I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) “It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.”
The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7% American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo. Forty-two people, 2.6%, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information. This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine.
As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”
In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI - “There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’” - but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations - the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student disserting on Descartes, and Connection Theory is just the start of a universal renovation. Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps.
It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.” On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves. It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization. For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math. By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. And then, like, keep in touch by email.” The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.
That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING. I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?” I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If-” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe. I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.” I wondered whether Vassar was drunk. He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.
It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.” We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. “But if you have conversations here” - he gestured at the hotel - “people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.” In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”
…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly, rather an unending, self-satisfied parable about rationality and transhumanism, with jokes.
…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.
…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”
Pointer thanks to /u/Vulture.
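A side note on the jargon for readers new to it: the “Bayes’s Theorem” the article keeps gesturing at is just the update rule P(H|E) = P(E|H) × P(H) / P(E). Below is a minimal sketch with made-up numbers, purely illustrative and not taken from the article or from any MIRI material:

# Bayes's Theorem: posterior = likelihood * prior / marginal probability of the evidence.
# All numbers here are illustrative placeholders.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# A hypothesis you gave 1% credence, and evidence ten times likelier if it's true:
print(update(0.01, 0.50, 0.05))  # ~0.092: belief goes up, but nowhere near certainty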
154 comments
Comments sorted by top scores.
comment by swfrank · 2014-12-13T16:42:50.598Z · LW(p) · GW(p)
Hi everyone. Author here. I'll maybe reply in a more granular way later, but to quickly clear up a few things:
-I didn't write the headlines. But of course they're the first thing readers encounter, so I won't expect you to assess my intentions without reference to them. That said, I especially wanted to get readers up to half-speed on a lot of complicated issues, so that we can have a more sophisticated discussion going forward.
-A lot fell out during editing. An outtake that will be posted online Monday concerns "normal startup culture"--in which I went to TechCrunch Disrupt. I don't take LW/MIRI/CFAR to be typical of Silicon Valley culture; rather, a part of Bay Area memespace that is poorly understood or ignored but still important. Of course some readers will be put off. Others will explore more deeply, and things that seemed weird at first will come to seem more normal. That's what happened with me, but it took months of exposure. And I still struggle with the coexistence of universalism and elitism in the community, but it's not like I have a wholly satisfying solution; maybe by this time next year I'll be a neoreactionary, who knows!!
-Regarding the statistics and summary of the LW survey. That section was much longer initially, and we kept cutting. I think the last thing to go was a sentence about the liberal/libertarian/socialist/conservative breakdown. We figured that the various "suggestive statistical irrelevancies" would imply the diversity of political opinion. Maybe we were overconfident. Anyway, after the few paragraphs about Thiel, I tried not to treat libertarianism until the final sections, and even there with some sympathy.
-"Overhygienic," I can see how that might be confusing. I meant epistemic hygiene.
-letters@harpers.org for clarifying letters, please! And I'm sam@canopycanopycanopy.com.
Replies from: NancyLebovitz, Will_Newsome, Julia_Galef, Will_Newsome, None, None, Jonathan_Graehl
↑ comment by NancyLebovitz · 2014-12-14T16:00:31.857Z · LW(p) · GW(p)
Thanks for showing up.
Replies from: swfrank
↑ comment by swfrank · 2014-12-14T19:12:30.603Z · LW(p) · GW(p)
While I'm here, let me plug two novels I think LW readers might appreciate: Watt by Samuel Beckett (an obsessively logical, hilarious book) and The Man Without Qualities by Robert Musil, whose hero is a rationalist in abeyance (Musil was a former engineer, philosopher, and psychologist himself).
↑ comment by Will_Newsome · 2014-12-14T19:30:31.682Z · LW(p) · GW(p)
Good sociology yo, good sardonicism without sneering, best article I've seen about this subculture yet.
↑ comment by Julia_Galef · 2014-12-15T19:13:12.466Z · LW(p) · GW(p)
Thanks for showing up and clarifying, Sam!
I'd be curious to hear more about the ways in which you think CFAR is over-(epistemically) hygienic. Feel free to email me if you prefer, but I bet a lot of people here would also be interested to hear your critique.
↑ comment by Will_Newsome · 2014-12-14T19:34:42.592Z · LW(p) · GW(p)
"Almost everyone found politics to be tribal and viscerally upsetting."
This is gold.
↑ comment by [deleted] · 2014-12-15T07:11:11.946Z · LW(p) · GW(p)
And I still struggle with the coexistence of universalism and elitism in the community, but it's not like I have a wholly satisfying solution; maybe by this time next year I'll be a neoreactionary, who knows!!
An interesting problem. There are a few things that can be said about this.
1) Neoreaction is not the only tendency that combines universalism and elitism -- for that matter, it consistently rejects universalism, so it's one way of resolving the tension you're perceiving. Another way is to embrace both: this could be done by belief in a heritable factor of general intelligence (which strikes me as the rationalist thing to do, and which necessarily entails some degree of elitism), but that's merely the most visible option. An alternative is to say that some cultures are superior to others (the North to the South for a common political example, aspiring-rationalist culture to the culture at large for a local one), which also necessarily entails elitism: at the very least, the inferiors must be uplifted.
2) The coexistence of universalism and elitism (and technocratic progressivism) is reminiscent of the later days of the British Empire. They believed that they could figure out a universal morality -- and beyond that, a universally proper culture -- but, of course, only the more developed and rational among even their own people could play a part in that. I suspect that LW draws disproportionately from communities that contain ideological descent from the British Empire, and that its surrounding baggage uncritically reflects that descent -- in fact, this is provably true for one aspect of LW-rationality unless utilitarianism was independently developed somewhere else. (The last line from the last point sounds familiar.)
3) Neoreaction is probably partially an exercise in constructing new narratives and value-systems that are at least as plausible as the ones that are currently dominant. This isn't incompatible with the generation of true insights -- in fact, the point can't be made with false ones. (Obviously false ones, at least, but if the epistemic sanity waterline isn't high enough around here to make that almost as difficult a constraint, rationalism has probably failed.) There's also some shock-jock countersignaling stuff, especially with Moldbug.
4) The comparative study of civilizations leads (at least when taken in conjunction with the point that technological progress and political progress cannot be assumed to be the same, or even driven by the same factors, except insofar as technology can make possible beneficial things that would not have been possible otherwise -- though it can do the same for harmful things, like clickbait or the nuclear bomb) to two insights: first, that civilizations keep collapsing, and second, that they tend to think they're right. No two fundamentally disagreeing civilizations can be right at the same time -- so either value-systems cannot be compared (which is both easily dismissed and likely to contain a grain of truth for the simple reason that, if any of our basic moral drives come neither from culture nor about facts about the outside world, what else could they be but innate? Even the higher animals show signs of a sense of morality in lab tests, I've heard.) or one of them is wrong. It's the same argument as the atheist one against religion, just fully generalized. (I don't think the argument works for atheism, since, if you grant that the God or gods of divine-containing religions want humans to follow them, Christianity and the various paganisms can't be seriously compared -- but I digress.) Hence the utility of generating alternative narratives for the cause of seeking truth.
5) People concerned about civilizational risk would do well to take the possibility of collapse seriously, as per the fourth point. People who want to hurry up and solve morality and build it into a friendly AI, even more so. Those who believe that every civilization would come to the same moral solution should want there to be as many people likely to support this goal and do good and useful work toward it as possible, before a government or a business beats them to it, which seems to imply that they should either want there to be as many not-unfriendly and likely-to-be-useful civilizations as possible or that they should at least want Western civilization (i.e. the USA, Canada, and some amount of Europe depending on who you talk to) not to collapse, since it's generated by far the highest proportion of people who take that task seriously. (IIRC the last part is close to the reasoning Anissimov went through, but I could be misremembering.)
(There's likely to be at least one part of this that's completely wrong, especially since it's two in the morning and I'm rushing through this so I can sleep. A slice of virtual Stollen to anyone who finds one -- it's that time of the year.)
Replies from: SilentCal, Lumifer, None
↑ comment by SilentCal · 2014-12-15T20:21:09.172Z · LW(p) · GW(p)
I'm not sure I see the contradiction. "We have found the way (elitism), and others should follow (universalism)" seems like a pretty coherent position, and one I'd expect to see throughout history, not just in the British Empire. Isn't it implicit in the idea of missionary religion, and of much philosophy?
Granted, there's a distinction you can make between "We found the way by luck" and "We found the way by virtue". The former is less elitist than the latter, but it still entails that "our way is better than yours".
...I think I've lost sight of what defines 'elitism' besides believing something.
Replies from: None, None
↑ comment by [deleted] · 2014-12-17T02:23:04.627Z · LW(p) · GW(p)
Dammit! You win an entire virtual Stollen.
I still suspect there are differences in how this combination is enforced, but I'll need to do a lot more research now. Anyone know of any good books on the French or Spanish Empires, or the Islamic conquests?
...oh, Islam is actually a good example: their thing seems to be directly manipulating the incentive structure, whether by the jizya or the sword. Did they force Christians to go to Islamic schools, or did they just tax the Christians more than the Muslims? (Or neither? Did Christians have to pay zakat? IIRC they didn't, but it might have varied...?)
Replies from: alienist
↑ comment by alienist · 2014-12-17T02:44:01.001Z · LW(p) · GW(p)
Did they force Christians to go to Islamic schools, or did they just tax the Christians more than the Muslims? (Or neither? Did Christians have to pay zakat? IIRC they didn't, but it might have varied...?)
I've heard that at one point the authorities were discouraging conversion to Islam because of the effect on tax revenue.
Replies from: Sarunas
↑ comment by Sarunas · 2014-12-17T23:24:28.249Z · LW(p) · GW(p)
According to the book "A Historical And Economic Geography Of Ottoman Greece: The Southwestern Morea in the 18th Century" by Fariba Zarinebaf, John Bennet and Jack L. Davis:
To finance its war efforts, the Ottoman state relied heavily on revenues from the cizye (poll tax) collected directly by the central treasury. Therefore, it generally did not support forced conversion of the non-Muslim reaya. The social pressure to convert must have been considerable, however, in areas where the majority of the population was Muslim. Furthermore, an increase in the amount of the cizye must also have indirectly encouraged conversion in the second half of the 16th century. An imperial order issued to the kadi of the districts of Manafge and Modon on 19 Zilkade 978/March 1570 stated that there were illegal attempts by taxfarmers to collect cizye from converts who were timar-holders and who had been serving in the Ottoman army for fifteen years. From this report it is clear that local Christians converted to Islam to enter the ranks of the military to avoid the payment of taxes. But it is also obvious that tax collectors and tax-farmers resented the tax-exempt privileges of the converts
Glossary:
cizye - Islamic poll tax imposed on a non-Muslim household
reaya - productive groups (peasants, merchants, artisans) subject to taxes, in contrast to askeri (q.v.) (military), who were tax-exempt
kadi - Muslim judge
Zilkade - Dhu al-Qi'dah, the eleventh month in the Islamic calendar. It is one of the four sacred months in Islam during which warfare is prohibited, hence the name ‘Master of Truce’.
timar - prebend in the form of state taxes in return for regular military service, conventionally less than 20,000 akçes (q.v.) in value
↑ comment by Lumifer · 2014-12-16T04:52:06.562Z · LW(p) · GW(p)
No two fundamentally disagreeing civilizations can be right at the same time -- so either value-systems cannot be compared ... or one of them is wrong.
Think it's a bit more complicated. The issue is that while value systems can be compared, there are many different criteria by which they can be measured against each other. In different comparison frameworks the answer as to which is superior is likely to be different, too.
Consider e.g. a tapir and a sloth. Both are animals which live in the same habitat. Can they be compared? They "fundamentally disagree" about whether it's better to live on the ground or up in the trees -- is one of them "right" and the other "wrong"?
This, by the way, probably argues for your point that generating alternative narratives is useful.
Replies from: None
↑ comment by [deleted] · 2014-12-17T02:31:34.949Z · LW(p) · GW(p)
Good point -- you have to take into account technological, genetic, geographic, economic, geopolitical, etc. conditions as well.
(Which poses an interesting question: what sort of thing is America or any one of its component parts to be compared to? Or is there a more general rule -- something with a similar structure to "if the vast majority of other civilizations would disagree up to their declining period, you're probably wrong"?)
Steppe hordes, sea empires, and hill tribes may be alike enough that similar preconditions for civilization would be necessary. (cf. hbdchick's inbreeding/outbreeding thing, esp. the part about the Semai: same effect, totally different place)
↑ comment by [deleted] · 2014-12-16T12:56:15.657Z · LW(p) · GW(p)
4) The comparative study of civilizations leads (at least when taken in conjunction with the point that technological progress and political progress cannot be assumed to be the same, or even driven by the same factors, except insofar as technology can make possible beneficial things that would not have been possible otherwise -- though it can do the same for harmful things, like clickbait or the nuclear bomb) to two insights: first, that civilizations keep collapsing, and second, that they tend to think they're right. No two fundamentally disagreeing civilizations can be right at the same time -- so either value-systems cannot be compared (which is both easily dismissed and likely to contain a grain of truth for the simple reason that, if any of our basic moral drives come neither from culture nor about facts about the outside world, what else could they be but innate? Even the higher animals show signs of a sense of morality in lab tests, I've heard.) or one of them is wrong. It's the same argument as the atheist one against religion, just fully generalized. (I don't think the argument works for atheism, since, if you grant that the God or gods of divine-containing religions want humans to follow them, Christianity and the various paganisms can't be seriously compared -- but I digress.) Hence the utility of generating alternative narratives for the cause of seeking truth.
I think this is the completely wrong part, in that it assumes that any living individual ever considers everything about their civilization to be Good and Right. By and large, even the ruling classes don't get everything they want (for example, they wanted a Hayekian utopia along Peter Thiel's lines, but what they got was the messiness of actually existing neoliberalism). And in fact, one of the chief causes for the repeated collapses is that institutional structures usually can't take being pushed and pulled in too many contradictory directions at once without ceasing to act coherently for anything at all (they become "unagenty", in our language).
The US Congress is a fairly good present-day example: it's supposed to act for the people as a whole, for the districts, and for the "several States"; for the right of the majority to govern as they will and for the right of small ideological minorities to obstruct whatever they please; for the fair representation of the voters and for the institutionalization of the political parties. When these goals become contradictory instead of complementary, the institution stops functioning (ie: it passes no legislation, not even routine matters, instead of merely passing legislation I disagree with), and society has to replace it or face decline.
Replies from: None
↑ comment by [deleted] · 2014-12-17T02:24:04.187Z · LW(p) · GW(p)
I think this is the completely wrong part, in that it assumes that any living individual ever considers everything about their civilization to be Good and Right. By and large, even the ruling classes don't get everything they want (for example, they wanted a Hayekian utopia along Peter Thiel's lines, but what they got was the messiness of actually existing neoliberalism).
I'm not talking about practice, but rather about ideals, value systems, that sort of thing. Tumblrites haven't gotten what they want either -- but they still want what they want, and what they want is determined by something, and whatever that something is, it varies.
↑ comment by [deleted] · 2014-12-16T12:39:32.249Z · LW(p) · GW(p)
I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?”
This is hilarious, in implying exactly the reason I go to LW meetups (there's other ultra-nerds to socialize with!) and why I don't go to CFAR workshops (they're an untested self-help program that asks me to pay for the privilege of doing what I could do for free at LW meetups).
-Regarding the statistics and summary of the LW survey. That section was much longer initially, and we kept cutting. I think the last thing to go was a sentence about the liberal/libertarian/socialist/conservative breakdown. We figured that the various "suggestive statistical irrelevancies" would imply the diversity of political opinion. Maybe we were overconfident.
I think you were overconfident: the article definitely comes across as associating "cyberpunks, cypherpunks, extropians, transhumanists, and singularians" with right-libertarianism. As the survey confirms, LW and its "rationalists" and assorted nerds in each of those other categories vary across the entire spectrum of opinions commonly held by highly-educated and materially privileged white male Western technologists ;-).
Overall, brilliant article. If our group came across looking insane, that's our fault, since we wave our meta-contrarian flags so emphatically and signal a lot of ego.
maybe by this time next year I'll be a neoreactionary, who knows!!
Now, a small rebuke: I know you are trying to signal a humble openness to new knowledge, but to the best of my knowledge, neoreaction is incorrect. It's not wise to be so open-minded your brains fall out, like Michel Foucault praising the Iranian Revolution.
Replies from: swfrank
↑ comment by Jonathan_Graehl · 2014-12-29T01:29:58.003Z · LW(p) · GW(p)
I liked the excerpts gwern quoted and see truth (and positive things) in most of it. "Hydra-headed" for EY's writing seems inapt. If you refute one of his essays 3 more will spring up in response?
Not sure what skill set Vassar thinks survives in only about 1 in 3,000 people - exploring+building boldly? Leadership?
Almost running a red light while buzzed+chatting. Hm. Well, I'm sure we all try to have a healthy respect for the dangers of killing and being killed while driving cars.
comment by Sarunas · 2014-12-13T01:19:41.247Z · LW(p) · GW(p)
Given the writing style, it seems to me that the author intended this piece to be read as a travelogue ("a trip to a faraway land") rather than an article that tries to explain something to the readers. It is my impression that he does not try to think about the subject matter; he tries to feel it, as a traveler who accidentally wandered here. Thus the author writes about his experiences and pays attention to small and idiosyncratic things (instead of trying to analyze anything); the short sentences and quick leaps of thought are probably the author's way to give the reader the dizzying feeling which, I guess, the author himself felt during his trip to an unfamiliar place, meeting so many people in a short period of time. So, I guess, the author didn't care that much about whether his travelogue would give the magazine's readers an accurate and insightful understanding of MIRI, CFAR and LessWrong. Instead, he probably cared about conveying his emotions and experiences to his readers.
It seems to me that the author didn't intend this piece to be thought of as a review of MIRI's activities. It seems to be as much (or maybe even more) about his trip as it is about the community he visited. Once you put this piece in the travelogue reference class, some of its flaws seem to be simply peculiarities of the writing style typical of that genre.
Replies from: Vulture, chaosmage
↑ comment by chaosmage · 2014-12-14T09:20:00.002Z · LW(p) · GW(p)
A more analytical piece would have been more verbose and challenging. There isn't exactly a lack of verbose and challenging intros to rationality. This fills a gap.
I'm sure the author consciously picked the kinds of impressions that he believes would be most relevant to readers, were they to make his journey. So when he describes, say, Eliezer's body language, I suspect that body language really is remarkably odd (at least on first impressions), not that the author picked something small and idiosyncratic to remark on for no reason.
Replies from: ChristianKl
↑ comment by ChristianKl · 2014-12-14T11:02:08.656Z · LW(p) · GW(p)
Where do you think that Eliezer's body language is described as remarkably odd? If you mean "He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him.", I'm not sure.
It might be a reference to the way Eliezer talks about his body, and to ideas from cryonics that only the brain matters.
Replies from: chaosmage
↑ comment by chaosmage · 2014-12-14T11:17:02.320Z · LW(p) · GW(p)
The "slightly alien" thing and this:
he kept grimacing as if not quite in control of his face
Replies from: ChristianKl, Jonathan_Graehl
↑ comment by ChristianKl · 2014-12-14T12:01:46.189Z · LW(p) · GW(p)
I'm not sure to what extent that's typical for Eliezer, as I have never met him in person. If it is, it's a sign that normal emotional regulation is off. But then, various people in this community aren't neurotypical.
↑ comment by Jonathan_Graehl · 2014-12-29T01:34:56.019Z · LW(p) · GW(p)
You can (or could) watch EY debating (e.g. with that presumptuous Jaron Lanier guy) over videoconference, and like many less-polished speakers he has some visible tics while searching for a thought or turn of phrase, while feeling under the gun + not wanting to lose his turn to speak.
comment by Vulture · 2014-12-12T23:46:52.497Z · LW(p) · GW(p)
For what it's worth, I perceived the article as more affectionate than offensive when I initially read it. This may have something to do with full piece vs. excerpts, so I'd recommend reading the full piece (which isn't that much longer) first if you care.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2014-12-14T17:50:16.523Z · LW(p) · GW(p)
I read just the excerpts, and I still thought that it came off as affectionate.
comment by ESRogs · 2014-12-12T23:53:43.275Z · LW(p) · GW(p)
I enjoyed this opportunity to relive being Vassar'd.
Replies from: ESRogs, knb, Vaniver, vassarbatory
↑ comment by ESRogs · 2014-12-13T00:42:45.289Z · LW(p) · GW(p)
For anyone else going through withdrawal:
https://www.youtube.com/watch?v=qPFOkr1eE7I
Replies from: Manfred
↑ comment by knb · 2014-12-14T01:17:39.936Z · LW(p) · GW(p)
What does "Vassar'd" mean?
Replies from: ESRogs
↑ comment by ESRogs · 2014-12-14T04:49:54.924Z · LW(p) · GW(p)
It's this:
I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research.... “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed...
Vassar has a tendency to monologue. And a lot of what he says comes off as crazy at first blush. You get the impression he's just throwing stuff against the wall to see what sticks. Usually I find monologuers annoying, but I find Michael fascinating. It seems our author was similarly seduced.
... My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out.
↑ comment by Vaniver · 2014-12-13T00:19:07.722Z · LW(p) · GW(p)
I still want "Vascination" to catch on, but I don't know how to spell it. Vasscination? Vassination?
Replies from: ESRogs
↑ comment by ESRogs · 2014-12-13T00:32:09.362Z · LW(p) · GW(p)
I like the last one.
Replies from: Vaniver
↑ comment by vassarbatory · 2014-12-30T01:49:41.978Z · LW(p) · GW(p)
My circle of friends refer to this as "vassarbation" or being "vassarbated on"
comment by Julia_Galef · 2014-12-12T23:07:50.185Z · LW(p) · GW(p)
Perhaps this is silly of me, but the single word in the article that made me indignantly exclaim "What!?" was when he called CFAR "overhygienic."
I mean... you can call us nerdy, weird in some ways, obsessed with productivity, with some justification! But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?
[Edit: The author has clarified above that "overhygienic" was meant to refer to epistemic hygiene, not literal hygiene.]
Replies from: ChristianKl, Lumifer, Vaniver, devi, Dr_Manhattan
↑ comment by ChristianKl · 2014-12-13T14:17:01.851Z · LW(p) · GW(p)
But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?
I would guess >95% of 4-day retreats where 40 people are sharing food and close quarters don't include recommendations about the usage of hand sanitizer.
↑ comment by Lumifer · 2014-12-12T23:46:54.640Z · LW(p) · GW(p)
But how can you take issue with our insistence that people use hand sanitizer
You insisted (instead of just offering)? I would have found it weird. And told you "No, thank you", too.
Replies from: Julia_Galef
↑ comment by Julia_Galef · 2014-12-12T23:55:43.901Z · LW(p) · GW(p)
Edited to reflect the fact that, no, we certainly don't insist. We just warn people that it's common to get sick during the workshop because you're probably getting less sleep and in close contact with so many other people (many of whom have recently been in airports, etc.). And that it's good practice to use hand sanitizers regularly, not just for your own sake but for others'.
Replies from: Lumifer, ChristianKl
↑ comment by Lumifer · 2014-12-13T00:28:02.203Z · LW(p) · GW(p)
and in close contact with so many other people
So, people who commute by public transportation in a big city are just screwed, aren't they? :-)
it's good practice to use hand sanitizers regularly
I don't think so -- not for people with a healthy immune system.
↑ comment by ChristianKl · 2014-12-13T14:27:33.264Z · LW(p) · GW(p)
And that it's good practice to use hand sanitizers regularly, not just for your own sake but for others'.
Is that recommendation based on concrete evidence? If so, could you link sources?
Replies from: Julia_Galef
↑ comment by Julia_Galef · 2014-12-15T19:01:49.782Z · LW(p) · GW(p)
Sure, here's a CDC overview: http://www.cdc.gov/handwashing/show-me-the-science-hand-sanitizer.html
They seem to be imperfect but better than nothing, and since people are surely not going to be washing their hands every time they cough, sneeze, or touch communal surfaces, supplementing normal handwashing practices with hand sanitizer seems like a probably-helpful precaution.
But note that this has turned out to be an accidental tangent since the "overhygienic" criticism was actually meant to refer to epistemic hygiene! (I am potentially also indignant about the newly clarified criticism, but would need more detail from Sam to find out what, exactly, about our epistemic hygiene he objects to.)
↑ comment by Vaniver · 2014-12-13T00:18:09.162Z · LW(p) · GW(p)
But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?
So, I have noticed that I am overhygienic relative to the general population (when it comes to health; not necessarily when it comes to appearance), and I think that's standard for LWers. I think this is related to taking numbers and risk seriously; to use dubious leftovers as an example, my father's approach to food poisoning is "eh, you can eat that, it's probably okay" and my approach to food poisoning is "that's only 99.999% likely to be okay, no way is eating that worth 10 micromorts!"
Replies from: Gondolinian, Swimmer963, ChristianKl, dxu, gwillen
↑ comment by Gondolinian · 2014-12-13T01:33:12.102Z · LW(p) · GW(p)
micromorts
Nitpick: Isn't food poisoning non-fatal the vast majority of the time? Or were you using a broad definition of "okay"?
Replies from: Vaniver
↑ comment by Vaniver · 2014-12-13T01:48:41.443Z · LW(p) · GW(p)
Isn't food poisoning non-fatal a vast majority of the time? Or were you using a broad definition of "okay?"
Yeah; obviously missing a day or two to just being ill is a significant cost worth avoiding, but if you expect to live a long while, the chance of death typically ends up being more important in terms of total cost (because it's worse by a larger factor than it is rarer, I believe).
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-12-14T03:32:51.482Z · LW(p) · GW(p)
Interestingly, I think that when I'm not at work, I'm probably less hygienic than the average population–the implicit thought process is kind of like "oh my god, I have washed my hands every 5 minutes for 12 hours straight, I can't stand the thought of washing my hands again until I next have to go to work." I do make some effort at CFAR workshops but it's ughy.
↑ comment by ChristianKl · 2014-12-13T14:16:41.547Z · LW(p) · GW(p)
I think rating eating a dubious leftover as 10 micromorts comes from not taking numbers seriously. If you really think it's in that order of magnitude I would like to see the reasoning behind it.
Replies from: Vaniver↑ comment by Vaniver · 2014-12-13T17:10:48.958Z · LW(p) · GW(p)
I think rating eating a dubious leftover as 10 micromorts comes from not taking numbers seriously. If you really think it's in that order of magnitude I would like to see the reasoning behind it.
The base rate of death due to food-borne illness in the US is 10 micromorts a year; there's a conversion from 'per year' numbers to 'per act' numbers, the issue of how much comes from food starting off bad and how much comes from food going bad, and the issue of how good you are at detecting a pathogen risk by smell/sight, and I fudged all three as coming out to 1 when combined. (You could also add in the risk of days lost to sickness in terms of micromorts, instead of separate units, but that would probably be unnecessarily confusing.)
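A minimal back-of-the-envelope sketch of that conversion (the meals-per-year figure, the relative-risk multiplier, and the detection discount below are my own illustrative assumptions, chosen only to show how three fudge factors could plausibly multiply out to about 1; only the 10-micromorts-per-year base rate comes from the comment):
```python
# Illustrative sketch of the micromort estimate above; all numbers except
# the base rate are assumptions for illustration, not Vaniver's figures.
base_rate_micromorts_per_year = 10      # US death risk from food-borne illness
meals_per_year = 1000                   # ~3 meals/day -> per-act conversion
per_act_factor = 1 / meals_per_year

risk_ratio_dubious_vs_average = 3000    # a dubious leftover vs. an average meal
detection_discount = 0.33               # some spoiled food gets caught by smell/sight

combined_fudge = per_act_factor * risk_ratio_dubious_vs_average * detection_discount
micromorts_per_dubious_leftover = base_rate_micromorts_per_year * combined_fudge

print(f"combined fudge factor: {combined_fudge:.2f}")                        # ~1.0
print(f"risk per dubious leftover: {micromorts_per_dubious_leftover:.1f} micromorts")
```
Under those assumptions the three factors roughly cancel and the dubious leftover comes out near 10 micromorts per act; different assumptions would move the estimate by an order of magnitude in either direction.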
The real point, though, was to demonstrate how you can agree on facts but disagree on values; even if we both put the same probability on the risk of death, one of us is moved by it and the other isn't. (As well, I have a specialized vocabulary specifically targeted at dealing with these tiny risks of death that he doesn't use as much.) That's what 'overhygienic' means to me: "look at how far they're willing to go to avoid death!"
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-13T18:10:33.445Z · LW(p) · GW(p)
The AIDS risk of unprotected sex at a one-night stand is also a risk on the order of 10 micromorts, and quite a lot of people do care about it. (For values such as an infection risk of 0.1% if the other person has AIDS and 1% of the population having AIDS.)
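Spelled out, the arithmetic behind that order-of-magnitude figure: 0.01 (chance the partner is infected) times 0.001 (transmission risk per act) is 10^-5, i.e. about 10 in a million, which comes to roughly 10 micromorts per act if you treat an infection as roughly equivalent to an eventual death (a simplifying assumption made here for the sake of the comparison).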
The real point, though, was to demonstrate how you can agree on facts but disagree on values;
But there's no good reason to believe that there's agreement on facts. Plenty of people believe that being overhygienic leads to an increase in allergies and isn't healthy.
there's a conversion from 'per year' numbers to 'per act' numbers, the issue of how much comes from food starting off bad and how much comes from food going bad, and the issue of how good you are at detecting a pathogen risk by smell/sight, and I fudged all three as coming out to 1 when combined.
I don't think that's reasonable. It seems to me like all those factors are under 1.
The biggest of the factors in the US seem to be Salmonella, Toxoplasma gondii and Listeria, all pathogens that you can kill if you cook your food. The fourth factor is Norovirus, and there I'm not even sure that food going bad is a usual way of food getting poisoned. It's rather about uncleanness.
That's what 'overhygienic' means to me: "look at how far they're willing to go to avoid death!"
I don't think that most people who are overhygienic are that way because they follow a rational strategy, but rather because of an emotionally driven fear of uncleanness.
↑ comment by dxu · 2014-12-13T02:56:38.906Z · LW(p) · GW(p)
I think that's standard for LWers
How large is your sample size? (I would consider myself around average as far as hygiene goes, but my sense of the average level of hygiene of the general population may be somewhat skewed by hanging around someone who regularly picks food up off of the ground and eats it, so...)
Replies from: Vaniver↑ comment by Vaniver · 2014-12-13T18:10:28.416Z · LW(p) · GW(p)
How large is your sample size?
I've met probably ~100 LWers in person, but I notice that I was falling prey to confirmation / availability bias when writing the grandparent post. When I met up with Fluttershy for lunch in a restaurant, he took out the bottle of hand sanitizer he kept in his backpack, and that counted more heavily in my memory than the Seattle LW game night / group meal hosted at jsalvatier's place, where if I recall correctly people washed their hands in the sink if they wanted to, rather than there being some sort of explicit cleanliness norm, despite there being two LWers at the first event and ~twelve at the second. I can't recall a time when I thought a LWer was behaving in an obviously unclean manner, though that's rare enough anyway among people I'm around that I don't know how much evidence that is. (Thinking of the group I'm close to with the least overall healthiness, as evidenced by the prevalence of drinking, smoking, and (I'm pretty sure) promiscuity, even they throw out unrefrigerated leftovers with meat in them because of the influence of one of the members with a food service job (and thus the associated food safety training).)
Replies from: Fluttershy↑ comment by Fluttershy · 2014-12-13T22:13:37.005Z · LW(p) · GW(p)
Meeting you for lunch was fun! Normally, I would have just gone to the restroom to wash my hands; the reason I had left a bottle of hand sanitizer on the table was that I had wanted to be able to clean my hands without getting up from the table immediately after sitting down, given that some people think that getting up from the table is slightly rude. Using hand sanitizer just happened to be a more visible method of cleaning my hands than washing my hands in the restroom would have been.
On a related note, at the LW meetup after lunch, I remember that Frances passed a bottle of hand sanitizer around the table while we were in the middle of a conversation about how being hygienic was a good thing. I appreciated that.
Replies from: Vaniver↑ comment by gwillen · 2014-12-13T02:41:03.207Z · LW(p) · GW(p)
Can you try to summarize your rules of thumb on consumption of leftovers, and describe to what extent you think they've got a rational basis?
(I discovered last year that I'm actually more lax about it than some people I know, so I'm interested in what you and others think is risky versus safe behavior in this regard, and what that's based on. I guess when I was growing up we tended not to have a lot of leftovers, so it never came up, and I think I may lack an adequate fear of food poisoning as a result.)
Replies from: mindspillage, Vaniver↑ comment by mindspillage · 2014-12-13T07:57:35.185Z · LW(p) · GW(p)
I am far more lax than most people I know also--when I was growing up there were leftovers, but we couldn't afford to waste them unless they were really not good; I was still broke in college and would not turn my nose up at things other people were wary of. I have never been completely stupid about it, but I am not terribly afraid of food poisoning either, mostly because it barely registers on the list of risky activities I should worry about. (For comparison, I am convinced that my lack of driving skill would seriously injure myself or others, and so I don't drive, which apparently makes me weird.)
I have had food poisoning a handful of times--but mostly under conditions that even conscientiously hygienic people would consider fine... and once from dubious food while traveling, because really if you do not eat the street food you are wasting your airfare.
(gwillen, I swear I am not deliberately following you around!)
↑ comment by Vaniver · 2014-12-13T02:53:41.480Z · LW(p) · GW(p)
Can you try to summarize your rules of thumb on consumption of leftovers, and describe to what extent you think they've got a rational basis?
The primary things that come to mind are "if you notice anything off, dispose of it" and "store things in sealed containers with dates on post-it notes or written with dry erase markers," but most of the stuff I pay attention to these days are food prep rules (since I very rarely have leftovers, and most of the things I consume take a long time to go bad).
↑ comment by devi · 2014-12-12T23:29:47.449Z · LW(p) · GW(p)
But how can you take issue with our insistence that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?
This is not something that would cross my mind if I was organizing such a retreat. Making sure people who handled food washed their hands with soap, yes, but not hand sanitizer. Perhaps this is a cultural difference between (parts of) the US and Europe.
Replies from: Metus, gwillen↑ comment by gwillen · 2014-12-13T02:43:39.549Z · LW(p) · GW(p)
I think hand sanitizer is more feasible for practical reasons? Generally in the sorts of spaces where people gather for things like this, there is not a sink near the food. So I'm used to there being hand sanitizer at the beginning of the food line, not because hand sanitizer is great, but because it's inconvenient and time consuming (and overbearing) to ask everyone to shuffle through the restroom to wash their hands before touching the food.
Replies from: Gondolinian↑ comment by Gondolinian · 2014-12-13T03:06:16.775Z · LW(p) · GW(p)
Not to mention that if people touch the bathroom door handle, sink handle, etc. after they washed their hands, they'll get many of the germs they just washed off back onto their hands, whereas with hand sanitizer, all you need do is touch the pump and you're good to go.
↑ comment by Dr_Manhattan · 2014-12-17T16:17:53.795Z · LW(p) · GW(p)
I mean... you can call us nerdy, weird in some ways, obsessed with productivity,
You can add "literal" to that :-p
comment by Manfred · 2014-12-13T05:54:03.100Z · LW(p) · GW(p)
I really liked the level of subtextual snark (e.g. almost every use of the word 'rational'). This level of skepticism and mockery is, frankly, about what should be applied, and was fun to read.
I was surprised at the density of weirdness, not because it makes bad journalism, but because it's difficult for the audience to understand (e.g. /r/hpmor is just dropped in there and the reader is expected to deal). I like Sarunas' explanation for this. Fairness-wise, this was better than I expected, though with occasional surrenders to temptation (The glaring one for me was Will and Divia Eden).
Michael Vassar as our face was inevitable if disappointing. The writing about him was great. I feel like the descriptions of his clothing are the author making him a little funnier - nobody else gets clothing description.
The author's initial inability to read lesswrong makes me think we may need a big button at the top that says "First time? Click here!" and just dumps you into a beginner version of the Sequences page.
Replies from: ChristianKl, John_Maxwell_IV, None, orbenn↑ comment by ChristianKl · 2014-12-13T16:41:59.381Z · LW(p) · GW(p)
I feel like the descriptions of his clothing are the author making him a little funnier - nobody else gets clothing description.
That's not true. At the beginning he notices Bitcoin clothing; further toward the end he notices a Tesla jacket. He also speaks about his own Fermat's theorem T-shirt and how Eliezer reacts to it.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2014-12-13T09:12:16.952Z · LW(p) · GW(p)
Changing the about page/homepage is pretty easy. If you have a concrete suggestion it should be straightforward to implement. Currently the about page/sequences page/homepage/faq are somewhat optimized for exposing the reader to a broad cross-section of Less Wrong articles. The downside here is that we may be presenting users with an overwhelming number of choices.
↑ comment by [deleted] · 2014-12-16T12:57:48.539Z · LW(p) · GW(p)
I was surprised at the density of weirdness, not because it makes bad journalism, but because it's difficult for the audience to understand (e.g. /r/hpmor is just dropped in there and the reader is expected to deal).
At least he didn't sample the "Shit LessWrongers Say" Markov-chain bot.
↑ comment by orbenn · 2014-12-14T09:16:53.910Z · LW(p) · GW(p)
I agree. The difficult thing about introducing others to Less Wrong has always been that, even if the new person remembers to say "It's my first time, be gentle", Less Wrong has the girth of a rather large horse. You can't make it smaller without losing much of its necessary function.
comment by Punoxysm · 2014-12-12T22:15:16.744Z · LW(p) · GW(p)
Interesting excerpt.
First I'd say, to anyone who would call it unfair (I think it's far more nuanced and interesting than say the Slate article), that the author is pretty clear about what is alienating or confounding him. If many people dismiss LW and MIRI and CFAR for similar reasons, then the only rational response is to identify how that "this is ridiculous" response can be prevented.
Second, best HPMOR summary ever (I say this as a fan):
It's not what I would call a novel, exactly, rather an unending, self-satisfied parable about rationality and trans-humanism, with jokes.
Replies from: Vulture
↑ comment by Vulture · 2014-12-12T23:18:34.970Z · LW(p) · GW(p)
If many people dismiss LW and MIRI and CFAR for similar reasons, then the only rational response is to identify how that "this is ridiculous" response can be prevented.
I agree with your overall point, but I think that "this is ridiculous" is not really the author's main objection to the LW-sphere; it's clearer in the context of the whole piece, but they're essentially setting up LW/MIRI/CFAR as typical of Silicon Valley culture(!), a collection of mad visionaries (in a good way) whose main problem is elitism; ethereum is then presented as a solution to this problem, or at least as indicative of a better attitude. I don't necessarily agree with any of this, but that's what the thesis of the article seems to be.
Replies from: Punoxysm↑ comment by Punoxysm · 2014-12-12T23:44:38.247Z · LW(p) · GW(p)
You're right, the word "ridiculous" may not be correct. Maybe elitist, insular and postpolitical (which the author clearly finds negative), but the article speaks better for itself than I can.
Still, there's plenty of negative impressions (LW is a "site written for aliens") that could be dispelled.
Replies from: Vulture
comment by jefftk (jkaufman) · 2014-12-13T12:09:43.996Z · LW(p) · GW(p)
its keynote would be delivered by Ray Kurzweil, Google’s director of engineering
Standard correction: Kurzweil is one of many directors of engineering at Google. It's unfortunate that the name of his title makes it sound like he's the only one.
comment by SilentCal · 2014-12-12T23:05:03.548Z · LW(p) · GW(p)
Also, should we be doing a better job publicizing the fact that LW's political surveys turn up plurality liberal, and about as many socialists as libertarians? Not that there's anything wrong with being libertarian, but I'm uneasy having the site classified that way.
Replies from: None, ChristianKl, RobbBB, ciphergoth, HBDfan↑ comment by [deleted] · 2014-12-13T09:43:37.552Z · LW(p) · GW(p)
This will not work, to briefly explain why I think so:
For the intended audience of the article, Libertarianism is unusual; Liberalism is normative. If the community were completely liberal, its liberalism would not get more than one mention or so in the article; certainly it would not make the title.
The prevalence of Liberals and Socialists, no matter how emphasized, cannot lead to a rebranding as long as there is a presence of Libertarians in a fraction greater than expected. Indeed, even if Libertarians were precisely at the expected fraction, whatever that would be, they might still get picked up by people searching for weird, potentially bad things about this weird, potentially bad "rationality movement".
As evidence of this, note that no journalist so far considers the eerie near-total absence of normal conservatives, who make up half of the population of the United States, the country most strongly represented, to be an unusual feature of the community. And furthermore, if they somehow made up half of the community or some other "representative" fraction, this would be seen as a very strange, unusual, perhaps even worrying feature of the community.
Hypothetically the opposite effect should be seen as well: if somehow this place were 100% liberal, yet still in the weird, potentially bad mental bin of journalists, its weirdness and badness would lead to its liberalism not being mentioned. For an example of this, consider whether you associate Jim Jones's Peoples Temple, whose mass suicide gave rise to the phrase "drinking the Kool-Aid", with liberalism or socialism.
The only way to inoculate would be to loudly denounce and perhaps even purge libertarians. Perhaps a few self-eviscerating heartfelt admissions of "how rationality cured my libertarianism" for good measure. This wouldn't actually result in no Libertarians being present of course, though it would dent their numbers, but it would provide a giant sign of "it doesn't make sense to use this fact about the community". This doesn't always work, since denouncing the prominence of witches or making official statements about how they are unwelcome has been read as evidence for the presence of witchcraft by journalists in the past as well.
Beyond the question of whether it would work, I would like to more generally register my disapproval of this approach, since it would rapidly hasten the ongoing politicization of the rationality community, badly harming the art in the process. To take a step beyond that, I will also say that I think many libertarian rationalists carry interesting insight, precisely because of their ideology.
Replies from: None, SilentCal, gothgirl420666↑ comment by [deleted] · 2014-12-16T13:02:08.916Z · LW(p) · GW(p)
Honestly, it would still be better publicity, and equally unusual, if we were known as "Those people with the arrogant Harry Potter fanfiction" rather than "those techno-libertarians, one of whom wrote Harry Potter fanfiction." Harry Potter fanfic is something the popular audience can at least conceive of someone else enjoying. Techno-libertarianism has a smell, and that is the dead, dusty smell of server rooms in basements: all functionality with no humanity. It makes us sound like Cybermen from Doctor Who.
↑ comment by SilentCal · 2014-12-15T18:21:25.776Z · LW(p) · GW(p)
I think this is all pretty much true... but I still suspect that a lot of readers of that kind of article, if they learned that LW readers were mostly liberal/socialist, would be surprised and update a bit. Reading non-libertarian rationalists firsthand might do something similar.
ETA: That's not to say any particular publication strategy, or any publication strategy at all, would be effective, or that there wouldn't be other costs.
↑ comment by gothgirl420666 · 2014-12-16T02:40:47.972Z · LW(p) · GW(p)
I wonder if, despite the fact that LessWrong members are equally liberal and libertarian, the leaders of the movement are disproportionately libertarian in a way that merits mention. Eliezer and Vassar, the two people featured in the article, both seem to be. Scott Alexander seems to be libertarian too, or at least he seems to like libertarianism more than any other political ideology. Who else?
Replies from: None↑ comment by [deleted] · 2014-12-17T02:11:51.636Z · LW(p) · GW(p)
Scott Alexander seems to be libertarian too, or at least he seems to like libertarianism more than any other political ideology.
Scott Alexander as in Anti-Libertarian FAQ Scott Alexander?
He's a liberal. You probably think he's a libertarian because there aren't many liberals anymore -- most of the parts of their demographic that ever show up on the internet have gone over to Tumblr totalitarianism instead.
(There's probably a lesson in here about the Dark Arts: don't call up what you can't put down. You summon Jon Stewart, you'll get Julius Streicher within a decade.)
Replies from: gothgirl420666↑ comment by gothgirl420666 · 2014-12-17T08:53:53.704Z · LW(p) · GW(p)
I'm pretty sure that he has recently said
- He wants to update the anti-libertarian FAQ, but he isn't sure he's an anti-libertarian anymore
- He feels like he is too biased towards the right and is looking for leftist media in order to correct this
Taken together, these imply to me that he favors libertarianism, but idk, I could be wrong; I don't think he has ever really come out and said anything concrete about his beliefs on policy proposals. He also seems not to dislike Ayn Rand, and talks sometimes about the power of capitalism, iirc.
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2014-12-18T22:09:02.464Z · LW(p) · GW(p)
Scott identifies as left-libertarian, so you're both right. Quoting "A Something Sort of Like Left-Libertarian-Ist Manifesto":
"[Some people] support both free markets and a social safety net. You could call them 'welfare capitalists'. I ran a Google search and some of them seem to call themselves 'bleeding heart libertarians'. I would call them 'correct'." [...]
"The position there’s no good name for – 'bleeding heart libertarians' is too long and too full of social justice memes, 'left-libertarian' usually means anarchists who haven’t thought about anarchy very carefully, and 'liberaltarian' is groanworthy – that position seems to be the sweet spot between these two extremes and the political philosophy I’m most comfortable with right now. It consists of dealing with social and economic problems, when possible, through subsidies and taxes which come directly from the government. I think it’s likely to be the conclusion of my long engagement with libertarianism (have I mentioned I only engage with philosophies I like?)"
This is still probably an oversimplification, and Scott's views may have developed in the year since he wrote that article -- in particular, his Moloch piece and exploration of Communism suggest he's seriously considering autocratic views on the far left and far right, though he has yet to be won over by one. He likes meta-level views and views that can be seen, from different angles, as liberal, conservative, or libertarian -- the Archipelago being a classic example.
I don't think he has ever really come out and said anything about his concrete beliefs on policy proposals.
He cites Jeff Kaufman's policy proposals extremely approvingly: "Please assume this, if not quite a Consensus Rationalist Opinion on politics, is a lot closer to such than what random people on Tumblr accuse us of believing.". Since he thinks this is a very reasonable mainstream-for-rationalists set of proposals, he probably agrees with most of the proposals himself, or at least finds them very appealing.
↑ comment by ChristianKl · 2014-12-13T16:22:19.248Z · LW(p) · GW(p)
His main classification of LW wasn't libertarian but post-political. On a website that uses the slogan "politics is the mindkiller" I think post-political is a fair label.
The article is not only about LW but also about people like Peter Thiel who are clearly libertarian and the crypto-currency folks who are also libertarian.
Most of those people who answer liberal or socialist on the LW survey don't do anything political that matters. Thiel, on the other hand, is a global player who matters.
The core summary of LW's politics in the article might be this Vassar quote:
“You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.”
I think most of those people who do label themselves as liberal or socialist on LW would agree with that sentiment. Getting politics right is not about being left, right or libertarian but about actually thinking rationally about the underlying issues. That's post-political from the perspective of the author.
That's in the CFAR mission statement:
What if we could shrug off our feelings of defensiveness, and honestly evaluate the evidence on both sides of an issue before deciding which legislation to pass, what research to fund, and where to donate to do the most good?
I think the author was right to present that idea as the main political philosophy of LW instead of just pattern matching to the standard labels.
If you want to be perceived as a liberal or socialist community then you would need people who not only self-label themselves that way on surveys but who also do something under those labels that's interesting to the outside world.
Replies from: None↑ comment by [deleted] · 2014-12-16T13:09:32.931Z · LW(p) · GW(p)
I think most of those people who do label themselves as liberal or socialist on LW would agree with that sentiment. Getting politics right is not about being left, right or libertarian but about actually thinking rationally about the underlying issues. That's post-political from the perspective of the author.
It's also, strictly speaking, incorrect. A set of propositions must have some very specific properties in order to be made into a probability distribution:
1) The propositions must be mutually exclusive.
2) Each proposition must be true in some nonzero fraction of possible worlds/samples.
3) In any given possible world/sample, at least one of the propositions must be true (the set must be exhaustive).
(This is assuming we're talking about atomic events rather than compound events.) So for example, when rolling a normal, 6-sided die, we can only get one number, and we also must get one number. No more, no less.
Political positions often fail to be mutually exclusive (in implementation if not in ideal), and the political reasoning we engage in on most issues always fails to exhaust the entire available space of possible positions.
This means that when it comes to these issues, we can't just assign a prior and update on evidence until we have evidence sufficient to swamp the prior, and then declare ourselves to have arrived at a "rational" conclusion. The relevant propositions simply don't obey the axioms of probability like that. Outside Context Problems can and do occur, and sometimes Outside Context Solutions are the right ones, but we didn't think of them because we were busy shuffling belief-mass around a tiny, over-constrained corner of the solution space.
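A minimal sketch of the failure mode being gestured at here, with invented hypotheses, likelihoods, and data (none of these numbers come from the thread): if the true explanation lies outside the set of propositions you normalize over, Bayesian updating still hands you a confident posterior over the wrong options, and nothing in the update flags "none of the above".
```python
import numpy as np

# Two "political" hypotheses we bothered to write down, each predicting
# the probability that a given intervention succeeds.
hypotheses = {"A": 0.8, "B": 0.6}   # P(success | hypothesis)

# The world is actually governed by an option we never considered.
true_success_rate = 0.3             # the "outside context" explanation

rng = np.random.default_rng(0)
data = rng.random(200) < true_success_rate   # 200 observed outcomes

# Uniform prior over the two hypotheses, then a standard Bayesian update.
log_posterior = {h: 0.0 for h in hypotheses}
for outcome in data:
    for h, p in hypotheses.items():
        log_posterior[h] += np.log(p if outcome else 1 - p)

z = np.logaddexp(*log_posterior.values())
posterior = {h: float(np.exp(lp - z)) for h, lp in log_posterior.items()}
print(posterior)
# The posterior ends up overwhelmingly favouring "B", even though "B"
# predicts a 60% success rate and the true rate is 30%: the update cannot
# express "neither", because that option was never in the support.
```
In this framing, the comment's point is that real political reasoning is closer to expanding and repairing the hypothesis set than to running this kind of update over a fixed one.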
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-16T13:31:56.528Z · LW(p) · GW(p)
I'm not sure how what you wrote relates to what I wrote.
Plenty of policy ideas on LW are outside of the standard context of left vs. right. I don't think that this community is reasonably criticised for not thinking enough about outside-context solutions.
Replies from: None↑ comment by [deleted] · 2014-12-16T13:42:30.238Z · LW(p) · GW(p)
Sorry, I didn't mean that "LW is outside standard left vs right." I meant that "post-political" politics is categorically impossible when you can't exhaustively evaluate Solomonoff Induction. You cannot reduce an entire politics to "I rationally evaluated the evidence and updated my hypotheses", because the relevant set of propositions doesn't fit the necessary axioms. Instead, I think we have to address politics as a heuristic, limited-information, online-learning utility-maximization inference problem, one that also includes the constraint of trying to make sure malign, naively selfish, ignorant, and idiotic agents can't mess up the strategy we're trying to play while knowing that other agents view us as belonging to all those listed categories of Bad People.
So it's not just an inference problem with very limited data, it's an inference about inference problem with very limited data. You can't reduce it to some computationally simpler problem of updating a posterior distribution, you can only gather data, induce improved heuristics, and hope to God you're not in a local maximum.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-16T15:04:22.962Z · LW(p) · GW(p)
I think rationality in the LW sense can be said to be about heuristics, limited information, and online-learning utility-maximization inference problems.
If you say that on LW, people are generally going to agree and maybe add a few qualifiers. If someone on the Huffington Post said "We should think about politics as a heuristic, limited-information, online-learning utility-maximization inference problem", the audience wouldn't know what you were talking about.
the strategy we're trying to play while knowing that other agents view us as belonging to all those listed categories of Bad People
That assumes that the best way to act is one where other agents get a sense that you are playing, or of what you are playing.
There's no reason to believe that a problem being visible makes it important. Under the Obama administration the EPA managed to raise standards on mercury pollution by being able to calculate that the IQ points of American children are worth more than the money it costs to reduce pollution. The issue didn't become major headlines because nobody really cared about making it a controversial issue. Nobody had the stomach to give a speech about how the EPA should value the IQ of American kids less.
At the same time, the EPA didn't get anything done on global warming, the topic that actually was in the news.
Naomi Klein's description of how white men in Africa kept economic power when they gave blacks "equal rights" is a good example of how knowledge allows acting in a way that makes it irrelevant that the whites were seen as Bad People.
Playing 1 or 2 levels higher can do a lot.
↑ comment by Rob Bensinger (RobbBB) · 2014-12-13T01:02:43.128Z · LW(p) · GW(p)
It's possible to consistently call us disproportionately libertarian, or to call us disproportionately straight and cis, but not both. :)
Replies from: dxu↑ comment by Paul Crowley (ciphergoth) · 2014-12-13T08:38:03.764Z · LW(p) · GW(p)
There is no publicising anything to these people.
comment by HumanPlus · 2014-12-14T02:10:14.942Z · LW(p) · GW(p)
a Mormon turned atheist
Fun to be mentioned.
Replies from: palladias, HumanPlus, advancedatheist↑ comment by palladias · 2014-12-14T14:56:16.375Z · LW(p) · GW(p)
an atheist turned Catholic
Ditto. :)
Replies from: khafra↑ comment by khafra · 2014-12-15T11:58:25.826Z · LW(p) · GW(p)
I just want to know about the actuary from Florida; I didn't think we had any other LW'ers down here.
Replies from: XFrequentist, rule_and_line↑ comment by XFrequentist · 2014-12-16T18:41:06.451Z · LW(p) · GW(p)
I know who this is. If he doesn't out himself I'll PM you with contact info.
↑ comment by rule_and_line · 2014-12-23T23:22:28.436Z · LW(p) · GW(p)
Hey, that's me! I also didn't think we had other LWers down here. PM sent, let's meet up after the holidays.
↑ comment by HumanPlus · 2014-12-16T04:36:39.820Z · LW(p) · GW(p)
/u/advancedatheist, I've thought a lot about that. Especially considering that I was a Mormon transhumanist before I lost all belief.
Honestly, while I think that the singularity will happen, I have lots of doubts.
I don't know if it will turn out good or bad. There are so many points where it can just go FUBAR.
I don't see any leaders as messiahs. Some smart people will be more instrumental than others in advancing technology, but they are human, and make mistakes.
It doesn't have much to do with how I live my day to day. I do have some meta-goals about being the kind of person who will survive to see it, if it happens within my reasonable lifespan, and about being open-minded to all of the changes that will happen.
↑ comment by advancedatheist · 2014-12-14T18:44:34.727Z · LW(p) · GW(p)
That depends. Years ago Thomas Donaldson wrote the following:
THE APOCALYPSE HAS BEEN CALLED OFF
http://www.alcor.org/cryonics/cryonics8906.txt
Very many of us come from a Christian background. I do myself. I became an atheist very early in my teens. But our background is very important. It can fool us as to what we really believe. I have noticed, too much, both in cryonics and out, a strong desire to interpret nanotechnology (and before it there were others) in the exact terms of Christian myth. It's as if a person carries out a renaming exercise (God == Nanotechnology, Apocalypse == Singularity, Drexler == Christ (sorry Eric! )). God's name is certainly not a central part of Christian doctrine. This person is a Christian, rather than the atheist he thinks he is. His differences from Christianity are sectarian, not philosophical. Personally I believe no conflict exists between Christianity and cryonics, although many churches will meet with disaster (and deserve to!) for tying their message so much to death. It's not wrong to be Christian. But it is dishonest to oneself and others to think that just renaming everything, and having a slightly different theory of how God works, frees one from Christianity to light.
So ask yourself: Have you really become an atheist, or have you just switched denominations to one with a "slightly different theory about how God works"?
Replies from: gjm↑ comment by gjm · 2014-12-14T23:46:24.671Z · LW(p) · GW(p)
This person is a Christian, rather than the atheist he thinks he is.
Yeah, totally. Apart from little details such as
- whether he expects "God" to judge him on the basis of character, actions, religious affiliation, etc.
- whether he thinks "God" is an authority on (or indeed the ultimate source of) moral values
- whether he believes that "Christ" has died and been raised from the dead
- whether he sees the Christian scriptures as authoritative, inspired, etc.
- whether he regards "God" as (at least) a person with preferences, opinions, the possibility of interpersonal interaction, etc.
I think there's a reasonable case to be made that some people think about the Singularity in quasi-religious terms. But this sort of ridiculous overstatement does no one any favours.
comment by devi · 2014-12-12T23:34:02.145Z · LW(p) · GW(p)
Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, ...
The author makes it sound like this makes us a very male-dominated straight cisgender community.
Mostly male, sure. But most people won't compare the percentage of heterosexuals and cisgenders with that of the general population to note that we are in fact more diverse.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2014-12-13T08:37:41.647Z · LW(p) · GW(p)
Never mind comparing; simply writing "Women made up 11.2% of respondents, 21.3% were not straight..." would have put a very different spin on it.
Replies from: chaosmage↑ comment by chaosmage · 2014-12-14T09:56:53.813Z · LW(p) · GW(p)
Sorry, I don't see it. How would that've been different?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2014-12-14T16:40:48.601Z · LW(p) · GW(p)
For the same reason they write "90% fat free" on foods rather than "10% fat".
comment by Daniel_Burfoot · 2014-12-13T23:31:42.670Z · LW(p) · GW(p)
Mike Vassar should have his own television show.
Replies from: drethelin, Bruno_Coelho↑ comment by drethelin · 2014-12-14T04:36:08.866Z · LW(p) · GW(p)
just put a go pro on him for 8 hours a day, then edit it down
Replies from: Daniel_Burfoot↑ comment by Daniel_Burfoot · 2014-12-14T19:12:05.609Z · LW(p) · GW(p)
Special episode of Silicon Valley, guest starring Michael Vassar as himself?
↑ comment by Bruno_Coelho · 2014-12-18T14:15:28.359Z · LW(p) · GW(p)
Somehow, LW/MIRI can't disentangle research and weirdness. Vassar is one of the guys who, when he gives public interviews, ends up giving this impression.
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2014-12-14T03:29:08.037Z · LW(p) · GW(p)
Wow, reading this is surreal.
comment by Kawoomba · 2014-12-12T22:45:20.638Z · LW(p) · GW(p)
Some of the awkward personal details which are so deliciously expounded upon come close to character assassination. Shame on the author, going for the cheap shots. I can just imagine an exposé of the very same style used to denigrate, say, Alan Turing.
Replies from: gwern, chaosmage, fubarobfusco, Brillyant↑ comment by chaosmage · 2014-12-14T09:48:05.215Z · LW(p) · GW(p)
That's only true if you read everything as judgement. In my opinion, the author cared much more about describing impressions than about rating things on any overarching scale, let alone a basic like/dislike scale.
Replies from: Kawoomba↑ comment by Kawoomba · 2014-12-14T09:58:47.569Z · LW(p) · GW(p)
Yet his readers will form an opinion, and you bet that's gonna be their opinion til kingdom come.
The connoisseur of literature in me appreciates his menagerie of the strange elephant men, his collection of curios.
But the consequentialist in me is pissed off. (#RiddickQuotesInUnlikelyPlaces)
Replies from: ChristianKl, chaosmage↑ comment by ChristianKl · 2014-12-14T11:39:21.987Z · LW(p) · GW(p)
Yet his readers will form an opinion, and you bet that's gonna be their opinion til kingdom come.
No. Most readers will soon forget whatever they read.
There's also hostile media bias. It's natural to consider neutral articles as biased against oneself.
But the consequentialist in me is pissed off
I don't think there's a reason to be. The saying that there's no such thing as bad PR exists for a reason.
I would expect LW/MIRI to be antifragile to most mainstream media criticism. Especially to articles that tell a story of how LW is strange and important.
Replies from: Kawoomba↑ comment by Kawoomba · 2014-12-14T12:07:13.495Z · LW(p) · GW(p)
Don't underestimate the Harper's readership. If any one of those encounters the subject again, they're prone to remember "haha, yea I read about those, something about a guy who can't control his face or something?", have a good laugh and move on. That kind of stuff is much more salient than some cursory pointers at some arguments, mostly with a one sentence "debunking" following.
The author has snuck in so many "these people are crazy", "these people actually don't have a clue" and "these people are full of themselves" counterpoints, each of which is presented far more authoritatively than MIRI's/CFAR's arguments are:
"Look around. If they were effective, rational people, would they be here? Something a little weird, no?" I walked outside for air.
This is the voice of god sneaking in and impressing on readers what they should think. Nice people, don't waste your time nor your money.
I would expect LW/MIRI to be antifragile to most mainstream media criticism.
"Criticism" is a nice phrase when we're talking about parading people's bodily shortcomings and "crazy" idiosyncrasies in lieu of actual counterpoints. Also, what is the goal of MIRI, if not to build legitimacy to eventually gain traction in specialist circles? People who, before seriously engaging or affiliating themselves, may just google MIRI and come across a certain Harper's article about a couple of "self-satisfied", self-taught apparent nut cases who score high on the crackpot index.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-14T13:15:54.108Z · LW(p) · GW(p)
"Look around. If they were effective, rational people, would they be here? Something a little weird, no?" I walked outside for air.
But at the same time, the article does mention that Vassar got his $500,000 from Peter Thiel to start MetaMed. It mentions that the IQ average is 138 without any questioning of that figure. There's a mention that the woman who invented the term "open source" was in attendance at the one event he attended.
I would expect that the average Harper's reader is in the humanities and gets the impression: "Those are strange nerds, who seem to do something I don't understand and who are currently trying to reorganize the world as they like via technology."
There's nothing wrong with appearing strange if you appear influential.
Also, what is the goal of MIRI, if not to build legitimacy to eventually gain traction in specialist circles?
Building traction in the AI community is about writing publishable papers and going to the industry conferences.
It also seems like it's FHI's role, rather than MIRI's, to build "legitimacy".
Replies from: RobbBB↑ comment by Rob Bensinger (RobbBB) · 2014-12-14T21:05:46.074Z · LW(p) · GW(p)
I liked the article on a personal level, but as PR I agree more with Kawoomba. It seems like a lot of people invested time into making this article well-informed and balanced, yet the result is a (mild) PR net-negative, albeit an entertaining one. We have positive associations with most of the things the article talks about, so we're likely to underestimate the effect of the article's negative priming and framing on a typical reader (which may include other journalists, and thereby affect our perception in future articles).
The article wants readers to think that Vassar has delusions of grandeur and Thiel is a fascist, so linking the two more tightly isn't necessarily an effort to make either one look better. And there's totally such a thing as bad press, especially when your main goal is to sway computer science types, not get the general public to notice you. This at best adds noise and shiny distractions to attempts to talk about LW/MIRI's AI views around the water cooler.
It's true FHI and FLI are more PR-oriented than MIRI, but that doesn't mean it's FHI's job to produce useful news stories and MIRI's job to produce silly or harmful ones. Better to just not make headlines (and reduce the risk of inoculating people against MIRI's substantive ideas), unless there's a plausible causal pathway from the article to 'AGI risk is reduced'.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-14T22:50:48.874Z · LW(p) · GW(p)
We have positive associations with most of the things the article talks about, so we're likely to underestimate the effect of the article's negative priming and framing on a typical reader
I think hostile media bias is stronger. Priors indicate that the average person on LW will have a more negative view of the article than warranted.
The article wants readers to think that Vassar has delusions of grandeur
I don't think the average person watching Vassar's Tedx talk would get the same impression as someone reading that article.
Thiel is a fascist
If that's what he wanted to do he would have made a point about what Palantir Technologies does. Maybe remind the readers of Palantir Technologies' responsibility for the attempt to smear Glenn Greenwald and destroy his career.
Instead the author just points out that Palantir is a nerdy name that comes from Lord of the Rings. Even if the author hasn't heard of the episode with Glenn Greenwald, leaving out that Palantir is a defense contractor that builds software for the NSA is a conscious choice that someone who wanted to portray Thiel as a fascist wouldn't make. The author went for "bunch of strange nerds" instead of "fascists".
And there's totally such a thing as bad press, especially when your main goal is to sway computer science types
I don't think that's the audience that Harper's magazine has. That's not for whom a journalist in that magazine writes.
Did you have any negative water cooler discussions with people because of that article?
↑ comment by chaosmage · 2014-12-14T11:08:15.167Z · LW(p) · GW(p)
That makes sense. I'd only add that the readers will probably form a range of opinions, not all have one opinion, as you appear to suggest.
But anyone's attention is limited, and it naturally goes to what they're unfamiliar with. When you, as one in the know, read that article, many of the rationalist talking points were familiar to you and didn't register much. A reader new to this would use up much of his/her attention on those parts, and less on mundane things like body shapes.
↑ comment by fubarobfusco · 2014-12-13T03:25:29.210Z · LW(p) · GW(p)
Dude was crazy about long-distance running.
↑ comment by Brillyant · 2014-12-13T05:01:33.564Z · LW(p) · GW(p)
Care to elaborate and/or give examples?
Replies from: Jayson_Virissimo↑ comment by Jayson_Virissimo · 2014-12-13T05:57:57.022Z · LW(p) · GW(p)
For starters, Turing had a high pitched stammer, extremely yellow teeth, and noticeably dirty fingernails (even though he was often seen biting them). No doubt, a less than sympathetic investigator would have found many more such things to mention in an article about him.
Replies from: TobyBartels↑ comment by TobyBartels · 2014-12-13T17:40:06.167Z · LW(p) · GW(p)
And he was gay (before the word ‘gay’ meant that).
comment by John_Maxwell (John_Maxwell_IV) · 2014-12-13T00:05:41.670Z · LW(p) · GW(p)
I'm curious what the goal of communicating with this journalist was. News organizations get paid by the pageview, so they have an incentive to sell a story, not spread the truth. And journalists also are famous for misrepresenting the people and topics they cover. (Typically when I read something in the press that discusses a topic I know about, they almost always get it a little wrong and often get it a lot wrong. I'm not the only one; this has gotten discussed on Hacker News. In fact, I think it might be interesting to start a "meta-journalism" organization that would find big stories in the media, talk to the people who were interviewed, and get direct quotes from them on if/how they were misrepresented.) If media exposure is a goal, you don't work with random journalists who come to you telling you that they want to include you in stories. You hire a publicist or PR firm that does the reverse and takes your story to journalists and makes sure they present it accurately.
Replies from: ChristianKl, Punoxysm↑ comment by ChristianKl · 2014-12-13T19:46:23.865Z · LW(p) · GW(p)
News organizations get paid by the pageview, so they have an incentive to sell a story, not spread the truth.
Harper's Magazine is not a website that counts pageviews as its prime metric. It makes money via subscriptions. Different business model.
In fact, I think it might be interesting to start a "meta-journalism" organization that would find big stories in the media, talk to the people who were interviewed, and get direct quotes from them on if/how they were misrepresented.
That could be useful for giving people a better idea of how the media works.
You hire a publicist or PR firm that does the reverse and takes your story to journalists and makes sure they present it accurately.
That's a naive view. There's no way a PR firm can force accurate representation.
↑ comment by Punoxysm · 2014-12-13T00:42:22.907Z · LW(p) · GW(p)
So would you suggest we only read PR-firm-generated articles to get the "real story"?
More direct answer: Not talking to journalists allows them to represent you however they want, along with the "refused to comment". Talking at least gets your own words in.
I also don't see anything clearly unethical in this article's journalism.
Replies from: ChristianKl↑ comment by ChristianKl · 2014-12-13T19:43:32.314Z · LW(p) · GW(p)
Not talking to journalists allows them to represent you however they want, along with the "refused to comment".
In a case like LW there's also enough material online that a journalist can simply quote you if he wants to do so.
The worst mainstream media article in which I'm quoted didn't have the journalist who wrote the article speaking to me.
comment by LoganStrohl (BrienneYudkowsky) · 2014-12-13T17:25:58.896Z · LW(p) · GW(p)
This gave me so many warm fuzzies. <3
comment by Jonathan_Graehl · 2014-12-29T01:44:54.898Z · LW(p) · GW(p)
https://www.youtube.com/watch?v=W5UAOK1bk74 - shoot, Vassar does really wear slightly-too-large suits. I'll assume that he's A/B tested this to give best results?
Replies from: drethelin
comment by Shmi (shminux) · 2014-12-12T22:12:16.171Z · LW(p) · GW(p)
Seems like a very interesting, if unflattering and often uncharitable look at CFAR and MIRI from the inside.
Replies from: RobbBB, Luke_A_Somers↑ comment by Rob Bensinger (RobbBB) · 2014-12-13T21:54:00.909Z · LW(p) · GW(p)
I had a conversation with the Tarletons that went something like:
- RB: "It's extremely non-charitable. And extremely non-uncharitable."
- NT: "YES."
- ET: "Does that make it accurate?"
- RB: "Yes. ... Well, OK, no."
↑ comment by Luke_A_Somers · 2014-12-12T22:40:05.592Z · LW(p) · GW(p)
I am somewhat amused that the author seems to me to emphasize the peculiarities of their communication styles when the author himself has such a downright awkward, disjointed and mashed-together style. Eliezer's faults do not include lack of facility with paragraph structure.
Replies from: gwern, Vulture↑ comment by gwern · 2014-12-12T22:55:09.932Z · LW(p) · GW(p)
when the author has such downright awkward, disjointed and mashed-together style
Some of that may be the fault of my excerpting/editing. There were so many short paragraphs I felt I had to combine a bunch for HTML presentation (little paragraphs may work well in a double-column narrow magazine layout, but in a single column wide layout? not so much), and I tried to cut as much material as possible (otherwise there's no point in excerpting and I should've just left it as a pointer to the PDF).
Replies from: Luke_A_Somers, Manfred↑ comment by Luke_A_Somers · 2014-12-15T13:58:26.575Z · LW(p) · GW(p)
I see. Using ellipses would have helped.
Replies from: gwern↑ comment by gwern · 2014-12-15T16:57:33.699Z · LW(p) · GW(p)
I did.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2014-12-15T20:55:17.527Z · LW(p) · GW(p)
Hmm. So you did. They didn't make anything like full impact since they were not accompanied by paragraph breaks.
Anyway, that mitigates but does not completely solve the author's style issues.
comment by advancedatheist · 2014-12-13T16:30:00.475Z · LW(p) · GW(p)
“You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.”
I don't see how that follows. From studies of the demographics of Occupiers I've seen, the movement has attracted the relatively less successful members of the white upper classes, graduates from elite universities and such who haven't found their places in life yet. Nothing about the whole Occupy fad signals notable intelligence or competence to me.
I could see that this disaffected White Men's March would fold any way given how male psychology works. Gather a bunch of young losers together, and the men with more organized lives will naturally mock them, like the ones at the Chicago Board of Trade who posted signs on their windows, "We Are the 1 Percent." Young men HATE public humiliation by more dominant men.
comment by advancedatheist · 2014-12-14T18:31:57.855Z · LW(p) · GW(p)
"Ethereum" sounds like the title of a movie directed by Ridley Scott or Neill Blomkamp.
Replies from: TylerJay
comment by LizzardWizzard · 2014-12-13T07:46:55.296Z · LW(p) · GW(p)
MetaMed sounds like a kinda promising startup :)
I failed to understand from the text why the accused people and institutions are necessarily evil.
I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens
I wonder if this is because most humans can't find joy in the merely real, praising deities and trusting in other supernatural stuff like signs and horoscopes, so disbelieving and living in reality is abnormal?
Replies from: bramflakes, Sarunas↑ comment by bramflakes · 2014-12-13T11:53:00.684Z · LW(p) · GW(p)
I wonder if this is because most humans can't find joy in the merely real, praising deities and trusting in other supernatural stuff like signs and horoscopes, so disbelieving and living in reality is abnormal?
or more prosaically, because the sequences are written in an idiosyncratic semi-autobiographical style with few citations and often grandiose language, and many people are immediately turned off by that
Replies from: Nornagest, Viliam_Bur↑ comment by Nornagest · 2014-12-16T21:48:25.708Z · LW(p) · GW(p)
I don't think the citations matter much, but the sequences are narrowly optimized -- probably unintentionally -- to reach people with a worldview and cultural background similar to Eliezer or his younger self. Not necessarily libertarians or people with apocalyptic preoccupations, as the survey results should make clear, but definitely people who have at some point wanted to be Kimball Kinnison or a character similar to him.
The grandiose language is one of the ways this manifests itself, but it's not the only one. HPMoR aims a little broader, but not by much.
↑ comment by Viliam_Bur · 2014-12-16T20:47:57.667Z · LW(p) · GW(p)
with few citations
Few citations compared with what? Certainly not an average website.
Replies from: bramflakes↑ comment by bramflakes · 2014-12-16T22:41:13.517Z · LW(p) · GW(p)
The Sequences don't purport to be average.
↑ comment by Sarunas · 2014-12-13T17:41:35.478Z · LW(p) · GW(p)
I wonder if this is because most humans can't find joy in the merely real, praising deities and trusting in other supernatural stuff like signs and horoscopes, so disbelieving and living in reality is abnormal?
Well, there is a group of people who are SlateStarCodex readers but aren't LessWrong readers. Some of them say that they do not find LessWrong topics interesting; the topics are often too speculative for them, and they prefer SSC because they want to discuss things like policy and economics. I think that this explanation is applicable in the general case as well - most LessWrong topics simply aren't that interesting to most people, and they prefer to spend their time on other websites.