Posts

Missing dog reasoning 2020-06-26T21:30:00.491Z
A point of clarification on infohazard terminology 2020-02-02T17:43:56.601Z
Eukryt Wrts Blg 2019-09-28T21:42:11.201Z
Tiddlywiki for organizing notes and research 2019-09-01T18:44:57.742Z
How to make a giant whiteboard for $14 (plus nails) 2019-07-07T19:23:38.870Z
Naked mole-rats: A case study in biological weirdness 2019-05-19T18:40:25.203Z
Spaghetti Towers 2018-12-22T05:29:47.551Z
The funnel of human experience 2018-10-10T02:46:02.240Z
Biodiversity for heretics 2018-05-27T13:37:09.314Z
Global insect declines: Why aren't we all dead yet? 2018-04-01T20:38:58.679Z
Caring less 2018-03-13T22:53:22.288Z
Social media probably not a deathtrap 2017-10-07T03:54:36.211Z
Throw a prediction party with your EA/rationality group 2016-12-31T23:02:11.284Z

Comments

Comment by eukaryote on What are some beautiful, rationalist artworks? · 2020-11-06T20:12:23.668Z · LW · GW
Art by Nicki (Sofhtie on tumblr). Quote is apparently from "Not Another D&D Podcast".
Comment by eukaryote on What are some beautiful, rationalist artworks? · 2020-11-06T17:33:17.620Z · LW · GW

This was the version I had saved on my computer, but we actually have a more complete map now. I love this image both for what it represents:

  • Exploring a new world
  • Alien geology
  • Cool maps
  • Including a sense of process (I don't actually know anything about how this image was put together, but just looking at it, I'm nearly certain we're looking at a map composited from orbits that Cassini took over the course of its passes around the planet - like a scanner!)

And from a purely aesthetic perspective:

  • Really simple, strongly contrasting, powerful colors
  • Clean geometry along with the chaotic and organic
Comment by eukaryote on What are some beautiful, rationalist artworks? · 2020-11-06T17:26:00.801Z · LW · GW
False color radar map of Titan's methane & ethane lakes, ~2007? From footage taken by Cassini. Credit: NASA / JPL-Caltech / Agenzia Spaziale Italiana / USGS
Comment by eukaryote on What are some beautiful, rationalist artworks? · 2020-11-06T17:19:57.185Z · LW · GW
Venus, Earth, Moon, Mars, Titan, I think by this reddit user
Comment by eukaryote on Postmortem to Petrov Day, 2020 · 2020-10-05T05:12:59.585Z · LW · GW

FWIW, I thought the 'Doomsday phishing' attack was absolutely brilliant. Hey! Sometimes people will deceive you about which things will end the world! May we all stay on our toes.

Comment by eukaryote on My Dating Plan ala Geoffrey Miller · 2020-07-23T19:09:39.771Z · LW · GW

Why would a big driver behind LW's appeal be sexism?

I don't think this is currently true of LW myself, but if a space casually has, say, sexist or racist stuff in it, people looking for that can think "oh thank god, a place where I can say what I really think [that is sexist or racist] without political correctness stopping me," and that becomes a selling point for people who want to talk about sexist or racist stuff. I suspect the commenter means something like this.

Comment by eukaryote on My weekly review habit · 2020-06-21T18:55:12.808Z · LW · GW

You might look into bullet journalling - a lot of people find it a pretty helpful and low-mental-effort way to keep to-do lists and record what they do.

Comment by eukaryote on Using a memory palace to memorize a textbook. · 2020-06-19T03:52:10.774Z · LW · GW

This is cool as all hell. How long ago did you do this? If you think of some way to test this, I'd be super curious to learn how much of this you can still remember in a month. I expect it to be pretty decent. I've never just... sat down and tried to do this for a big topic, and I might now.

Comment by eukaryote on Eukryt Wrts Blg · 2020-06-03T22:57:00.696Z · LW · GW

I have a proposal.

Nobody affiliated with LessWrong is allowed to use the word "signalling" for the next six months. 

If you want to write something about signalling, you have to use the word "communication" instead. You can then use other words to clarify what you mean, as long as none of them are "signalling".

I think this will lead to more clarity and a better site culture. Thanks for coming to my talk.

Comment by eukaryote on Eukryt Wrts Blg · 2020-02-05T17:27:16.826Z · LW · GW

I think I agree with mr-hire that this doesn't seem right to me. The site is already public and will turn up when people search your name - or your blog name, in my case - or the idea you're trying to explain.

I don't especially care whether people use their real names or pseudonyms here. If people feel uncomfortable making their work more accessible under their real names, they can use a pseudonym. I suppose there's a perceived difference in professionalism or skin in the game (am I characterizing the motive correctly?), but we're all here for the ideas anyways, right?

Comment by eukaryote on Eukryt Wrts Blg · 2020-02-04T17:28:47.057Z · LW · GW

Yeah, building on more complex ideas - that you really need to read something else to understand - seems like a fine reason to use jargon.

Comment by eukaryote on Eukryt Wrts Blg · 2020-02-03T16:44:31.695Z · LW · GW

In fact, I think that the default should be to not want any given post to be linked, and to spread, far and wide.

Say more?

Comment by eukaryote on Eukryt Wrts Blg · 2020-02-03T06:47:07.271Z · LW · GW

Here's something I believe: You should be trying really hard to write your LessWrong posts in such a way that normal people can read them.

By normal, I mean "people who are not immersed in LessWrong culture or jargon." This is most people. I get that you have to use jargon sometimes. (Technical AI safety people: I do not understand your math, but keep up the good fight.) Or if your post is referring to another post, or is part of a series, then it doesn't have to stand alone. (But maybe the series should stand alone?)

Obviously if you only want your post to be accessible to LWers, ignore this. But do you really want that?

  • If your post provides value to many people on LW, it will probably provide value to people off LW. And making it accessible suddenly means it can be linked and referred to in many other contexts.
  • Your post might be the first time someone new to the site sees particular terms.
  • Even if the jargon is decipherable or the piece doesn't rely on the jargon, it still looks weird, and people don't like reading things where they don't know the words. It signals "this is not for me" and can make them feel dumb for not getting it.
  • (Listen, I was once in a conversation with a real live human being who dropped references to obscure classical literature every third sentence or so. This is the most irritating thing in the universe. Do not be that person.)

On a selfish level,

  • It enables the post to spread beyond the LW memeosphere, potentially bringing you honor and glory.
  • It helps you think and communicate better to translate useful ideas into and out of the original context they appear in.

If you're not going to do this, you can at least: Link jargon to somewhere that explains it.

Thank you for coming to my TED talk.

Comment by eukaryote on A point of clarification on infohazard terminology · 2020-02-02T22:51:14.841Z · LW · GW

What do you think of the change? (I think Bostrom's terms are fine, but it's still useful to have a word for the broad category of "knowing this may hurt you".)

Comment by eukaryote on A point of clarification on infohazard terminology · 2020-02-02T20:52:58.077Z · LW · GW

Update: I have swapped this out. I appreciate your feedback, because the distinction you point to seems like a valuable one, and I don't want to step on a great term. Hopefully this resolves the issue?

Comment by eukaryote on A point of clarification on infohazard terminology · 2020-02-02T20:28:22.395Z · LW · GW

Aw, carp, you're totally right. It had been pointed out to me while I was getting feedback that "memetic hazard" doesn't clearly gesture at the thing, but I hadn't realized there was already a coherent and reasonable definition of "memetic hazard" that's the thing it sounds like it should mean.

I do actually have one more term up my sleeve, which is "cognitohazard", which comes about the same way and more clearly indicates the danger. (Which is from thinking / "cognitizing" (?) about it.)

I'm trying to think of a way to switch this out now that doesn't cause people to get confused or think that the [infohazard vs. knowledge that harms the knower] distinction doesn't matter. Hmmm. Let me think if I should just edit these posts now.

Comment by eukaryote on Whipped Cream vs Fancy Butter · 2020-01-21T03:42:25.622Z · LW · GW

I love this take. You're out here living in 3020. Also, I never get to use my eggbeater these days, so I'm excited to try this.

Comment by eukaryote on 100 Ways To Live Better · 2020-01-20T19:31:31.604Z · LW · GW

As a result of this, I put a post on Nextdoor offering to walk people's dogs for free. I'm hoping someone takes me up on it. Thanks for the brilliant suggestion!

Comment by eukaryote on The funnel of human experience · 2020-01-10T02:47:52.291Z · LW · GW

Quick authorial review: This post has brought me the greatest joy from other sources referring to it, including Marginal Revolution (https://marginalrevolution.com/marginalrevolution/2018/10/funnel-human-experience.html) and the New York Times bestseller "The Uninhabitable Earth". I was kind of hoping to supply a fact about the world that people could use in many different lights, and they have (see those and also like https://unherd.com/2018/10/why-are-woke-liberals-such-enemies-of-the-past/ )

An unintentional takeaway from this attention is solidifying my belief that if you're describing a new specific concept, you should make up a name too. For most purposes, this is for reasons like the ones described by Malcolm Ocean here (https://malcolmocean.com/2016/02/sparkly-pink-purple-ball-thing/). But also, sometimes, a New York Times bestseller will cite you, and you'll only find out as you set up Google alerts.

(And then once you make a unique name, set up google alerts for it. The book just cites "eukaryote" rather than my name, and this post rather than the one on my blog. Which I guess goes to show you that you can put anything in a book.)

Anyways, I'm actually a little embarrassed because my data on human populations isn't super accurate - it starts at the year 50,000 BCE, even though there were humans well before that. But those populations were small, probably not enough to significantly influence the result. I'm not a historian, and really don't want to invest the effort needed for more accurate numbers, although if someone would like to, please go ahead.

But it also shows that people are interested in quantification. I've written a lot of posts that are me trying to find a set of numbers, and making lots and lots of assumptions along the way. But then you have some plausible numbers. It turns out that you can just do this, and don't need a qualification in Counting Animals or whatever, just supply your reasoning and attach the appropriate caveats. There are no experts, but you can become the first one.

As an aside, in the intervening years, I've become more interested in the everyday life of the past - of all of the earlier chunks that made up so much of the funnel. I read an early 1800's housekeeping book, "The Frugal Housewife", which advises mothers to teach their children how to knit starting at age 4, and to keep all members of the family knitting in their downtime. And it's horrifying, but maybe that's what you have to do to keep your family warm in the northeast US winter. No downtime that isn't productive. I've taken up knitting lately and enjoy it, but at the same time, I love that it's a hobby and not a requirement. A lot of human experience must have been at the razor's edge of survival, Darwin's hounds nipping at our heels. I prefer 2020.

If you want a slight taste of everyday life at the midpoint of human experience, you might be interested in the Society for Creative Anachronism. It features swordfighting and court pageantry but also just a lot of everyday crafts - sewing, knitting, brewing, cooking. If you want to learn about medieval soapmaking or forging, they will help you find out.

Comment by eukaryote on Spaghetti Towers · 2020-01-10T02:15:10.078Z · LW · GW

A brief authorial take - I think this post has aged well, although as with Caring Less (https://www.lesswrong.com/posts/dPLSxceMtnQN2mCxL/caring-less), this was an abstract piece and I didn't make any particular claims here.

I'm so glad that A) this was popular B) I wasn't making up a new word for a concept that most people already know by a different name, which I think will send you to at least the first layer of Discourse Hell on its own.

I've met at least one person in the community who said they knew and thought about this post a lot, well before they'd met me, which was cool.

I think this website doesn't recognize the value of bad hand-drawn graphics for communicating abstract concepts (except for Garrabrant and assorted other AI safety people, whose posts are too technical for me to read but who I support wholly.) I'm guessing that the graphics helped this piece, or at least got more people to look at it.

I do wish I'd included more examples of spaghetti towers, but I knew that before posting it, and this was an instance of "getting something out is better than making it perfect."

I've planned on doing followups in the same sort of abstract style as this piece, like methods I've run into for getting around spaghetti towers. (Modularization, swailing, documentation.) I hopefully will do that some day. If anyone wants to help brainstorm examples, hit me up and I may or may not get back to you.

Comment by eukaryote on Caring less · 2020-01-10T01:46:17.935Z · LW · GW

Hi, I'm pleased to see that this has been nominated and has made a lasting impact.

Do I have any updates? I think it aged well. I'm not making any particular specific claims here, but I still endorse this and think it's an important concept.

I've done very little further thinking on this. I was quietly hoping that others might pick up the mantle and write more on strategies for caring less, as well as cases where this should be argued. I haven't seen this, but I'd love to see more of it.

I've referred to it myself when talking about values that I think people are over-invested in (see https://eukaryotewritesblog.com/2018/05/27/biodiversity-for-heretics/), but not extensively.

Finally, while I'm generally pleased with this post's reception, I think nobody appreciated my "why couldn't we care less" joke enough.

Comment by eukaryote on Do you get value out of contentless comments? · 2019-12-01T07:10:30.680Z · LW · GW

Yeah! I like getting positive feedback on my work, especially in a rather intimidating forum like here. Anything more specific than "good post" or whatever is better, but even that is more emotionally rewarding than seeing digits in the vote box change.

Comment by eukaryote on Eukryt Wrts Blg · 2019-09-28T21:42:11.381Z · LW · GW

I don't like taking complicated variable-probability-based bets. I like "bet you a dollar" or "bet you a drink". I don't like "I'll sell you a $20 bid at 70% odds" or whatever. This is because:

A) I don't really understand the betting payoffs. I do think I have a good internal sense of probabilities, and am well-calibrated. That said, the payoffs are often confusing, and I don't have an internal sense linking "I get 35 dollars if you're right and you give me 10 dollars if I'm not" or whatever, to those probabilities. It seems like a sensible policy that if you're not sure how the structure of a bet works, you shouldn't take it. (Especially if someone else is proposing it.)
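(For anyone who shares this confusion, the payoff-to-probability conversion can be sketched in a few lines. The $35/$10 figures below are the hypothetical ones from the example above, not a real bet anyone proposed.)

```python
def breakeven_probability(win_amount, lose_amount):
    """Probability at which a bet with these payoffs has zero expected value.

    You receive win_amount if you turn out to be right and pay lose_amount
    if you turn out to be wrong. The bet is favorable on average only if
    your credence in being right exceeds this threshold.
    """
    return lose_amount / (win_amount + lose_amount)

# The hypothetical bet above: win $35 if right, pay $10 if wrong.
p = breakeven_probability(35, 10)
print(round(p, 3))  # ~0.222: the bet pays off on average if you're >22% sure
```

Which of course does nothing to address the points about differing marginal value of money or risk aversion - it only covers the "what probability does this payoff structure imply" part.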

B) It's obfuscating the fact that different people value money differently. I'm poorer than most software engineers. Obviously two people are likely to be affected differently by a straightforward $5 bet, but the point of betting is kind of to tie your belief to palpable rewards, and varying amounts of money muddy the waters more.

(Some people do bets like this where you are betting on really small amounts, like 70 cents to another person's 30 cents or whatever. This seems silly to me because the whole point of betting with money is to be trading real value, and the value of the time you spend figuring this out is already not worth collecting on.)

C) Also, I'm kind of risk averse and like bets where I'm surer about the outcome and what's going on. This is especially defensible if you're less financially sound than your betting partner and it's not just enough to come out ahead statistically, you need to come out ahead in real life.

This doesn't seem entirely virtuous, but these are my reasons and I think they're reasonable. If I ever get into prediction markets or stock trading, I'll probably have to learn the skills here, but for now, I'll take simple monetary bets but not weird ones.

Comment by eukaryote on Tiddlywiki for organizing notes and research · 2019-09-21T02:56:58.796Z · LW · GW

Sure. It's not much right now.

I put each quote and source combo on their own tiddler, then tag it with a bunch of stuff that might help me find it later. I'll probably refine the system as I start referring back to it more.

Comment by eukaryote on How much background technical knowledge do LW readers have? · 2019-07-12T02:19:33.946Z · LW · GW

Wait, do people usually use the phrase "technical knowledge" to mean just math and programming? My understanding is that you can have technical knowledge in any science or tool.

Comment by eukaryote on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-09T02:59:42.447Z · LW · GW

FWIW, "Alice is systematically wrong [and/or poorly justified] [about X thing]" came to mind as a thing that I think would make me sit up and take note, while having the right implications.

Comment by eukaryote on Raemon's Shortform · 2019-07-08T17:50:15.494Z · LW · GW

I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing.

Wait, why do you think these have to be done in order?

Comment by eukaryote on FB/Discord Style Reacts · 2019-06-04T02:15:07.650Z · LW · GW

This gif:

"Whoa there, friend, you might need to slow down"

(See also: "This is a reach", "you need to explain this more", "I don't understand why you said this", etc)

Comment by eukaryote on Naked mole-rats: A case study in biological weirdness · 2019-05-23T15:03:49.228Z · LW · GW

Oh, huh - I thought the Damaraland mole-rats were basically sister species of the naked mole-rats, the two most closely-related species, and so didn't consider them much. But it looks like that isn't true - they're not even the same genus. Maybe they evolved eusociality independently? Going to have to look into this, thanks!

Comment by eukaryote on If you wrote a letter to your future self every day, what would you put in it? · 2019-04-08T06:15:40.539Z · LW · GW

I don't think I'd put anything in it. I sort of expect all those thousands of cooperative like-minded strangers to have a better sense of their current situation than I do, and not to read emails that serve no communication purpose and that they know the contents of already.

I'm writing this with "the tired energy of a long flight" rather than fervent munchkinry, but hey, someone's gotta point out the null hypothesis.

Comment by eukaryote on The funnel of human experience · 2018-10-11T17:56:51.965Z · LW · GW

I haven't looked into this, but based on trends in meat consumption (richer people eat more meat), the growing human population, and factory farming as an efficiency improvement over traditional animal agriculture, I'm going to guess "most".

Comment by eukaryote on The funnel of human experience · 2018-10-10T18:58:06.779Z · LW · GW

You asked if he had a doctorate, and he does have a doctorate. This seems like evidence that people doing groundbreaking scientific work (at least in relatively recent times) have doctorates.

Comment by eukaryote on The funnel of human experience · 2018-10-10T18:56:53.066Z · LW · GW
Certainly, women can pursue knowledge. Or can they? Can men? Can anyone?

I don't know what you mean by this and suspect it's beyond the scope of this piece.

It seems fairly clear to me that on average, the “scientist” of today does far less of anything that can (without diluting the word into unrecognizability) be called “science”. It may very well be much less.

Seems possible. I don't know what the day-to-day process of past scientists was like. I wonder if something like improvements to statistics, the scientific method, etc., means that modern scientists learn more per unit of "time spent doing science" than in the past - I don't know. This may also be outweighed by how many more scientists there are now than there were then.

The last point about how PhDs don’t necessarily do scientific thought makes sense. Shall I say “formal scientific thought” instead? We’re on LessWrong and may as well hold “real scientific thought” to a high standard, but if you want to conclude from this “we have most of all the people who are supposed to be scientists with us now and they’re not doing anything”, well, there’s something real to that too.

What I meant by this is that perhaps the thing I'm more directly grasping at here is "amount of time people have spent trying to do science", with much less certainty around "how much science gets done." If people are spending much more time trying to do science now than they ever have in the past, and less is getting done (I'm not sure if I buy this), that's a problem, or maybe just indicative of something.

Once again, consider the case of my mother: she’s a teacher, an administrator, a curriculum designer, etc. My mother is not doing scientific thought. She’s not trying to do scientific thought.

Sure. I suppose I'm using PhDs as something of a proxy here, for "people who have spent a long time pushing on the edges of a scientific field". Think of STEM PhDs alone if you prefer. (Though note that someone in your mother's field could be doing science - if you say she's not, I believe you, but limiting it to just classic STEM is also only a proxy.)

Comment by eukaryote on The funnel of human experience · 2018-10-10T16:52:35.781Z · LW · GW

Do you mean why did I think this analysis was worth doing at all, or something else?

Comment by eukaryote on The funnel of human experience · 2018-10-10T16:46:23.503Z · LW · GW

Yeah, let me unpack this a little more. Over half of PhDs are in STEM fields - 58% in 1974, and 75% in 2014, providing weak evidence that this is becoming more true over time.

Dmitri Mendeleev had a doctorate. The other two did not. I see the point you're getting at - that scientific thought is not limited to PhDs, and is older than them as an institution - but surely it also makes sense that civilization is wealthier and has more capacity than ever for people to spend their lives pursuing knowledge, and that the opportunity to do so is available to more people (women, for instance.) That's why 90% is reasonable to me even if PhDs are a poor proxy.

The last point about how PhDs don't necessarily do scientific thought makes sense. Shall I say "formal scientific thought" instead? We're on LessWrong and may as well hold "real scientific thought" to a high standard, but if you want to conclude from this "we have most of all the people who are supposed to be scientists with us now and they're not doing anything", well, there's something real to that too.

Comment by eukaryote on The funnel of human experience · 2018-10-10T04:46:45.384Z · LW · GW

You are super right and that is exactly what happened - I checked the numbers and had made the order of magnitude three times larger. Thanks for the sanity checks and catch. It turns out this moves the midpoint up to 1432. Lemme fix the other numbers as well.

Update: Actually, it did nothing to the midpoint, which makes sense in retrospect (maybe?) but does change the "fraction of time" thing, as well as some of the Fermi estimates in the middle.
15% of experience has actually been experienced by living people, and 28% since Kane Tanaka's birth. I've updated this here and on my blog.
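(For the curious, the midpoint calculation itself is simple once you have population estimates: accumulate person-years per era and find where the running total crosses half the grand total. The bucket numbers below are made up purely for illustration; the post uses real historical population estimates.)

```python
def experience_midpoint(buckets):
    """Return the year by which half of all person-years had been lived.

    buckets: list of (year, person_years) pairs in chronological order,
    where person_years is the total human experience accrued in that era.
    """
    total = sum(person_years for _, person_years in buckets)
    cumulative = 0
    for year, person_years in buckets:
        cumulative += person_years
        if cumulative >= total / 2:
            return year

# Illustrative (invented) data: later eras hold far more person-years.
buckets = [(-50000, 1e12), (0, 2e12), (1500, 3e12), (2000, 4e12)]
print(experience_midpoint(buckets))  # 1500 for this made-up data
```

This also makes the error mode above intuitive: scaling every bucket by the same factor leaves the midpoint unchanged, but changes any "fraction of total experience" figures.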

Comment by eukaryote on Open Thread October 2018 · 2018-10-03T19:12:22.044Z · LW · GW

I'm interested in collecting information on alternative platforms to facebook (that seem to offer some benefit).

E.g.:

Mastodon

Diaspora

If you know of others, especially though not necessarily with strong reasons for using them preferentially, I'd appreciate knowing!

Comment by eukaryote on How to Build a Lumenator · 2018-10-03T04:01:12.981Z · LW · GW

Ah, okay. It looks like your lumenator is hung from normal hooks on the ceiling. But if you wanted to use command hooks like you describe, you'd have to put it on the wall, correct?

Comment by eukaryote on How to Build a Lumenator · 2018-09-26T05:34:47.212Z · LW · GW

pssst

Comment by eukaryote on Open Thread September 2018 · 2018-09-25T16:13:40.165Z · LW · GW

How do people organize their long ongoing research projects (academic or otherwise)? I do a lot of these but think I would benefit from more of a system than I have right now.

Comment by eukaryote on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-13T15:48:01.175Z · LW · GW

I would also like to know the answers to these. I know that "injecting Slack" is a reference to Zvi's conception of Slack.

Comment by eukaryote on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-09T23:19:40.388Z · LW · GW

Interesting and elegant model!

I'm having trouble parsing what the Cloud of Doom is. It sounds similar to a wicked problem. Wicked problems come with the issue that there's no clear best solution, which perhaps is true of Clouds of Doom as well. On the other hand, you make two claims about wicked problems:

  • Every organization doing real work has them
  • There's one way to solve them, by adding lots of slack

I'm not sure where those are coming from, or what those imply. Examples or explanations would help.

Another thought: after the creation of vaccines, smallpox was arguably a "bug". There's a clear problem (people infected with a specific organism) and a clear solution (vaccinate a bunch of people and then check if it's gone). It still took a long time and lots of effort. Perhaps I'm drawing the analogy farther than you meant it to imply. (Or perhaps "a bunch of people" is doing the heavy lifting here and in fact counts as many little problems.)

Comment by eukaryote on How to Build a Lumenator · 2018-08-12T07:16:27.393Z · LW · GW

This is a good post, props for writing up a practical thing that people can refer to! This is potentially really useful information for people outside the community as well - lots of people struggle with SAD.

Two small changes I'd want to see before I show this to friends outside the community:

  • Take out the word "rationalist" in the first sentence. This sounds like a small nitpick but I think it's huge - It's early and prominent enough that it would likely turn off a casual reader who wasn't already aware or fond of the community. (And the person being a rationalist isn't relevant to the advice.) Replace it with "friend", perhaps.
  • Add a picture, even just a crappy cell phone photo. How do you get the hooks to hang a cord from the ceiling?
Comment by eukaryote on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-27T17:58:13.232Z · LW · GW
If many info-hazards have already been openly published, the world may be considered saturated with info-hazards, as a malevolent agent already has access to so much dangerous information. In our world, where genomes of the pandemic flus have been openly published, it is difficult to make the situation worse.

I strongly disagree that we're in a world of accessible easy catastrophic information right now.

This is based on a lot of background knowledge, but as a good start, Sonia Ben Ouagrham-Gormley makes a strong case that bioweapons groups historically have had very difficult times creating usable weapons even when they already have a viable pathogen. Having a flu genome online doesn't solve any of the other problems of weapons creation. While biotechnology has certainly progressed since major historic programs, and more info and procedures of various kinds are online, I still don't see the case for lots of highly destructive technology being easily available.

If you do not believe that we're at that future of plenty of calamitous information easily available online, but believe we could conceivably get there, then the proposed strategy of openly discussing GCR-related infohazards is extremely dangerous, because it pushes us there even faster.

If the reader thinks we're probably already there, I'd ask how confident they are. Getting it wrong carries a very high cost, and it's not clear to me that having lots of infohazards publicly available is the correct response, even for moderately high certainty that we're in "lots of GCR instruction manuals online" world. (For starters, publication has a circuitous path to positive impact at best. You have to get them to the right eyes.)

Other thoughts:

The steps for checking a possibly-dangerous idea before you put it online, including running it by multiple wise knowledgeable people and trying to see if it's been discovered already, and doing analysis in a way that won't get enormous publicity, seem like good heuristics for potentially risky ideas. Although if you think you've found something profoundly dangerous, you probably don't even want to type it into Google.

Re: dangerous-but-simple ideas being easy to find: It seems that for some reason or other, bioterrorism and bioweapons programs are very rare these days. This suggests to me that there could be a major risk in the form of inadvertently convincing non-bio malicious actors to switch to bio - by perhaps suggesting a new idea that fulfils their goals or is within their means. We as humans are in a bad place to competently judge whether ideas that are obvious to us are also obvious to everybody else. So while inferential distance is a real and important thing, I'd suggest against being blindly incautious with "obvious" ideas.

(Anyways, this isn't to say such things shouldn't be researched or addressed, but there's a vast difference between "turn off your computer and never speak of this again" and "post widely in public forums; scream from the rooftops", and many useful actions between the two.)

(Please note that all of this is my own opinion and doesn't reflect that of my employer or sponsors.)

Comment by eukaryote on Ben Hoffman's donor recommendations · 2018-06-23T14:31:36.242Z · LW · GW
The actual causal factors behind allocation decisions by GiveWell and OpenPhil continue to be opaque to outsiders, [...]

You mean something other than the cost-effectiveness process and analysis from their website?

Comment by eukaryote on Biodiversity for heretics · 2018-05-28T00:28:10.412Z · LW · GW

Thanks! Honestly, I'm completely fine filling in whatever content people might expect when looking for "controversial biodiversity opinions on LessWrong" with controversial opinions on actual environmental biodiversity.

Comment by eukaryote on April Fools: Announcing: Karma 2.0 · 2018-04-01T20:41:01.855Z · LW · GW

A fluid serif/sans-serif font, where the serifs get progressively bigger the more formal your comment is.

Comment by eukaryote on Notes From an Apocalypse · 2017-09-23T17:51:45.962Z · LW · GW

This was a fantastic read! (In the interests of letting other people have more trust, I did some research on the Cambrian Explosion a bit ago for a project, and the author here accurately represents everything as far as I know. This is a really eloquent explanation of both what we think happened at the time, and why pulling data out of the fossil record is so damn hard and creates so much uncertainty. I don't know much about Hox genes, but it seems totally plausible.)

Comment by eukaryote on Fish oil and the self-critical brain loop · 2017-09-17T07:52:03.899Z · LW · GW

My impression is that algae oil is more similar to fish oil than flax, if you decide to experiment - it's where fish get their omega-3 from.

Comment by eukaryote on 2017 LessWrong Survey · 2017-09-17T07:46:04.680Z · LW · GW

I have taken the survey, please shower me in karma.