Eukryt Wrts Blg 2019-09-28T21:42:11.201Z · score: 5 (1 votes)
Tiddlywiki for organizing notes and research 2019-09-01T18:44:57.742Z · score: 32 (13 votes)
How to make a giant whiteboard for $14 (plus nails) 2019-07-07T19:23:38.870Z · score: 29 (8 votes)
Naked mole-rats: A case study in biological weirdness 2019-05-19T18:40:25.203Z · score: 95 (38 votes)
Spaghetti Towers 2018-12-22T05:29:47.551Z · score: 92 (41 votes)
The funnel of human experience 2018-10-10T02:46:02.240Z · score: 79 (34 votes)
Biodiversity for heretics 2018-05-27T13:37:09.314Z · score: 41 (9 votes)
Global insect declines: Why aren't we all dead yet? 2018-04-01T20:38:58.679Z · score: 72 (22 votes)
Caring less 2018-03-13T22:53:22.288Z · score: 109 (37 votes)
Social media probably not a deathtrap 2017-10-07T03:54:36.211Z · score: 19 (8 votes)
Throw a prediction party with your EA/rationality group 2016-12-31T23:02:11.284Z · score: 8 (9 votes)


Comment by eukaryote on Eukryt Wrts Blg · 2019-09-28T21:42:11.381Z · score: 8 (3 votes) · LW · GW

I don't like taking complicated variable-probability-based bets. I like "bet you a dollar" or "bet you a drink". I don't like "I'll sell you a $20 bid at 70% odds" or whatever. This is because:

A) I don't really understand the betting payoffs. I do think I have a good internal sense of probabilities, and am well-calibrated. That said, the payoffs are often confusing, and I don't have an internal sense linking "I get 35 dollars if I'm right and give you 10 dollars if I'm not," or whatever, to those probabilities. It seems like a sensible policy that if you're not sure how the structure of a bet works, you shouldn't take it. (Especially if someone else is proposing it.)

B) It's obfuscating the fact that different people value money differently. I'm poorer than most software engineers. Obviously two people are likely to be affected differently by a straightforward $5 bet, but the point of betting is kind of to tie your belief to palpable rewards, and varying amounts of money muddy the waters more.

(Some people do bets like this with really small amounts, like 70 cents against another person's 30 cents. This seems silly to me because the whole point of betting with money is to trade real value, and at those stakes the winnings aren't worth the time spent figuring out the bet.)

C) Also, I'm kind of risk averse and like bets where I'm surer about the outcome and what's going on. This is especially defensible if you're less financially sound than your betting partner: it's not enough to come out ahead statistically, you need to come out ahead in real life.

This doesn't seem entirely virtuous, but these are my reasons and I think they're reasonable. If I ever get into prediction markets or stock trading, I'll probably have to learn the skills here, but for now, I'll take simple monetary bets but not weird ones.

Comment by eukaryote on Tiddlywiki for organizing notes and research · 2019-09-21T02:56:58.796Z · score: 1 (1 votes) · LW · GW

Sure. It's not much right now.

I put each quote and source combo on their own tiddler, then tag it with a bunch of stuff that might help me find it later. I'll probably refine the system as I start referring back to it more.

Comment by eukaryote on How much background technical knowledge do LW readers have? · 2019-07-12T02:19:33.946Z · score: 7 (4 votes) · LW · GW

Wait, do people usually use the phrase "technical knowledge" to mean just math and programming? My understanding was that you could have technical knowledge in any science or tool.

Comment by eukaryote on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-09T02:59:42.447Z · score: 3 (2 votes) · LW · GW

FWIW, "Alice is systematically wrong [and/or poorly justified] [about X thing]" came to mind as a thing that I think would make me sit up and take note, while having the right implications.

Comment by eukaryote on Raemon's Scratchpad · 2019-07-08T17:50:15.494Z · score: 4 (4 votes) · LW · GW

I'm basically never talking about the third thing when I talk about morality or anything like that, because I don't think we've done a decent job at the first thing.

Wait, why do you think these have to be done in order?

Comment by eukaryote on FB/Discord Style Reacts · 2019-06-04T02:15:07.650Z · score: 7 (2 votes) · LW · GW

This gif:

"Whoa there, friend, you might need to slow down"

(See also: "This is a reach", "you need to explain this more", "I don't understand why you said this", etc)

Comment by eukaryote on Naked mole-rats: A case study in biological weirdness · 2019-05-23T15:03:49.228Z · score: 7 (6 votes) · LW · GW

Oh, huh - I thought the Damaraland mole-rat was basically a sister species of the naked mole-rat - the two most closely related species - and so didn't consider it much. But it looks like that isn't true: they're not even in the same genus. Maybe they evolved eusociality independently? Going to have to look into this, thanks!

Comment by eukaryote on If you wrote a letter to your future self every day, what would you put in it? · 2019-04-08T06:15:40.539Z · score: 6 (4 votes) · LW · GW

I don't think I'd put anything in it. I sort of expect all those thousands of cooperative like-minded strangers to have a better sense of their current situation than I do, and not to read emails that serve no communication purpose and that they know the contents of already.

I'm writing this with "the tired energy of a long flight" rather than fervent munchkinry, but hey, someone's gotta point out the null hypothesis.

Comment by eukaryote on The funnel of human experience · 2018-10-11T17:56:51.965Z · score: 4 (3 votes) · LW · GW

I haven't looked into this, but based on trends in meat consumption (richer people eat more meat), the growing human population, and factory farming as an efficiency improvement over traditional animal agriculture, I'm going to guess "most".

Comment by eukaryote on The funnel of human experience · 2018-10-10T18:58:06.779Z · score: 11 (5 votes) · LW · GW

You asked if he had a doctorate, and he does have a doctorate. This seems like evidence that people doing groundbreaking scientific work (at least in relatively recent times) have doctorates.

Comment by eukaryote on The funnel of human experience · 2018-10-10T18:56:53.066Z · score: 3 (2 votes) · LW · GW
Certainly, women can pursue knowledge. Or can they? Can men? Can anyone?

I don't know what you mean by this and suspect it's beyond the scope of this piece.

It seems fairly clear to me that on average, the “scientist” of today does far less of anything that can (without diluting the word into unrecognizability) be called “science”. It may very well be much less.

Seems possible. I don't know what the day-to-day process of past scientists was like. I wonder if something like improvements to statistics, the scientific method, etc., means that modern scientists get more learned per "time spent science" than in the past - I don't know. This may also be outweighed by how many more scientists now than there were then.

The last point about how PhDs don’t necessarily do scientific thought makes sense. Shall I say “formal scientific thought” instead? We’re on LessWrong and may as well hold “real scientific thought” to a high standard, but if you want to conclude from this “we have most of all the people who are supposed to be scientists with us now and they’re not doing anything”, well, there’s something real to that too.

What I meant by this is that perhaps the thing I'm more directly grasping at here is "amount of time people have spent trying to do science", with much less certainty around "how much science gets done." If people are spending much more time trying to do science now than they ever have in the past, and less is getting done (I'm not sure if I buy this), that's a problem, or maybe just indicative of something.

Once again, consider the case of my mother: she’s a teacher, an administrator, a curriculum designer, etc. My mother is not doing scientific thought. She’s not trying to do scientific thought.

Sure. I suppose I'm using PhDs as something of a proxy here, for "people who have spent a long time pushing on the edges of a scientific field". Think of STEM PhDs alone if you prefer. (Though note that someone in your mother's field could be doing science - if you say she's not, I believe you, but limiting it to just classic STEM is also only a proxy.)

Comment by eukaryote on The funnel of human experience · 2018-10-10T16:52:35.781Z · score: 9 (2 votes) · LW · GW

Do you mean why did I think this analysis was worth doing at all, or something else?

Comment by eukaryote on The funnel of human experience · 2018-10-10T16:46:23.503Z · score: 6 (7 votes) · LW · GW

Yeah, let me unpack this a little more. Over half of PhDs are in STEM fields - 58% in 1974, and 75% in 2014, providing weak evidence that this is becoming more true over time.

Dmitri Mendeleev had a doctorate. The other two did not. I see the point you're getting at - that scientific thought is not limited to PhDs, and is older than them as an institution - but surely it also makes sense that civilization is wealthier and has more capacity than ever for people to spend their lives pursuing knowledge, and that the opportunity to do so is available to more people (women, for instance.) That's why 90% is reasonable to me even if PhDs are a poor proxy.

The last point about how PhDs don't necessarily do scientific thought makes sense. Shall I say "formal scientific thought" instead? We're on LessWrong and may as well hold "real scientific thought" to a high standard, but if you want to conclude from this "we have most of all the people who are supposed to be scientists with us now and they're not doing anything", well, there's something real to that too.

Comment by eukaryote on The funnel of human experience · 2018-10-10T04:46:45.384Z · score: 20 (6 votes) · LW · GW

You are super right and that is exactly what happened - I checked the numbers and had made the order of magnitude three times larger. Thanks for the sanity checks and catch. It turns out this moves the midpoint up to 1432. Lemme fix the other numbers as well.

Update: Actually, it did nothing to the midpoint, which makes sense in retrospect (maybe?) but does change the "fraction of time" thing, as well as some of the Fermi estimates in the middle.
15% of experience has actually been experienced by living people, and 28% since Kane Tanaka's birth. I've updated this here and on my blog.
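A quick way to see why rescaling the totals couldn't move the midpoint - a toy model with made-up numbers, not the actual population data: the midpoint year is the median of experience-over-time, so multiplying every year's experience by a constant factor leaves it unchanged.

```python
# Toy illustration (synthetic numbers, not the post's data): the midpoint
# year is where cumulative experience first reaches half the total, so a
# uniform scaling of every year's total can't move it.

def midpoint_year(per_year):
    """per_year: list of (year, person_years) in chronological order."""
    total = sum(py for _, py in per_year)
    running = 0.0
    for year, py in per_year:
        running += py
        if running >= total / 2:
            return year

# Crude exponential-growth stand-in for population history.
history = [(year, 1.001 ** (year + 8000)) for year in range(-8000, 2019)]
scaled = [(year, 3 * py) for year, py in history]

# Tripling every year's experience leaves the midpoint year unchanged.
assert midpoint_year(history) == midpoint_year(scaled)
```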

Comment by eukaryote on Open Thread October 2018 · 2018-10-03T19:12:22.044Z · score: 4 (3 votes) · LW · GW

I'm interested in collecting information on alternative platforms to facebook (that seem to offer some benefit).
If you know of others, especially though not necessarily with strong reasons for using them preferentially, I'd appreciate knowing!

Comment by eukaryote on How to Build a Lumenator · 2018-10-03T04:01:12.981Z · score: 1 (1 votes) · LW · GW

Ah, okay. It looks like your lumenator is hung from normal hooks on the ceiling. But if you wanted to use command hooks like you describe, you'd have to put it on the wall, correct?

Comment by eukaryote on How to Build a Lumenator · 2018-09-26T05:34:47.212Z · score: 3 (2 votes) · LW · GW


Comment by eukaryote on Open Thread September 2018 · 2018-09-25T16:13:40.165Z · score: 2 (2 votes) · LW · GW

How do people organize their long ongoing research projects (academic or otherwise)? I do a lot of these but think I would benefit from more of a system than I have right now.

Comment by eukaryote on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-13T15:48:01.175Z · score: 10 (3 votes) · LW · GW

I would also like to know the answers to these. I know that "injecting Slack" is a reference to Zvi's conception of Slack.

Comment by eukaryote on An Ontology of Systemic Failures: Dragons, Bullshit Mountain, and the Cloud of Doom · 2018-09-09T23:19:40.388Z · score: 11 (3 votes) · LW · GW

Interesting and elegant model!

I'm having trouble parsing what the Cloud of Doom is. It sounds similar to a wicked problem. Wicked problems come with the issue that there's no clear best solution, which perhaps is true of Clouds of Doom as well. On the other hand, you make two claims about wicked problems:

  • Every organization doing real work has them
  • There's one way to solve them, by adding lots of slack

I'm not sure where those are coming from, or what those imply. Examples or explanations would help.

Another thought: after the creation of vaccines, smallpox was arguably a "bug". There's a clear problem (people infected with a specific organism) and a clear solution (vaccinate a bunch of people and then check if it's gone). It still took a long time and lots of effort. Perhaps I'm drawing the analogy farther than you meant it to imply. (Or perhaps "a bunch of people" is doing the heavy lifting here and in fact counts as many little problems.)

Comment by eukaryote on How to Build a Lumenator · 2018-08-12T07:16:27.393Z · score: 10 (7 votes) · LW · GW

This is a good post, props for writing up a practical thing that people can refer to! This is potentially really useful information for people outside the community as well - lots of people struggle with SAD.

Two small changes I'd want to see before I show this to friends outside the community:

  • Take out the word "rationalist" in the first sentence. This sounds like a small nitpick but I think it's huge - it's early and prominent enough that it would likely turn off a casual reader who wasn't already aware of or fond of the community. (And the person being a rationalist isn't relevant to the advice.) Replace it with "friend", perhaps.
  • Add a picture, even just a crappy cell phone photo. How do you get the hooks to hang a cord from the ceiling?
Comment by eukaryote on Informational hazards and the cost-effectiveness of open discussion of catastrophic risks · 2018-06-27T17:58:13.232Z · score: 18 (3 votes) · LW · GW
If many info-hazards have already been openly published, the world may be considered saturated with info-hazards, as a malevolent agent already has access to so much dangerous information. In our world, where genomes of the pandemic flus have been openly published, it is difficult to make the situation worse.

I strongly disagree that we're in a world of accessible easy catastrophic information right now.

This is based on a lot of background knowledge, but as a good start, Sonia Ben Ouagrham-Gormley makes a strong case that bioweapons programs have historically had a very difficult time creating usable weapons even when they already had a viable pathogen. Having a flu genome online doesn't solve any of the other problems of weapons creation. While biotechnology has certainly progressed since the major historical programs, and more information and procedures of various kinds are online, I still don't see the case for lots of highly destructive technology being easily available.

If you do not believe that we're at that future of plenty of calamitous information easily available online, but believe we could conceivably get there, then the proposed strategy of openly discussing GCR-related infohazards is extremely dangerous, because it pushes us there even faster.

If the reader thinks we're probably already there, I'd ask how confident they are. Getting it wrong carries a very high cost, and it's not clear to me that having lots of infohazards publicly available is the correct response, even for moderately high certainty that we're in "lots of GCR instruction manuals online" world. (For starters, publication has a circuitous path to positive impact at best. You have to get them to the right eyes.)

Other thoughts:

The steps for checking a possibly-dangerous idea before you put it online, including running it by multiple wise knowledgeable people and trying to see if it's been discovered already, and doing analysis in a way that won't get enormous publicity, seem like good heuristics for potentially risky ideas. Although if you think you've found something profoundly dangerous, you probably don't even want to type it into Google.

Re: dangerous-but-simple ideas being easy to find: It seems that for some reason or other, bioterrorism and bioweapons programs are very rare these days. This suggests to me that there could be a major risk in the form of inadvertently convincing non-bio malicious actors to switch to bio - by perhaps suggesting a new idea that fulfils their goals or is within their means. We as humans are in a bad place to competently judge whether ideas that are obvious to us are also obvious to everybody else. So while inferential distance is a real and important thing, I'd suggest against being blindly incautious with "obvious" ideas.

(Anyways, this isn't to say such things shouldn't be researched or addressed, but there's a vast difference between "turn off your computer and never speak of this again" and "post widely in public forums; scream from the rooftops", and many useful actions between the two.)

(Please note that all of this is my own opinion and doesn't reflect that of my employer or sponsors.)

Comment by eukaryote on Ben Hoffman's donor recommendations · 2018-06-23T14:31:36.242Z · score: 18 (5 votes) · LW · GW
The actual causal factors behind allocation decisions by GiveWell and OpenPhil continue to be opaque to outsiders, [...]

You mean something other than the cost-effectiveness process and analysis from their website?

Comment by eukaryote on Biodiversity for heretics · 2018-05-28T00:28:10.412Z · score: 46 (16 votes) · LW · GW

Thanks! Honestly, I'm completely fine filling in whatever content people might expect when looking for "controversial biodiversity opinions on LessWrong" with controversial opinions on actual environmental biodiversity.

Comment by eukaryote on April Fools: Announcing: Karma 2.0 · 2018-04-01T20:41:01.855Z · score: 32 (9 votes) · LW · GW

A fluid serif/sans-serif font, where the serifs get progressively bigger the more formal your comment is.

Comment by eukaryote on Notes From an Apocalypse · 2017-09-23T17:51:45.962Z · score: 5 (3 votes) · LW · GW

This was a fantastic read! (In the interests of letting other people have more trust, I did some research on the Cambrian Explosion a bit ago for this piece, and the author here accurately represents everything as far as I know. This is a really eloquent explanation of both what we think happened at the time, and why pulling data out of the fossil record is so damn hard and creates so much uncertainty. I don't know much about Hox genes, but it seems totally plausible.)

Comment by eukaryote on Fish oil and the self-critical brain loop · 2017-09-17T07:52:03.899Z · score: 0 (0 votes) · LW · GW

My impression is that algae oil is more similar to fish oil than flax, if you decide to experiment - it's where fish get their omega-3 from.

Comment by eukaryote on 2017 LessWrong Survey · 2017-09-17T07:46:04.680Z · score: 18 (18 votes) · LW · GW

I have taken the survey, please shower me in karma.

Comment by eukaryote on Open thread, Jul. 17 - Jul. 23, 2017 · 2017-07-19T07:05:29.004Z · score: 0 (0 votes) · LW · GW

Is there something that lets you search all the rationality/EA blogs at once? I could have sworn I've seen something - maybe a web app made by chaining a bunch of terms together in Google - but I can't remember where or how to find it.

Comment by eukaryote on Stupid Questions May 2017 · 2017-04-26T19:24:32.236Z · score: 2 (2 votes) · LW · GW

There's a lot of uncertainty in this field. I would hope to see a lot of people very quickly shift a lot of effort into researching:

  • Effective interventions for reducing the number of insects in the environment (without, e.g., crashing the climate)
  • Comparative effects of different kinds of land use (e.g. farming crops or vegetables, pasture, left wild, whatever) on insect populations
  • Ability of various other invertebrates to suffer (how about plankton, or nematodes? The same high-confidence evidence showing insects suffer might also show the same for their smaller, more numerous cousins)
  • Shifting public perceptions of gene drives
  • Research into which pesticides cause the least suffering

Currently it seems like Brian Tomasik & the Foundational Research Institute, and Sentience Politics, are paying some attention to considerations like this.

Comment by eukaryote on Throw a prediction party with your EA/rationality group · 2017-01-04T22:19:14.959Z · score: 0 (0 votes) · LW · GW

Yes, I see - it seems like there are two ways to do this exercise.

1) Everybody writes their own predictions and arranges them into probability bins (either artificially after coming up with them, or just writing 5 at 60%, 5 at 70%, etc.) You then check your calibration with a graph like Scott Alexander's.

2) Everybody writes their estimations for the same set of predictions - maybe you generate 50 as a group, and everyone writes down their most likely outcome and how confident they are in it. You then check your Brier score.

Both of these seem useful for different things - in 2), it's a sort of raw measure of how good at making accurate guesses you are. Lower confidence levels make your score worse. In 1), you're looking at calibration across probabilities - there are always going to be things you're only 50% or 70% sure about, and making those intervals reflect reality is as important as things you're 95% certain on.
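For concreteness, the two exercises might be sketched like this (hypothetical predictions, not anyone's actual numbers):

```python
# Method 1: group predictions by stated confidence and compare each bin's
# hit rate to its stated probability (calibration).
# Method 2: score a shared question set with the Brier score (accuracy).

def calibration_bins(predictions):
    """predictions: list of (stated_probability, came_true) pairs."""
    bins = {}
    for p, outcome in predictions:
        bins.setdefault(p, []).append(outcome)
    # Fraction that came true within each confidence bin.
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in bins.items()}

def brier_score(predictions):
    """Mean squared error between stated probability and outcome; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

preds = [(0.6, True), (0.6, False), (0.7, True), (0.7, True), (0.9, True)]
print(calibration_bins(preds))  # e.g. {0.6: 0.5, 0.7: 1.0, 0.9: 1.0}
print(brier_score(preds))
```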

I will edit the original post (in a bit) to reflect this.

Comment by eukaryote on Throw a prediction party with your EA/rationality group · 2017-01-01T23:11:48.742Z · score: 0 (0 votes) · LW · GW

I would imagine that at the 50% level, you can put down a prediction in the positive or negative phrasing, and since it'll be fixed at the beginning of the year (IE, you won't be rephrasing it six months in), you should expect 50% of them to end up happening either way. Right?

(50% predictions are meaningless for calculating Brier scores, but seem valuable for general calibration levels. I suppose forcing them to 45/55% so that you can incorporate them in Brier scores / etc isn't a bad idea. I'm not much of a statistician. Is that what you were saying, Douglas_Knight?)

The 99%/97% thing is true in that the implied error rate triples (1% to 3%), but it seems practically less necessary in that A) if you're making fewer than 30 predictions at that interval, you shouldn't expect any of them to be false, and B) I have a hard time mentally distinguishing 97% and 99% chances, and would expect other people to be similarly bad at it (unless they've practiced or done some rigorous evaluation of the evidence). I'm not sure how much credence I should lend to this.
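The arithmetic behind both points can be checked directly (a quick illustrative sketch):

```python
# A 50% prediction adds exactly 0.25 to your Brier score whichever way it
# resolves, so it tells you nothing about your accuracy:
for outcome in (0, 1):
    assert (0.5 - outcome) ** 2 == 0.25

# And at the 97% level you need roughly 30+ predictions before you should
# expect even one miss, so small samples can't distinguish 97% from 99%:
n = 30
for p in (0.97, 0.99):
    print(p, "expected misses out of", n, "=", n * (1 - p))
# 0.97 gives ~0.9 expected misses; 0.99 gives ~0.3
```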

Comment by eukaryote on Welcome to Less Wrong! (9th thread, May 2016) · 2016-12-29T04:10:43.076Z · score: 5 (5 votes) · LW · GW

Hello friends! I have been orbiting around effective altruism and rationality ever since a friend sent me a weird Harry Potter fanfiction back in high school. I started going to Seattle EA meetings on and off a couple years ago, and have since read a bunch of blogs, made friends who were into existential risk, started my own blog, graduated college, and moved to Seattle.

I went to EA Global this summer, attend and occasionally help organize Seattle EA/rationality events, and work in a bacteriophage lab. I plan on studying international security and biodefense. I recently got back from a trip to the Bay Area, that gaping void in our coastline that all local EA group leaders are eventually sucked into, and was lucky to escape with my life.

I'm gray on the LessWrong slack, and I also have a real name. I had a LW account back early in college that I used for a couple months, but then I got significantly more entangled in the community, heard about the LW revitalization, and wanted a clean break - so here we are. In very recent news, I'm pleased to announce in celebration of finding the welcome thread, I'm making a welcome post.

I wasn't sure if it would be tacky to directly link my blog here, so I put it in my profile instead. :)

Areas of expertise, or at least interest: Microbiology, existential risk, animal ethics and welfare, group social norms, EA in general.

Some things I've been thinking about lately include:

  • How to give my System 1 a visceral sense of what "humanity winning" looks like
  • What mental effects hormonal birth control might have
  • Which invertebrates might be able to feel pain
  • What an alternate system of taxonomy based on convergent evolution, rather than phylogeny, would look like
  • How to start a useful career in biorisk/biodefense
Comment by eukaryote on Open thread, Dec. 26, 2016 - Jan. 1, 2017 · 2016-12-28T23:34:19.178Z · score: 6 (6 votes) · LW · GW

I'd like to know if anyone knows good research (or just good estimates) of the following:

  • Mental effects of hormonal birth control, especially long-term or subtle (think personality changes, maybe cognition, etc, not just increased risk of diagnosed mental illness)
  • If anyone's estimated QALYs lost by menstruating

If not, I'm planning on researching it, but I love when people have already done the thing.

Comment by eukaryote on The engineer and the diplomat · 2016-12-28T23:18:46.193Z · score: 5 (5 votes) · LW · GW

This resonates. When a group conversation has become unexpectedly intimate, I've definitely felt that urge to bail - or to interfere and bring the conversation back to a normal level of engagement. It feels like an intense discomfort, maybe a sense of "I shouldn't be here" or "they shouldn't have to answer that question."

I think that's often a good instinct to have. (In this context, where 'interesting' seems to mean not just a topic you think is neat, but something like 'substantive and highly relevant to someone' or 'involving querying a person's deep-held beliefs', etc. Correct me if I'm wrong.) Where "diplomat mode" might be coming from:

  • The person starting an intensive conversation might be 'inflicting' it on the other person, who can't gracefully duck out

  • Both people are well-acquainted and clearly interested in having the conversation, but haven't considered that they're in public, and in retrospect would prefer not to have everyone else there

  • Even if they seem to be fine with me being there, my role is unclear if I'm not well-versed on the issue - am I supposed to ask questions, chime in with uneducated opinions, just listen to them talk?

  • Relatedly, conversations specific to people's deeply held interests are likely to require more knowledge to engage with, and thus exclude people from the conversation.

  • If other people are sharing personal stories or details, I might feel pressure to do that too

  • Conversations that run closer to what people really care about are more likely to be upsetting, and I don't want to be upset (or, depending, expect them to want to be upset in front of me)

  • I expect other people are uncomfortable, for whatever (any of the above) reasons

Most of these seem to apply less in small groups, or groups where everybody knows each other quite well. Attempting diplomat --> engineer shifts in a large group seems interesting, but risky if near-strangers are present, and it seems like managing or participating in that would take a whole different set of group-based social skills. (IE: reducing the risks above, assessing how comfortable everybody is with those risks, etc.)

Comment by eukaryote on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-24T07:32:47.412Z · score: 2 (2 votes) · LW · GW

Comparative solsticeology: I helped organize the Seattle Solstice, and also attended the Bay Solstice. Both were really nice. A couple major observations:

The Seattle Solstice (also, I think, the New York one) had a really clear light-dark-light progression throughout the presentations, the Bay one didn't - it seemed like each speech or song was its own small narrative arc, and there wasn't an over-arching one.

Seattle's was also in a small venue where there were chairs, but most people sat on cushions of various size on the floors, and were quite close to the performers and speakers. The Bay's was on a stage. While the cushion version probably wouldn't work for a much larger solstice, it felt intimate and communal. (Despite, I think, ~100 attendees at Seattle. Not sure how many people came to the Bay one, ~150 marked themselves as having gone on Facebook but it seemed larger.)