Posts

Universal Eudaimonia 2020-10-05T13:45:19.609Z
Thoughts on hacking aromanticism? 2016-06-02T11:52:55.241Z
Is it sensible for an ambitious nonsmoker to use e-cigarettes? 2015-11-24T22:48:52.998Z
Find someone to talk to thread 2015-09-26T22:24:32.676Z

Comments

Comment by hg00 on Speaking of Stag Hunts · 2021-11-09T02:59:38.166Z · LW · GW

There's a major challenge in all of this in that I see any norms you introduce as being additional tools that can be abused to win–just selectively call out your opponents for alleged violations to discredit them.

I think this is usually done subconsciously -- people are more motivated to find issues with arguments they disagree with.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T10:10:50.990Z · LW · GW

It seems like you wanted me to respond to this comment, so I'll write a quick reply.

Now for the rub: I think anyone working on AI alignment (or any technical question of comparable difficulty) mustn't exhibit this attitude with respect to [the thing they're working on]. If you have a problem where you're not able to achieve high confidence in your own models of something (relative to competing ambient models), you're not going to be able to follow your own thoughts far enough to do good work--not without being interrupted by thoughts like "But if I multiply the probability of this assumption being true, by the probability of that assumption being true, by the probability of that assumption being true..." and "But [insert smart person here] thinks this assumption is unlikely to be true, so what probability should I assign to it really?"

This doesn't seem true for me. I think through details of exotic hypotheticals all the time.

Maybe others are different. But it seems like maybe you're proposing that people self-deceive in order to get themselves confident enough to explore the ramifications of a particular hypothesis. I think we should be a bit skeptical of intentional self-deception. And if self-deception is really necessary, let's make it a temporary suspension of belief sort of thing, as opposed to a life belief that leads you to not talk to those with other views.

It's been a while since I read Inadequate Equilibria. But I remember the message of the book being fairly nuanced. For example, it seems pretty likely to me that there's no specific passage which contradicts the statement "hedgehogs make better predictions on average than foxes".

I support people trying to figure things out for themselves, and I apologize if I unintentionally discouraged anyone from doing that -- it wasn't my intention. I also think people consider learning from disagreement to be virtuous for a good reason, not just due to "epistemic learned helplessness". Also, learning from disagreement seems importantly different from generic deference -- especially if you took the time to learn about their views and found yourself unpersuaded. Basically, I think people should account for both known unknowns (in the form of people who disagree whose views you don't understand) and unknown unknowns, but it seems OK to not defer to the masses / defer to authorities if you have a solid grasp of how they came to their conclusion (this is my attempt to restate the thesis of Inadequate Equilibria as I remember it).

I don't deny that learning from disagreement has costs. Probably some people do it too much. I encouraged MIRI to do it more on the margin, but it could be that my guess about their current margin is incorrect, who knows.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T09:55:24.330Z · LW · GW

Separately, I don't think the MIRI/CFAR associated social circle is a cult.

Nor do I. (I've donated money to at least one of those organizations.) [Edit: I think they might be too tribal for their own good -- many groups are -- but the word "cult" seems too strong.]

I do think MIRI/CFAR is to some degree an "internet tribe". You've probably noticed that those can be pathological.

Anyway, you're writing a lot of words here. There's plenty of space to propose or cite a specific norm, explain why you think it's a generally good norm, and explain why Ilya violated it. I think if you did that, and left off the rest of the rhetoric, it would read as more transparent and less manipulative to me. A norm against "people [comparing] each other to characters from Rick and Morty" seems suspiciously specific to this case (and also not necessarily a great norm in general).

Basically I'm getting more of an "ostracize him!" vibe than a "how can we keep the garden clean?" vibe -- you were pretending to do the second one in your earlier comment, but I think the cursing here makes it clear that your true intention is more like the first. I don't like mob justice, even if the person is guilty. (BTW, proposing specific norms also helps keep you honest, e.g. if your proposed norm was "don't be crass", cursing would violate that norm.)

(It sounds like you view statements like the above as an expression of "aggressive conformism". I could go on about how I disagree with that, but instead I'll simply note that under a slight swap of priors, one could easily make the argument that it was the original comment by Ilya that's an example of "aggressive conformism". And yet I note that for some reason your perception of aggressive conformism was only triggered in response to a comment attacking a position with which you happen to agree, rather than by the initial comment itself. I think it's quite fair to call this a worrisome flag--by your own standards, no less.)

Ilya's position is not one I agree with.

I'm annoyed by aggressive conformism wherever I see it. When it comes to MIRI/CFAR, my instinct is to defend them in venues where everyone criticizes them, and criticize them in venues where everyone defends them.

I'll let you have the last word in this thread. Hopefully that will cut down on unwanted meta-level discussion.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T06:25:34.239Z · LW · GW

It's not obvious to me that Ilya meant his comment as aggressively as you took it. We're all primates and it can be useful to be reminded of that, even if we're primates that go to space sometimes. Asking yourself "would I be responding similarly to how I'm responding now if I were, in fact, in a cult?" seems potentially useful. It's also worth remembering that people coded as good aren't always good.

Your comment was less crass than Ilya's, but it felt like you were slipping "we all agree my opponent is a clear norm violator" into a larger argument without providing any supporting evidence. I was triggered by a perception of manipulativeness and aggressive conformism, which put me in a more factionalistic mindset.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-22T04:11:40.666Z · LW · GW

It is an invitation to turn the comments section into something like a factionalized battleground

If you want to avoid letting a comments section descend into a factionalized battleground, you also might want to avoid saying that people "would not much be missed" if they are banned. From my perspective, you're now at about Ilya's level, but with a lot more words (and a lot more people in your faction).

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T10:53:57.940Z · LW · GW

I am interested in the fact that you find the comment so cult-y though, because I didn't pick that up.

It's a fairly incoherent comment which argues that we shouldn't work to overcome our biases or engage with people outside our group, with strawmanning that seems really flimsy... and it has a bunch of upvotes. Seems like curiosity, argument, and humility are out, and hubris is in.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T08:45:18.162Z · LW · GW

Thanks, this is encouraging.

I think mostly everyone agrees with this, and has tried, and in practice, we keep hitting "inferential distance" shaped walls, and become discouraged, and (partially) give up.

I've found that an unexpected benefit of trying to explain my thinking and overcome the inferential distance is that I think of arguments which change my mind. Just having another person to bounce ideas off of causes me to look at things differently, which sometimes produces new insights. See also the book passage I quoted here.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T08:31:00.747Z · LW · GW

which in turn I fundamentally see as a consequence of epistemic learned helplessness run rampant

Not sure I follow. It seems to me that the position you're pushing, that learning from people who disagree is prohibitively costly, is the one that goes with learned helplessness. ("We've tried it before, we encountered inferential distances, we gave up.")

Suppose there are two execs at an org on the verge of building AGI. One says "MIRI seems wrong for many reasons, but we should try and talk to them anyways to see what we learn." The other says "Nah, that's epistemic learned helplessness, and the costs are prohibitive. Turn this baby on." Which exec do you agree with?

This isn't exactly hypothetical: I know someone at a top AGI org (I believe they "take seriously the idea that they are a computation/algorithm") who reached out to MIRI and was basically ignored. It seems plausible to me that MIRI is alienating a lot of people this way, in fact. I really don't get the impression they are spending excessive resources engaging people with different worldviews.


Anyway, one way to think about it is that talking to people who disagree is just a much more efficient way to increase the accuracy of your beliefs. Suppose the population as a whole is 50/50 pro-Skub and anti-Skub. Suppose you learn that someone is pro-Skub. This should cause you to update in the direction that they've been exposed to more evidence for the pro-Skub position than the anti-Skub position. If they're trying to learn facts about the world as quickly as possible, their time is much better spent reading an anti-Skub book than a pro-Skub book, since the pro-Skub book will have more facts they already know. An anti-Skub book also has more decision-relevant info. If they read a pro-Skub book, they'll probably still be pro-Skub afterwards. If they read an anti-Skub book, they might change their position and therefore change their actions.
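
Here's a minimal toy simulation of that point (the fact counts and book size are numbers I made up purely for illustration): a pro-Skub reader who already knows most of the pro-Skub literature learns far more new facts, on average, from a book written by the other side.

```python
import random

random.seed(0)

N_FACTS = 20    # facts on each side of the Skub debate (made-up toy number)
BOOK_SIZE = 10  # facts covered by a single book (made-up toy number)
TRIALS = 100_000

# A pro-Skub reader: already familiar with most pro-Skub facts, few anti-Skub ones.
known_pro_facts = set(range(15))   # knows 15 of the 20 pro-Skub facts
known_anti_facts = set(range(5))   # knows 5 of the 20 anti-Skub facts

def new_facts_from_book(already_known):
    """Draw a book's facts at random and count the ones the reader hasn't seen yet."""
    book = random.sample(range(N_FACTS), BOOK_SIZE)
    return sum(1 for fact in book if fact not in already_known)

avg_new_pro = sum(new_facts_from_book(known_pro_facts) for _ in range(TRIALS)) / TRIALS
avg_new_anti = sum(new_facts_from_book(known_anti_facts) for _ in range(TRIALS)) / TRIALS

print(f"average new facts from a pro-Skub book:  {avg_new_pro:.2f}")   # ~2.5
print(f"average new facts from an anti-Skub book: {avg_new_anti:.2f}")  # ~7.5
```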

Talking to an informed anti-Skub in person is even more efficient, since the anti-Skub person can present the very most relevant/persuasive evidence that is the very most likely to change their actions.

Applying this thinking to yourself, if you've got a particular position you hold, that's evidence you've been disproportionately exposed to facts that favor that position. If you want to get accurate beliefs quickly you should look for the strongest disconfirming evidence you can find.

None of this discussion even accounts for confirmation bias, groupthink, or information cascades! I'm getting a scary "because we read a website that's nominally about biases, we're pretty much immune to bias" vibe from your comment. Knowing about a bias and having implemented an effective, evidence-based debiasing intervention for it are very different.

BTW this is probably the comment that updated me the most in the direction that LW will become / already is a cult.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T07:11:24.708Z · LW · GW

I'm not sure I agree with Jessica's interpretation of Eliezer's tweets, but I do think they illustrate an important point about MIRI: MIRI can't seem to decide if it's an advocacy org or a research org.

"if you actually knew how deep neural networks were solving your important mission-critical problems, you'd never stop screaming" is frankly evidence-free hyperbole, of the same sort activist groups use (e.g. "taxation is theft"). People like Chris Olah have studied how neural nets solve problems a lot, and I've never heard of them screaming about what they discovered.

Suppose there was a libertarian advocacy group with a bombastic leader who liked to go tweeting things like "if you realized how bad taxation is for the economy, you'd never stop screaming". After a few years of advocacy, the group decides they want to switch to being a think tank. Suppose they hire some unusually honest economists, who study taxation and notice things in the data that kinda suggest taxation might actually be good for the economy sometimes. Imagine you're one of those economists and you're gonna ask your boss about looking into this more. You might have second thoughts like: Will my boss scream at me? Will they fire me? The organizational incentives don't seem to favor truthseeking.

Another issue with advocacy is you can get so caught up in convincing people that the problem needs to be solved that you forget to solve it, or even take actions that are counterproductive for solving it. For AI safety advocacy, you want to convince everyone that the problem is super difficult and requires more attention and resources. But for AI safety research, you want to make the problem easy, and solve it with the attention and resources you have.

In The Algorithm Design Manual, Steven Skiena writes:

In any group brainstorming session, the most useful person in the room is the one who keeps asking “Why can’t we do it this way?”; not the nitpicker who keeps telling them why. Because he or she will eventually stumble on an approach that can’t be shot down... The correct answer to “Can I do it this way?” is never “no,” but “no, because. . . .” By clearly articulating your reasoning as to why something doesn’t work, you can check whether you have glossed over a possibility that you didn’t think hard enough about. It is amazing how often the reason you can’t find a convincing explanation for something is because your conclusion is wrong.

Being an advocacy org means you're less likely to hire people who continually ask "Why can’t we do it this way?", and those who are hired will be discouraged from this behavior if it's implied that a leader might scream if they dislike the proposed solution. The activist mindset tends to favor evidence-free hyperbole over carefully checking if you glossed over a possibility, or wondering if an inability to convince others means your conclusion is wrong.

I dunno if there's an easy solution to this -- I would like to see both advocacy work and research work regarding AI safety. But having them in the same org seems potentially suboptimal.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T04:52:31.807Z · LW · GW

The most natural shared interest for a group united by "taking seriously the idea that you are a computation" seems like computational neuroscience, but that's not on your list, nor do I recall it being covered in the sequences. If we were to tell 5 random philosophically inclined STEM PhD students to write a lit review on "taking seriously the idea that you are a computation" (giving them that phrase and nothing else), I'm quite doubtful we would see any sort of convergence towards the set of topics you allude to (Haskell, anthropics, mathematical logic).

As a way to quickly sample the sequences, I went to Eliezer's userpage, sorted by score, and checked the first 5 sequence posts:

IMO very little of the content of these 5 posts fits strongly into the theme of "taking seriously the idea that you are a computation". I think this might be another one of these rarity narrative things (computers have been a popular metaphor for the brain for decades, but we're the only ones who take this seriously, same way we're the only ones who are actually trying).

the sequences pipeline is largely creating a selection for this philosophical stance

I think the vast majority of people who bounce off the sequences do so either because it's too longwinded or they don't like Eliezer's writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.

In this post Eliezer wrote:

I've written about how "science" is inherently public...

But that's only one vision of the future. In another vision, the knowledge we now call "science" is taken out of the public domain—the books and journals hidden away, guarded by mystic cults of gurus wearing robes, requiring fearsome initiation rituals for access—so that more people will actually study it.

I assume this has motivated a lot of the stylistic choices in the sequences and Eliezer's other writing: the 12 virtues of rationality, the litany of Gendlin/Tarski/Hodgell, parables and fables, Jeffreyssai and his robes/masks/rituals.

I find the sequences to be longwinded and repetitive. I think Eliezer is a smart guy with interesting ideas, but if I wanted to learn quantum mechanics (or any other academic topic the sequences cover), I would learn it from someone who has devoted their life to understanding the subject and is widely recognized as a subject matter expert.

From my perspective, the question is how anyone gets through all 1800+ pages of the sequences. My answer is that the post I linked is right. The mystical presentation, where Eliezer plays the role of your sensei who throws you to the mat out of nowhere if you forgot to keep your center of gravity low, really resonates with some people (and really doesn't resonate with others). By the time someone gets through all 1800+ pages, they've invested a significant chunk of their ego in Eliezer and his ideas.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T01:42:17.546Z · LW · GW

Something I try to keep in mind about critics is that people who deeply disagree with you are also not usually very invested in what you're doing, so from their perspective there isn't much of an incentive to put effort into their criticism. But in theory, the people who disagree with you the most are also the ones you can learn the most from.

You want to be the sort of person where if you're raised Christian, and an atheist casually criticizes Christianity, you don't reject the criticism immediately because "they didn't even take the time to read the Bible!"

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T01:37:21.571Z · LW · GW

Any thoughts on how we can help you be at peace?

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T01:35:23.249Z · LW · GW

Thanks. After thinking for a bit... it doesn't seem to me that Topher frobnitzes Scott, so indeed Eliezer's reaction seems inappropriately strong. Publishing emails that someone requested (and was not promised) privacy for is not an act of sadism.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-21T00:55:13.372Z · LW · GW

It seems quite possible to me that the philosophical stance + mathematical taste you're describing aren't "natural kinds" (e.g. the topics you listed don't actually have a ton in common, besides being popular MIRI-sphere topics).

If that's the case, selecting for people with the described philosophical stance + mathematical taste could basically be selecting for "people with little resistance to MIRI's organizational narrative" (people who have formed their opinions about math + philosophy based on opinions common in/around MIRI).

selection on philosophical competence, and thus, by proxy, philosophical agreement

It sounds like you're saying that at MIRI, you approximate a potential hire's philosophical competence by checking to see how much they agree with you on philosophy. That doesn't seem great for group epistemics?

I've been following MIRI for many years now. I've sometimes noticed conversations that I'm tempted to summarize as "patting each other on the back for how right we all are". ("No one else is actually trying" has that flavor. Here is a comment I wrote recently which might also be illustrative. You could argue this is a universal human tendency, but when I look back at different organizations where I've worked, I don't think any of them had it nearly as bad as MIRI does -- or at least, as bad as MIRI used to have it. I believe it's gotten a bit better in recent years.)

I think MIRI is doing important work on important problems. I also think it would be high value of information for MIRI to experiment with trying to learn from people who don't share the "typical MIRI worldview" -- people interested in topics that MIRI-sphere people don't talk about much, people who have a somewhat different philosophical stance, etc. I think this could make MIRI's research significantly stronger. The marginal value of talking to / hiring a researcher who's already ~100% in agreement with you seems low compared to the marginal value of talking to / hiring a researcher who brings something new to the table.

If you're still in the mode of "searching for more promising paths", I think this sort of exploration strategy could be especially valuable. Perhaps you could establish some sort of visiting scholars program. This could maximize your exposure to diverse worldviews, and also encourage new researchers to be candid in their disagreements, if their employment is for a predetermined, agreed-upon length. (I know that SIAI had a visiting fellows program in years past that wasn't that great. If you want me to help you think about how to run something better I'm happy to do that.)

Another thought is it might be helpful to try & articulate precisely what makes MIRI different from other AI safety organizations, and make sure your hiring selects for that and nothing else. When I think about what makes MIRI different from other AI safety orgs, there are some broad things that come to mind:

But there are also some much more specific things, like the ones you mentioned -- interest in specific, fairly narrow mathematical & philosophical topics. From the outside it looks kinda like MIRI suffers from "not invented here" syndrome.

My personal guess is that MIRI would be a stronger org, and the AI safety ecosystem as a whole would be stronger, if MIRI expanded their scope to the bullet points I listed above and tried to eliminate the influence of "not invented here" on their hiring decisions. (My reasoning is partially based on the fact that I can't think of AI safety organizations besides MIRI which match the bullet points I listed. I think this proposal would be an expansion into neglected research territory. I'd appreciate a correction if there are orgs I'm unaware of / not remembering.)

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T23:11:57.867Z · LW · GW

What screed are you referring to?

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-20T23:00:44.559Z · LW · GW

These claims seem rather extreme and unsupported to me:

  • "Lots of upper middle class adults hardly know how to have conversations..."

  • "the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019."

I suggest that if you write a toplevel post, you search for evidence for and against them.

Elaborating a bit on my reasons for skepticism:

  • It seems like for the past 10+ years, you've been mostly interacting with people in CFAR-adjacent contexts. I'm not sure what your source of knowledge is on "average" upper middle class adults/workplaces. My personal experience is that normal people are comfortable having non-superficial conversations if you convince them you aren't weird first, and normal workplaces are pretty much fine. (I might be overselecting on smaller companies where people have a sense of humor.)

    • A specific concrete piece of evidence: Joe Rogan has one of the world's most popular podcasts, and the episodes I've heard very much seem to me like they're "hitting new unpredictable thoughts". Rogan is notorious for talking to guests about DMT, for instance.
  • The two observations seem a bit inconsistent, if you'll grant that working class people generally have worse working conditions than upper middle class people -- you'd expect them to experience more workplace abuse and therefore have more trauma. (In which context would an abusive boss be more likely to get called out successfully: a tech company or a restaurant?)

  • I've noticed a pattern where people like Vassar will make extreme claims without much supporting evidence and people will respond with "wow, what an interesting guy" instead of asking for evidence. I'm trying to push back against that.

  • I can imagine you'd be tempted to rationalize that whatever pathological stuff is/was present at CFAR is also common in the general population / organizations in general.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T09:53:08.251Z · LW · GW

I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway - you're treating a symptom.

If psychosis is caused by an underlying physiological/biochemical process, wouldn't that suggest that e.g. exposure to Leverage Research wouldn't be a cause of it?

If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?

I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that's true, I'd expect changing someone's environment to be more helpful for the former sort of case.

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T09:30:25.896Z · LW · GW

Does anyone have thoughts about avoiding failure modes of this sort?

Especially in the "least convenient possible world" where some of the bullet points are actually true -- like, if we're disseminating principles for wannabe AI Manhattan Projects, and we're optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?


Most of my ideas are around "staying grounded" -- spend significant time hanging out with "normies" who don't buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.). Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)

But I'm just guessing, and I encourage others to share their thoughts. Especially people who've observed/experienced mental health crises firsthand -- how could they have been prevented?

EDIT: I'm also curious how to think about scrupulosity. It seems to me that team members for an AI Manhattan Project should ideally have more scrupulosity/paranoia than average, for obvious reasons. ("A bit above the population average" might be somewhere around "they can count on one hand the number of times they blacked out while drinking" -- I suspect communities like ours already select for high-ish levels of scrupulosity.) However, my initial guess is that instead of directing that scrupulosity towards implementation of some sort of monastic ideal, they should instead direct that scrupulosity towards trying to make sure their plan doesn't fail in some way they didn't anticipate, trying to make sure their code doesn't have any bugs, monitoring their power-seeking tendencies, seeking out informed critics to learn from, making sure they themselves aren't a single point of failure, making sure that important secrets stay secret, etc. (what else should be on this list?) But, how much paranoia/scrupulosity is too much?

Comment by hg00 on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T09:16:14.389Z · LW · GW

The community still seems in the middle of sensemaking around Leverage

Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.

Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.

I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.

Comment by hg00 on Common knowledge about Leverage Research 1.0 · 2021-10-16T01:40:47.013Z · LW · GW

The rationalist community did in fact have to have such conversations about Eliezer over the years, and (IMO) mostly concluded that he actively wants to just sit in a comfortable cave and produce FAI progress with his team, and so he delegates any social authority/power he gains to trusted others, making him a safer weirdo leader figure than most.

Was this conversation held publicly on a non-Eliezer-influenced online forum?

I think there's a pretty big difference -- from accounts I've read about Leverage, the "Leverage community" had non-public conversations about Geoff as well, and they concluded he was a great guy.

Comment by hg00 on Review: The End Is Always Near · 2021-09-28T21:49:06.183Z · LW · GW

https://xkcd.com/808/

If it worked, militaries would be using it to... Ditch boot camp and instead harden soldiers for battle by having them participate in group therapy

I think you're both too credulous of the research you linked, and extrapolating from it too confidently.

Comment by hg00 on Jimrandomh's Shortform · 2021-07-23T02:31:52.194Z · LW · GW

The advice I've heard is to eat a variety of fruits and vegetables of different colors to get a variety of antioxidants in your diet.

Until recently, the thinking had been that the more antioxidants, the less oxidative stress, because all of those lonely electrons would quickly get paired up before they had the chance to start mucking things up in our cells. But that thinking has changed.

Drs. Cleva Villanueva and Robert Kross published a 2012 review titled “Antioxidant-Induced Stress” in the International Journal of Molecular Sciences. We spoke via Skype about the shifting understanding of antioxidants.

“Free radicals are not really the bad ones or antioxidants the good ones,” Villanueva told me. Their paper explains the process by which antioxidants themselves become reactive, after donating an electron to a free radical. But, in cases when a variety of antioxidants are present, like the way they come naturally in our food, they can act as a cascading buffer for each other as they in turn give up electrons to newly reactive molecules.

https://blogs.scientificamerican.com/food-matters/antioxidant-supplements-too-much-of-a-kinda-good-thing/

On a meta level, I don't think we understand nutrition well enough to reason about it from first principles, so if the lore among dietitians is that people who eat a variety of foods are healthier, I think we should put stock in that.

Similarly: "Institutional sources consistently overstate the importance of a varied diet, because this prevents failures of dietary advice from being too legible; if you tell someone to eat a varied diet, they can't blame you if they're diagnosed with a deficiency." But there's a real point here, e.g. suppose that you have just a few standard meals, but all of the high-magnesium food items are being paired with phytates, and you end up magnesium deficient.

Comment by hg00 on [Link] Musk's non-missing mood · 2021-07-14T06:40:00.104Z · LW · GW

An interesting missing mood I've observed in discussions of AI safety: When a new idea for achieving safe AI is proposed, you might expect that people concerned with AI risk would show a glimmer of eager curiosity. Perhaps the AI safety problem is actually solvable!

But I've pretty much never observed this. A more common reaction seems to be a sort of an uneasy defensiveness, sometimes in combination with changing the subject.

Another response I occasionally see is someone mentioning a potential problem in a manner that practically sounds like they are rebuking the person who shared the new idea.

I eventually came to the conclusion that there is some level on which many people in the AI safety community actually don't want to see the problem of AI safety solved, because too much of their self-concept is wrapped up in AI safety being a super difficult problem. I highly doubt this occurs on a conscious level, it's probably due to the same sort of subconscious psychological defenses you describe, e.g. embarrassment at not having seen the solution oneself.

Comment by hg00 on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:40:38.182Z · LW · GW

Hiding an aircraft carrier battle group on the open sea isn't possible.

This think tank disagrees.

Comment by hg00 on Does playing hard to get work? AB testing for romance · 2020-10-28T06:03:00.499Z · LW · GW

Looking forward to the results.

Comment by hg00 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-07T06:59:54.602Z · LW · GW

Somewhere I read that a big reason IQ tests aren't all that popular is that when they were first introduced, lots of intellectuals took them and didn't score all that high. I'm hoping prediction markets don't meet a similar fate.

Comment by hg00 on Universal Eudaimonia · 2020-10-06T13:01:18.410Z · LW · GW

It's fiction ¯\_(ツ)_/¯

I guess I'll say a few words in defense of doing something like this... Supposing we're taking an ethically consequentialist stance.  In that case, the only purpose of punishment, basically, is to serve as a deterrent.  But in our glorious posthuman future, nanobots will step in before anyone is allowed to get hurt, and crimes will be impossible to commit.  So deterrence is no longer necessary and the only reason to punish people is due to spite.  But if people are feeling spiteful towards one another on Eudaimonia that would kill the vibe.  Being able to forgive one person you disagree with seems like a pretty low bar where being non-spiteful is concerned.  (Other moral views might consider punishment to be a moral imperative even if it isn't achieving anything from a consequentialist point of view.  But consequentialism is easily the most popular moral view on LW according to this survey.)

A more realistic scheme might involve multiple continents for people with value systems that are strongly incompatible, perhaps allowing people to engage in duels on a voluntary basis if they're really sure that is what they want to do.

In any case, the name of the site is "Less Wrong" not "Always Right", so I feel pretty comfortable posting something which I suspect may be flawed and letting commenters find flaws (and in fact that was part of why I made this post, to see what complaints people would have, beyond the utility of sharing a fun whimsical story.  But overall the post was more optimized for whimsy.)

Comment by hg00 on Rationality and Climate Change · 2020-10-06T04:01:08.441Z · LW · GW

For some thoughts on how climate change stacks up against other world-scale issues, see this.

Comment by hg00 on Universal Eudaimonia · 2020-10-06T03:45:26.426Z · LW · GW

Yep. Good thing a real AI would come up with a much better idea! :)

Comment by hg00 on Needed: AI infohazard policy · 2020-09-30T10:33:17.822Z · LW · GW

It seems to me that under ideal circumstances, once we think we've invented FAI, before we turn it on, we share the design with a lot of trustworthy people we think might be able to identify problems.  I think it's good to have the design be as secret as possible at that point, because that allows the trustworthy people to scrutinize it at their leisure.  I do think the people involved in the design are liable to attract attention--keeping this "FAI review project" secret will be harder than keeping the design itself secret.  (It's easier to keep the design for the bomb secret than hide the fact that top physicists keep mysteriously disappearing.)  And any purported FAI will likely come after a series of lesser systems with lucrative commercial applications used to fund the project, and those lucrative commercial applications are also liable to attract attention.  So I think it's strategically valuable to have the distance between published material and a possible FAI design be as large as possible.  To me, the story of nuclear weapons is a story of how this is actually pretty hard even when well-resourced state actors try to do it.

Of course, that has to be weighed against the benefit of openness.  How is openness helpful?  Openness lets other researchers tell you if they think you're pursuing a dangerous research direction, or if there are serious issues with the direction you're pursuing which you are neglecting.  Openness helps attract collaborators.  Openness helps gain prestige.  (I would argue that prestige is actually harmful because it's better to keep a low profile, but I guess prestige is useful for obtaining required funding.)  How else is openness helpful?

My suspicion is that those papers on Arxiv with 5 citations are mostly getting cited by people who already know the author, and the Arxiv publication isn't actually doing much to attract collaboration.  It feels to me like if our goal is to help researchers get feedback on their research direction or find collaborators, there are better ways to do this than encouraging them to publish their work.  So if we could put mechanisms in place to achieve those goals, that could remove much of the motivation for openness, which would be a good thing in my view.

Comment by hg00 on EA Relationship Status · 2020-09-21T06:57:12.581Z · LW · GW

Dating is a project that can easily suck up a lot of time and attention, and the benefits seem really dubious (I know someone who had their life ruined by a bad divorce).

I would be interested in the opposite question: Why *would* an EA try and find someone to marry? I'm not trying to be snarky, I genuinely want to hear why in case I should change my strategy. The only reason I can think of is if you're a patient longtermist and you think your kids are more likely to be EAs.

Comment by hg00 on Open & Welcome Thread - June 2020 · 2020-08-08T14:26:25.916Z · LW · GW

I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport presents a barrier for achieving a legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so you always have at least a 5 year buffer on it say, in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.)

Side note: I once talked to someone who became stuck in a country that he was not a citizen of because he allowed his passport to expire and couldn't travel back home to get it renewed. (He was from a small country. My guess is that the US offers passport services without needing to travel back home. But I could be wrong.)

Comment by hg00 on Open & Welcome Thread - July 2020 · 2020-08-08T06:46:03.010Z · LW · GW

Worth noting that we have at least one high-karma user who is liable to troll us with any privileges granted to high-karma users.

Comment by hg00 on Do Women Like Assholes? · 2020-06-24T05:44:41.557Z · LW · GW

I was always nice and considerate, and it didn’t work until I figured out how to filter for women who are themselves lovely and kind.

Does anyone have practical tips on finding lonely single women who are lovely and kind? I've always assumed that these were universally attractive attributes, and thus there would be much more competition for such women.

Comment by hg00 on Most reliable news sources? · 2020-06-06T22:01:08.035Z · LW · GW

The Financial Times, maybe FiveThirtyEight

hedonometer.org is a quick way to check if something big has happened

Comment by hg00 on Open & Welcome Thread - June 2020 · 2020-06-06T08:41:01.719Z · LW · GW

Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe if you're a citizen of one of 50 nations on their "Friendly Nations" list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay's permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain--you just need to be visiting the country every 3 years.

Comment by hg00 on The Chilling Effect of Confiscation · 2020-04-26T08:41:32.434Z · LW · GW

I think this is the best argument I've seen in favor of mask seizure / media misrepresentation on this:

https://www.reddit.com/r/slatestarcodex/comments/g5yh64/us_federal_government_seizing_ppe_to_what_end/fo6pyel/

Comment by hg00 on Judgment, Punishment, and the Information-Suppression Field · 2019-11-06T22:02:38.892Z · LW · GW

Downvoted because I don't want LW to be the kind of place where people casually make inflammatory political claims, in a way that seems to assume this is something we all know and agree with, without any supporting evidence.

Comment by hg00 on Open & Welcome Thread - October 2019 · 2019-10-14T04:31:45.833Z · LW · GW

The mental models in this post seem really generally useful: https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs

Comment by hg00 on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2019-09-27T03:44:39.351Z · LW · GW

Nice post. I think one thing which can be described in this framework is a kind of "distributed circular reasoning". The argument is made that "we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C", but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.

Comment by hg00 on Religion as Goodhart · 2019-07-12T02:09:15.070Z · LW · GW

The trouble is that tradition is undocumented code, so you aren't sure what is safe to change when circumstances change.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T19:48:29.957Z · LW · GW

Seems like a bad comparison, since, as an atheist, you don't accept the Bible's truth, so the things the preacher is saying are basically spam from your perspective. There's also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don't accept the premises he's arguing from.

Yes, and the preacher doesn't ask me about my premises before attempting to impose their values on me. Even if I share some or all of the preacher's premises, they're trying to force a strong conclusion about my moral character upon me and put my reputation at stake without giving me a chance to critically examine the logic with which that conclusion was derived or defend my reputation. Seems like a rather coercive conversation, doesn't it?

Does it seem to you that the preacher is engaging with me in good faith? Are they curious, or have they already written the bottom line?

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T17:08:10.068Z · LW · GW

I think I see a motte and bailey around what it means to be a good person. Notice at the beginning of the post, we've got statements like

Anita reassured Susan that her comments were not directed at her personally

...

they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault

And by the end, we've got statements like

it's quite hard to actually stop participating in racism... In societies with structural racism, ethical behavior requires skillfully and consciously reducing harm

...

almost every person's behavior is morally depraved a lot of the time

...

What if there are bad things that are your fault?

...

accept that you are irredeemably evil

Maybe Susan knows on some level that her colleagues aren't being completely honest when they claim to think she's not at fault. Maybe she correctly reads conversational subtext suggesting she is morally depraved, bad things are her fault, and she is irredeemably evil. This could explain why she reacts so negatively.

The parallel you draw to Calvinist doctrine is interesting. Presumably most of us would not take a Christian preacher very seriously if they told us we were morally depraved. As an atheist, when a preacher on the street tells me this, I see it as an unwelcome attempt to impose their values on me. I don't tell the preacher that I accept the fact that I'm irredeemably evil, because I don't want to let the preacher browbeat me into changing the way I live my life.

Now suppose you were accosted by such a preacher, and when you responded negatively, they proclaimed that your choice to defend yourself (by telling about times when you worked to make the world a better place, say) was further evidence of your depravity. The preacher brings out their bible and points to a verse which they interpret to mean "it is a sin to defend yourself against street preachers". How do you react?

Seems like a bit of a Catch-22 eh? The preacher has created a situation where if I accept their conversational frame, I'm considered a terrible person if I don't do whatever they say. See numbers 13, 18 and 21 on this list.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T04:41:13.269Z · LW · GW

Maybe you're right, I haven't seen it used much in practice. Feel free to replace "Something like Nonviolent Communication" with "Advice for getting along with people" in that sentence.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T03:07:48.192Z · LW · GW

Agreed. Also, remember that conversations are not always about facts. Oftentimes they are about the relative status of the participants. Something like Nonviolent Communication might seem like tone policing, but through a status lens, it could be seen as a practice where you stop struggling for higher status with your conversation partner and instead treat them compassionately as an equal.

Comment by hg00 on Scholarship: How to Do It Efficiently · 2019-06-23T23:44:13.859Z · LW · GW

Just saw this Facebook group for getting papers. There's also this. And https://libkey.io/

Comment by hg00 on The Relationship Between Hierarchy and Wealth · 2019-01-26T11:19:58.983Z · LW · GW

Interesting post. I think it might be useful to examine the intuition that hierarchy is undesirable, though.

It seems like you might want to separate out equality in terms of power from equality in terms of welfare. Most of the benefits from hierarchy seem to be from power inequality (let the people who are the most knowledgeable and the most competent make important decisions). Most of the costs come in the form of welfare inequality (decision-makers co-opting resources for themselves). (The best argument against this frame would probably be something about the average person having self-actualization, freedom, and mastery of their destiny. This could be a sense in which power equality and welfare equality are the same thing.)

Robin Hanson's "vote values, bet beliefs" proposal is an intriguing way to get the benefits of inequality without the costs. You have the decisions being made by wealthy speculators, who have a strong financial incentive to leave the prediction market if they are less knowledgeable and competent than the people they're betting against. But all those brains get used in the service of achieving values that everyone in society gets equal input on. So you have a lot of power inequality but a lot of welfare equality. Maybe you could even address the self-actualization point by making that one of the values that people vote on somehow. (Also, it's not clear to me that voting on values rather than politicians actually represents a loss of freedom to master your destiny etc.)
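
Here's a minimal toy sketch of the betting half of that mechanism (the noise levels and the wealth-weighted pricing rule are my own illustrative assumptions, not anything from Hanson's actual proposal): over repeated questions, wealth -- and with it influence over the market price -- flows from the less knowledgeable speculator to the more knowledgeable one, which is the "incentive to leave" in action.

```python
import random

random.seed(0)

ROUNDS = 1_000
NOISE_INFORMED = 0.02    # assumed belief error of a well-informed speculator
NOISE_UNINFORMED = 0.25  # assumed belief error of a poorly-informed speculator

def clip(p):
    """Keep probabilities away from 0 and 1."""
    return min(0.99, max(0.01, p))

wealth = {"informed": 1.0, "uninformed": 1.0}

for _ in range(ROUNDS):
    p_true = random.random()  # each round: a binary question with this true probability
    beliefs = {
        "informed": clip(p_true + random.gauss(0, NOISE_INFORMED)),
        "uninformed": clip(p_true + random.gauss(0, NOISE_UNINFORMED)),
    }
    # Market price = wealth-weighted average belief.
    total = sum(wealth.values())
    price = sum(wealth[t] * beliefs[t] for t in wealth) / total
    outcome = random.random() < p_true
    # Kelly-style betting: each trader stakes wealth in proportion to their beliefs.
    # Total wealth is conserved; it just flows toward the better predictor.
    for t in wealth:
        wealth[t] *= beliefs[t] / price if outcome else (1 - beliefs[t]) / (1 - price)

print(wealth)  # the uninformed trader's bankroll (and price influence) shrinks toward zero
```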

This is also interesting.

Comment by hg00 on Reverse Doomsday Argument is hitting preppers hard · 2018-12-28T21:11:07.315Z · LW · GW

If you're willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:

https://medium.com/s/story/the-surprisingly-solid-mathematical-case-of-the-tin-foil-hat-gun-prepper-15fce7d10437

Comment by hg00 on “She Wanted It” · 2018-11-12T19:47:57.092Z · LW · GW

+1 for this. It's tremendously refreshing to see someone engage the opposing position on a controversial issue in good faith. I hope you don't regret writing it.

Would your model predict that if we surveyed fans of *50 Shades of Grey*, they have experienced traumatic abuse at a rate higher than the baseline? This seems like a surprising but testable prediction.

Personally, I think your story might be accurate for your peer group, but that your peer group is also highly non-representative of the population at large. There is very wide variation in female sexual preferences. For example, the stupidslutsclub subreddit was created for women to celebrate their enjoyment of degrading and often dubiously consensual sex. The conversation there looks nothing like the conversation about sex in the rationalist community, because they are communities for very different kinds of people. When I read the stupidslutsclub subreddit, I don't get the impression that the female posters are engaging in the sort of self-harm you describe. They're just women with some weird kinks.

Most PUA advice is optimized for picking up neurotypical women who go clubbing every weekend. Women in the rationalist community are far more likely to spend Friday evening reading Tumblr than getting turnt. We shouldn't be surprised if there are a lot of mating behaviors that women in one group enjoy and women in the other group find disturbing.

If I hire someone to commit a murder, I'm guilty of something bad. By creating an incentive for a bad thing to happen, I have caused a bad thing to happen, therefore I'm guilty. By the same logic, we could argue that if a woman systematically rejects non-abusive men in favor of abusive men, she is creating an incentive for men to be abusive, and is therefore guilty. (I'm not sure whether I agree with this argument. It's not immediately compatible with the "different strokes for different folks" point from previous paragraphs. But if feminists made it, I would find it more plausible that their desire is to stop a dynamic they consider harmful, as opposed to engage in anti-male sectarianism.)

Another point: Your post doesn't account for replaceability effects. If a woman is systematically rejecting non-abusive men in favor of abusive men, and a guy presents himself as someone who's abusive enough to be attractive to her but less abusive than the average guy she would date, then you could argue that she gains utility through dating him. And if she has a kid, we'd probably rather she have a kid with someone who's pretending to be a jerk than someone who actually is a jerk, since the kid only inherits jerk genes in the latter case. (BTW, I think "systematically rejecting non-abusive men in favor of abusive men" is an extreme case that is probably quite rare/nonexistent in the population, but it's simpler to think about.)

Once you account for replaceability, it could be that the most effective intervention for decreasing abuse is actually to help non-abusive guys be more attractive. If non-abusive guys are more attractive, some women who would have dated abusive guys will date them instead, so the volume of abuse will decrease. This could involve, for example, advice for how to be dominant in a sexy but non-abusive way.

Comment by hg00 on You Are Being Underpaid · 2018-05-05T06:48:04.401Z · LW · GW

https://kenrockwell.com/business/two-hour-rule.htm