Posts

Universal Eudaimonia 2020-10-05T13:45:19.609Z
Thoughts on hacking aromanticism? 2016-06-02T11:52:55.241Z
Is it sensible for an ambitious nonsmoker to use e-cigarettes? 2015-11-24T22:48:52.998Z
Find someone to talk to thread 2015-09-26T22:24:32.676Z

Comments

Comment by hg00 on [Prediction] What war between the USA and China would look like in 2050 · 2021-05-28T04:40:38.182Z · LW · GW

Hiding an aircraft carrier battle group on the open sea isn't possible.

This think tank disagrees.

Comment by hg00 on Does playing hard to get work? AB testing for romance · 2020-10-28T06:03:00.499Z · LW · GW

Looking forward to the results.

Comment by hg00 on Can we hold intellectuals to similar public standards as athletes? · 2020-10-07T06:59:54.602Z · LW · GW

Somewhere I read that a big reason IQ tests aren't all that popular is that when they were first introduced, lots of intellectuals took them and didn't score all that high.  I'm hoping prediction markets don't meet a similar fate.

Comment by hg00 on Universal Eudaimonia · 2020-10-06T13:01:18.410Z · LW · GW

It's fiction ¯\_(ツ)_/¯

I guess I'll say a few words in defense of doing something like this... Suppose we're taking an ethically consequentialist stance.  In that case, the only purpose of punishment, basically, is to serve as a deterrent.  But in our glorious posthuman future, nanobots will step in before anyone is allowed to get hurt, and crimes will be impossible to commit.  So deterrence is no longer necessary, and the only remaining reason to punish people is spite.  But if people were feeling spiteful towards one another on Eudaimonia, that would kill the vibe.  Being able to forgive one person you disagree with seems like a pretty low bar where being non-spiteful is concerned.  (Other moral views might consider punishment a moral imperative even if it isn't achieving anything from a consequentialist point of view.  But consequentialism is easily the most popular moral view on LW according to this survey.)

A more realistic scheme might involve multiple continents for people with value systems that are strongly incompatible, perhaps allowing people to engage in duels on a voluntary basis if they're really sure that is what they want to do.

In any case, the name of the site is "Less Wrong", not "Always Right", so I feel pretty comfortable posting something I suspect may be flawed and letting commenters find the flaws. (In fact, part of why I made this post, beyond the fun of sharing a whimsical story, was to see what complaints people would have. But overall the post was more optimized for whimsy.)

Comment by hg00 on Rationality and Climate Change · 2020-10-06T04:01:08.441Z · LW · GW

For some thoughts on how climate change stacks up against other world-scale issues, see this.

Comment by hg00 on Universal Eudaimonia · 2020-10-06T03:45:26.426Z · LW · GW

Yep. Good thing a real AI would come up with a much better idea! :)

Comment by hg00 on Needed: AI infohazard policy · 2020-09-30T10:33:17.822Z · LW · GW

It seems to me that under ideal circumstances, once we think we've invented FAI, before we turn it on, we share the design with a lot of trustworthy people we think might be able to identify problems.  I think it's good to have the design be as secret as possible at that point, because that allows the trustworthy people to scrutinize it at their leisure.  I do think the people involved in the design are liable to attract attention--keeping this "FAI review project" secret will be harder than keeping the design itself secret.  (It's easier to keep the design for the bomb secret than hide the fact that top physicists keep mysteriously disappearing.)  And any purported FAI will likely come after a series of lesser systems with lucrative commercial applications used to fund the project, and those lucrative commercial applications are also liable to attract attention.  So I think it's strategically valuable to have the distance between published material and a possible FAI design be as large as possible.  To me, the story of nuclear weapons is a story of how this is actually pretty hard even when well-resourced state actors try to do it.

Of course, that has to be weighed against the benefit of openness.  How is openness helpful?  Openness lets other researchers tell you if they think you're pursuing a dangerous research direction, or if there are serious issues with the direction you're pursuing which you are neglecting.  Openness helps attract collaborators.  Openness helps gain prestige.  (I would argue that prestige is actually harmful because it's better to keep a low profile, but I guess prestige is useful for obtaining required funding.)  How else is openness helpful?

My suspicion is that those papers on Arxiv with 5 citations are mostly getting cited by people who already know the author, and the Arxiv publication isn't actually doing much to attract collaboration.  It feels to me like if our goal is to help researchers get feedback on their research direction or find collaborators, there are better ways to do this than encouraging them to publish their work.  So if we could put mechanisms in place to achieve those goals, that could remove much of the motivation for openness, which would be a good thing in my view.

Comment by hg00 on EA Relationship Status · 2020-09-21T06:57:12.581Z · LW · GW

Dating is a project that can easily suck up a lot of time and attention, and the benefits seem really dubious (I know someone who had their life ruined by a bad divorce).

I would be interested in the opposite question: why *would* an EA try to find someone to marry? I'm not trying to be snarky; I genuinely want to hear why, in case I should change my strategy. The only reason I can think of is if you're a patient longtermist and you think your kids are more likely to be EAs.

Comment by hg00 on Open & Welcome Thread - June 2020 · 2020-08-08T14:26:25.916Z · LW · GW

I spent some time reading about the situation in Venezuela, and from what I remember, a big reason people are stuck there is simply that the bureaucracy for processing passports is extremely slow/dysfunctional (and lack of a passport is a barrier to obtaining legal immigration status in any other country). So it might be worthwhile to renew your passport more regularly than is strictly necessary, so that you always have, say, at least a 5-year buffer on it in case we see the same kind of institutional dysfunction. (Much less effort than acquiring a second passport.)

Side note: I once talked to someone who became stuck in a country that he was not a citizen of because he allowed his passport to expire and couldn't travel back home to get it renewed. (He was from a small country. My guess is that the US offers passport services without needing to travel back home. But I could be wrong.)

Comment by hg00 on Open & Welcome Thread - July 2020 · 2020-08-08T06:46:03.010Z · LW · GW

Worth noting that we have at least one high-karma user who is liable to troll us with any privileges granted to high-karma users.

Comment by hg00 on Do Women Like Assholes? · 2020-06-24T05:44:41.557Z · LW · GW

I was always nice and considerate, and it didn’t work until I figured out how to filter for women who are themselves lovely and kind.

Does anyone have practical tips on finding lonely single women who are lovely and kind? I've always assumed that these were universally attractive attributes, and thus there would be much more competition for such women.

Comment by hg00 on Most reliable news sources? · 2020-06-06T22:01:08.035Z · LW · GW

The Financial Times, maybe FiveThirtyEight

hedonometer.org is a quick way to check if something big has happened

Comment by hg00 on Open & Welcome Thread - June 2020 · 2020-06-06T08:41:01.719Z · LW · GW

Permanent residency (as opposed to citizenship) is a budget option. For example, for Panama, I believe that if you're a citizen of one of the 50 nations on their "Friendly Nations" list, you can obtain permanent residency by depositing $10K in a Panamanian bank account. If I recall correctly, Paraguay's permanent residency has similar prerequisites ($5K deposit required) and is the easiest to maintain--you just need to visit the country every 3 years.

Comment by hg00 on The Chilling Effect of Confiscation · 2020-04-26T08:41:32.434Z · LW · GW

I think this is the best argument I've seen in favor of mask seizure / media misrepresentation on this:

https://www.reddit.com/r/slatestarcodex/comments/g5yh64/us_federal_government_seizing_ppe_to_what_end/fo6pyel/

Comment by hg00 on Judgment, Punishment, and the Information-Suppression Field · 2019-11-06T22:02:38.892Z · LW · GW

Downvoted because I don't want LW to be the kind of place where people casually make inflammatory political claims, in a way that seems to assume this is something we all know and agree with, without any supporting evidence.

Comment by hg00 on Open & Welcome Thread - October 2019 · 2019-10-14T04:31:45.833Z · LW · GW

The mental models in this post seem really generally useful: https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs

Comment by hg00 on Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists · 2019-09-27T03:44:39.351Z · LW · GW

Nice post. I think one thing which can be described in this framework is a kind of "distributed circular reasoning". The argument is made that "we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C", but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.

Comment by hg00 on Religion as Goodhart · 2019-07-12T02:09:15.070Z · LW · GW

The trouble is that tradition is undocumented code, so you aren't sure what is safe to change when circumstances change.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T19:48:29.957Z · LW · GW

Seems like a bad comparison, since, as an atheist, you don't accept the Bible's truth, so the things the preacher is saying are basically spam from your perspective. There's also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don't accept the premises he's arguing from.

Yes, and the preacher doesn't ask me about my premises before attempting to impose their values on me. Even if I share some or all of the preacher's premises, they're trying to force a strong conclusion about my moral character upon me and put my reputation at stake without giving me a chance to critically examine the logic with which that conclusion was derived or defend my reputation. Seems like a rather coercive conversation, doesn't it?

Does it seem to you that the preacher is engaging with me in good faith? Are they curious, or have they already written the bottom line?

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T17:08:10.068Z · LW · GW

I think I see a motte and bailey around what it means to be a good person. Notice at the beginning of the post, we've got statements like

Anita reassured Susan that her comments were not directed at her personally

...

they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault

And by the end, we've got statements like

it's quite hard to actually stop participating in racism... In societies with structural racism, ethical behavior requires skillfully and consciously reducing harm

...

almost every person's behavior is morally depraved a lot of the time

...

What if there are bad things that are your fault?

...

accept that you are irredeemably evil

Maybe Susan knows on some level that her colleagues aren't being completely honest when they claim to think she's not at fault. Maybe she correctly reads conversational subtext suggesting she is morally depraved, bad things are her fault, and she is irredeemably evil. This could explain why she reacts so negatively.

The parallel you draw to Calvinist doctrine is interesting. Presumably most of us would not take a Christian preacher very seriously if they told us we were morally depraved. As an atheist, when a preacher on the street tells me this, I see it as an unwelcome attempt to impose their values on me. I don't tell the preacher that I accept the fact that I'm irredeemably evil, because I don't want to let the preacher browbeat me into changing the way I live my life.

Now suppose you were accosted by such a preacher, and when you responded negatively, they proclaimed that your choice to defend yourself (by telling them about times when you worked to make the world a better place, say) was further evidence of your depravity. The preacher brings out their bible and points to a verse which they interpret to mean "it is a sin to defend yourself against street preachers". How do you react?

Seems like a bit of a Catch-22 eh? The preacher has created a situation where if I accept their conversational frame, I'm considered a terrible person if I don't do whatever they say. See numbers 13, 18 and 21 on this list.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T04:41:13.269Z · LW · GW

Maybe you're right, I haven't seen it used much in practice. Feel free to replace "Something like Nonviolent Communication" with "Advice for getting along with people" in that sentence.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T03:07:48.192Z · LW · GW

Agreed. Also, remember that conversations are not always about facts. Oftentimes they are about the relative status of the participants. Something like Nonviolent Communication might seem like tone policing, but through a status lens, it could be seen as a practice where you stop struggling for higher status with your conversation partner and instead treat them compassionately as an equal.

Comment by hg00 on Scholarship: How to Do It Efficiently · 2019-06-23T23:44:13.859Z · LW · GW

Just saw this Facebook group for getting papers. There's also this. And https://libkey.io/

Comment by hg00 on The Relationship Between Hierarchy and Wealth · 2019-01-26T11:19:58.983Z · LW · GW

Interesting post. I think it might be useful to examine the intuition that hierarchy is undesirable, though.

It seems like you might want to separate out equality in terms of power from equality in terms of welfare. Most of the benefits from hierarchy seem to come from power inequality (let the people who are the most knowledgeable and the most competent make important decisions). Most of the costs come in the form of welfare inequality (decision-makers co-opting resources for themselves). (The best argument against this frame would probably be something about the average person having self-actualization, freedom, and mastery of their destiny. This could be a sense in which power equality and welfare equality are the same thing.)

Robin Hanson's "vote values, bet beliefs" proposal is an intriguing way to get the benefits of inequality without the costs. You have the decisions being made by wealthy speculators, who have a strong financial incentive to leave the prediction market if they are less knowledgeable and competent than the people they're betting against. But all those brains get used in the service of achieving values that everyone in society gets equal input on. So you have a lot of power inequality but a lot of welfare equality. Maybe you could even address the self-actualization point by making that one of the values that people vote on somehow. (Also, it's not clear to me that voting on values rather than politicians actually represents a loss of freedom to master your destiny, etc.)
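
To make the mechanism concrete, here's a minimal toy sketch of how "vote values, bet beliefs" could work. The structure follows Hanson's idea, but every name and number below is an illustrative assumption, not part of his actual proposal:

```python
# Toy sketch of futarchy ("vote values, bet beliefs").
# Illustrative assumptions throughout; not Hanson's actual market design.

def elected_welfare_measure(votes):
    """Citizens vote on which welfare metric to maximize (values)."""
    return max(set(votes), key=votes.count)

def market_forecast(bets):
    """Speculators bet on expected welfare conditional on a policy (beliefs).
    Here the "market price" is just a stake-weighted average of forecasts."""
    total_stake = sum(stake for stake, _ in bets)
    return sum(stake * forecast for stake, forecast in bets) / total_stake

def choose_policy(conditional_markets):
    """Adopt whichever policy the markets predict will score highest
    on the elected welfare measure."""
    return max(conditional_markets,
               key=lambda policy: market_forecast(conditional_markets[policy]))

votes = ["median income", "median income", "life expectancy"]
measure = elected_welfare_measure(votes)

# (stake, forecasted welfare) pairs for each proposed policy, conditional on adoption.
conditional_markets = {
    "policy_A": [(100, 52.0), (50, 55.0)],
    "policy_B": [(200, 60.0), (30, 48.0)],
}

print(measure, choose_policy(conditional_markets))  # -> median income policy_B
```

The point of the toy version is just that the betting layer (who wins money) is decoupled from the voting layer (whose values get optimized), which is the power-inequality / welfare-equality split described above.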

This is also interesting.

Comment by hg00 on Reverse Doomsday Argument is hitting preppers hard · 2018-12-28T21:11:07.315Z · LW · GW

If you're willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:

https://medium.com/s/story/the-surprisingly-solid-mathematical-case-of-the-tin-foil-hat-gun-prepper-15fce7d10437

Comment by hg00 on “She Wanted It” · 2018-11-12T19:47:57.092Z · LW · GW

+1 for this. It's tremendously refreshing to see someone engage the opposing position on a controversial issue in good faith. I hope you don't regret writing it.

Would your model predict that if we surveyed fans of *50 Shades of Grey*, they have experienced traumatic abuse at a rate higher than the baseline? This seems like a surprising but testable prediction.

Personally, I think your story might be accurate for your peer group, but that your peer group is also highly non-representative of the population at large. There is very wide variation in female sexual preferences. For example, the stupidslutsclub subreddit was created for women to celebrate their enjoyment of degrading and often dubiously consensual sex. The conversation there looks nothing like the conversation about sex in the rationalist community, because they are communities for very different kinds of people. When I read the stupidslutsclub subreddit, I don't get the impression that the female posters are engaging in the sort of self-harm you describe. They're just women with some weird kinks.

Most PUA advice is optimized for picking up neurotypical women who go clubbing every weekend. Women in the rationalist community are far more likely to spend Friday evening reading Tumblr than getting turnt. We shouldn't be surprised if there are a lot of mating behaviors that women in one group enjoy and women in the other group find disturbing.

If I hire someone to commit a murder, I'm guilty of something bad. By creating an incentive for a bad thing to happen, I have caused a bad thing to happen; therefore I'm guilty. By the same logic, we could argue that if a woman systematically rejects non-abusive men in favor of abusive men, she is creating an incentive for men to be abusive, and is therefore guilty. (I'm not sure whether I agree with this argument. It's not immediately compatible with the "different strokes for different folks" point from previous paragraphs. But if feminists made it, I would find it more plausible that their desire is to stop a dynamic they consider harmful rather than to engage in anti-male sectarianism.)

Another point: your post doesn't account for replaceability effects. If a woman is systematically rejecting non-abusive men in favor of abusive men, and a guy presents himself as someone who's abusive enough to be attractive to her but less abusive than the average guy she would date, then you could argue that she gains utility by dating him. And if she has a kid, we'd probably prefer that she have it with someone who's pretending to be a jerk rather than someone who actually is one, since the kid only inherits jerk genes in the latter case. (BTW, I think the "systematically rejecting non-abusive men in favor of abusive men" scenario is an extreme case that is probably quite rare or nonexistent in the population, but it's simpler to think about.)

Once you account for replaceability, it could be that the most effective intervention for decreasing abuse is actually to help non-abusive guys be more attractive. If non-abusive guys are more attractive, some women who would have dated abusive guys will date them instead, so the volume of abuse will decrease. This could involve, for example, advice for how to be dominant in a sexy but non-abusive way.

Comment by hg00 on You Are Being Underpaid · 2018-05-05T06:48:04.401Z · LW · GW

https://kenrockwell.com/business/two-hour-rule.htm

Comment by hg00 on Open thread, October 2 - October 8, 2017 · 2017-10-18T01:35:28.803Z · LW · GW

This is sad.

Some of his old tweets are pretty dark:

I haven't talked to anyone face to face since 2015

https://twitter.com/Grognor/status/868640995856068609

I just want to remind everyone that this thread exists.

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-10-10T17:19:28.577Z · LW · GW

Here is another link

Comment by hg00 on Open thread, June. 19 - June. 25, 2017 · 2017-07-01T18:59:53.560Z · LW · GW

The sentiment is the same, but mine has an actual justification behind it. Care to attack the justification?

Comment by hg00 on Open thread, June. 19 - June. 25, 2017 · 2017-06-21T05:11:40.432Z · LW · GW

Happy thought of the day: If the simulation argument is correct, and you find that you are not a p-zombie, it means some super civilization thinks you're doing something important/interesting enough to expend the resources simulating you.

"I think therefore I am a player character."

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-06-01T06:50:08.755Z · LW · GW

I just saw this link, maybe you have thoughts?

(Let's move subsequent discussion over there)

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-27T06:20:22.624Z · LW · GW

This is a zero-sum game where every person working on x-risk is a technical person explicitly not working on advancing technologies (like AI) that will increase standards of living and help solve our global problems. If someone chooses to work on AI x-risk, they are probably qualified to work directly on the hard problems of AI itself. By not working on AI they are incrementally slowing down AI efforts, and therefore delaying access to technology that could save the world.

I wouldn't worry much about this, because the financial incentives to advance AI are much stronger than the ones to work on AI safety. AI safety work is just a blip compared to AI advancement work.

So here's a utilitarian calculation for you: assume that AGI will allow us to conquer disease and natural death, by virtue of the fact that true AGI removes scarcity of intellectual resources to work on these problems. It's a bit of a naïve view, but I'm asking you to assume it only for the sake of argument. Then every moment someone is working on x-risk problems instead, they are potentially delaying the advent of true AGI by some number of minutes, hours, or days. Multiply that by the number of people who die unnecessary deaths every day -- hundreds of thousands -- and that is the amount of blood on the hands of someone who is capable but chooses not to work on making the technology widely available as quickly as possible. Existential risk can only be justified as a more pressing concern if can be reasonably demonstrated to have a higher probability of causing more deaths than inaction.

You should really read Astronomical Waste before you try to make this kind of quasi-utilitarian argument about x-risk :)
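
To make the disagreement concrete, here's a rough back-of-envelope sketch of the kind of comparison Bostrom's argument invites. Every number below is an illustrative assumption for the sake of the example, not a figure from the paper:

```python
# Back-of-envelope comparison in the spirit of Bostrom's "Astronomical Waste".
# Every number here is an illustrative assumption, not a figure from the paper.

deaths_per_year_now = 60e6        # order of magnitude of worldwide deaths per year
future_lives_at_stake = 1e30      # assumed potential future lives if we avoid extinction

# Option 1: rush AGI, arriving one year sooner at the cost of some safety work.
lives_saved_by_speed = deaths_per_year_now * 1  # one year of deaths averted

# Option 2: spend that effort on safety, shaving a sliver off extinction risk.
delta_extinction_prob = 1e-6      # assumed tiny reduction in extinction probability
expected_lives_saved_by_safety = delta_extinction_prob * future_lives_at_stake

print(f"speed-up: ~{lives_saved_by_speed:.0e} lives saved")
print(f"safety:   ~{expected_lives_saved_by_safety:.0e} expected lives saved")
# Under these assumptions the safety option wins by ~16 orders of magnitude,
# which is why the utilitarian conclusion tends to be "maximize the probability
# of an okay outcome" rather than "get there as fast as possible".
```

Vary the assumed numbers however you like; unless you think the accessible future is tiny, or that safety work buys literally zero risk reduction, the x-risk term dominates.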

Show me the code. Demonstrate for me (in a toy but realistic environment) an AI/proto-AGI that turns evil, built using the architectures that are the current focus of research, and give me reasonable technical justification for why we should expect the same properties in larger, more complex environments.

What do you think of this example?

https://www.facebook.com/jesse.newton.37/posts/776177951574

(I'm sure there are better examples to be found, I'm just trying to figure out what you are looking for.)

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-25T03:16:22.974Z · LW · GW

I don't like the precautionary principle either, but reversed stupidity is not intelligence.

"Do you think there's a reason why we should privilege your position" was probably a bad question to ask because people can argue forever about which side "should" have the burden of proof without actually making progress resolving a disagreement. A statement like

The burden of proof therefore belongs to those who propose restrictive measures.

...is not one that we can demonstrate to be true or false through some experiment or deductive argument. When a bunch of transhumanists get together to talk about the precautionary principle, it's unsurprising that they'll come up with something that embeds the opposite set of values.

BTW, what specific restrictive measures do you see the AI safety folks proposing? From Scott Alexander's AI Researchers on AI Risk:

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

(Control-f 'controversy' in the essay to get more thoughts along the same lines)

Like Max More, I'm a transhumanist. But I'm also a utilitarian. If you are too, maybe we can have a productive discussion where we work from utilitarianism as a shared premise.

As a utilitarian, I find Nick Bostrom's argument for existential risk minimization pretty compelling. Do you have thoughts?

Note Bostrom doesn't necessarily think we should be biased towards slow tech progress:

...instead of thinking about sustainability as is commonly known, as this static concept that has a stable state that we should try to approximate, where we use up no more resources than are regenerated by the natural environment, we need, I think, to think about sustainability in dynamical terms, where instead of reaching a state, we try to enter and stay on a trajectory that is indefinitely sustainable in the sense that we can contain it to travel on that trajectory indefinitely and it leads in a good direction.

http://www.stafforini.com/blog/bostrom/

So speaking from a utilitarian perspective, I don't see good reasons to have a strong pro-tech prior or a strong anti-tech prior. Tech has brought us both disease reduction and nuclear weapons.

Predicting the future is unsolved in the general case. Nevertheless, I agree with Max More that we should do the best we can, and in fact one of the most serious attempts I know of to forecast AI has come out of the AI safety community: http://aiimpacts.org/ Do you know of any comparable effort being made by people unconcerned with AI safety?

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-24T05:47:49.672Z · LW · GW

You describe the arguments of AI safety advocates as being handwavey and lacking rigor. Do you believe you have arguments for why AI safety should not be a concern that are more rigorous? If not, do you think there's a reason why we should privilege your position?

Most of the arguments I've heard from you are arguments that AI is going to progress slowly. I haven't heard arguments from AI safety advocates that AI will progress quickly, so I'm not sure there is a disagreement. I've heard arguments that AI may progress quickly, but a few anecdotes about instances of slow progress strike me as a pretty handwavey/non-rigorous response. I could just as easily provide anecdotes of unexpectedly quick progress (e.g. AIs able to beat humans at Go arrived ~10 years ahead of schedule). Note that the claim you are going for is a substantially stronger one than the one I hear from AI safety folks: you're saying that we can be confident that things will play out in one particular way, and AI safety people say that we should be prepared for the possibility that things play out in a variety of different ways.

FWIW, I'm pretty sure Bostrom's thinking on AI predates Less Wrong by quite a bit.

Comment by hg00 on Open thread, May 15 - May 21, 2017 · 2017-05-23T20:03:13.106Z · LW · GW

A lot more folks will be using it soon...

?

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-23T06:35:23.830Z · LW · GW

I'm sure the first pocket calculator was quite difficult to make work "in production", but nonetheless once created, it vastly outperformed humans in arithmetic tasks. Are you willing to bet our future on the idea that AI development won't have similar discontinuities?

Also, did you read Superintelligence?

Comment by hg00 on Hidden universal expansion: stopping runaways · 2017-05-16T00:13:20.423Z · LW · GW

That applies much more strongly to your comment than mine. I linked you to an essay's worth of supporting arguments, you just offered a flat assertion.

Comment by hg00 on Hidden universal expansion: stopping runaways · 2017-05-14T06:47:42.665Z · LW · GW

Sorry, I'm gonna be that guy. http://lesswrong.com/lw/vx/failure_by_analogy/

Comment by hg00 on Thoughts on civilization collapse · 2017-05-13T23:29:22.117Z · LW · GW

Hm. I think there is a quote in this podcast about how prisoners form prison gangs as a bulwark against anarchy: http://www.econtalk.org/archives/2015/03/david_skarbek_o.html

Comment by hg00 on How I'd Introduce LessWrong to an Outsider · 2017-05-08T05:35:27.953Z · LW · GW

Earlier today, it occurred to me that the rationalist community might be accurately characterized as "a support group for high IQ people". This seems concordant with your observations.

Comment by hg00 on Thoughts on civilization collapse · 2017-05-08T05:15:25.101Z · LW · GW

Another data point: the existence of prison gangs (typically organized along racial lines).

Comment by hg00 on OpenAI makes humanity less safe · 2017-04-04T05:12:05.429Z · LW · GW

Thanks for saying what (I assume) a lot of people were thinking privately.

I think the problem is that Elon Musk is an entrepreneur not a philosopher, so he has a bias for action, "fail fast" mentality, etc. And he's too high-status for people to feel comfortable pointing out when he's making a mistake (as in the case of OpenAI). (I'm generally an admirer of Mr. Musk, but I am really worried that the intuitions he's honed through entrepreneurship will turn out to be completely wrong for AI safety.)

Comment by hg00 on Rationality Considered Harmful (In Politics) · 2017-01-11T04:12:39.185Z · LW · GW

Rationality in politics might be harmful for you as an individual, but vital to humans collectively. Spreading the attitude "rationality is harmful in politics" could be very bad. Politics is too important to cede to the emotionalists.

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-06T22:10:23.912Z · LW · GW

OK. I thought you were Eugene because he's been creating sockpuppets to post low effort SJ content and discredit SJ ever since downvoting was disabled. You know, mean girls feminism type stuff that treats ideas like clothes ("outdated", "makes me laugh", "someone somewhere might think it's ugly", etc.)

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-06T02:51:13.797Z · LW · GW

Eugene?

Comment by hg00 on A different argument against Universal Basic Income · 2016-12-31T10:02:15.848Z · LW · GW

This is similar to the current situation, where being unemployed means you have a lot of time to post on the internet and influence elections.

Comment by hg00 on Off-switch for CRISPR-Cas9 gene editing system discovered · 2016-12-31T08:15:41.422Z · LW · GW

"These inhibitors provide a mechanism to block nefarious or out-of-control CRISPR applications, making it safer to explore all the ways this technology can be used to help people."

I'm Dr Nefarious and I'm using CRISPR for nefarious applications. How, concretely, will I be blocked?

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T01:27:12.622Z · LW · GW

Complaining about people who cause problems is an undersupplied public service in our community. I appreciate Elo's willingness to overcome the bystander effect. At the same time, gossiping about people on the internet should only be done with great care.

My understanding is that in the relationship between Katie and Andromeda, Andromeda wears the pants, and letting Andromeda wear the pants sucks up time and energy. Using rich-person parenting styles has costs if you're poor.

I'm generally sympathetic to parents who complain about unsolicited childrearing advice. But lots of people in the community have been helping Katie with Andromeda. This is admirable, and I think if these people have a hand in supporting a child, they deserve a voice in how it is raised.

Comment by hg00 on Open Thread: how do you look for information? · 2016-12-14T03:14:51.078Z · LW · GW

Apparently there are services that help with this.