Posts

Thoughts on hacking aromanticism? 2016-06-02T11:52:55.241Z · score: 10 (19 votes)
Is it sensible for an ambitious nonsmoker to use e-cigarettes? 2015-11-24T22:48:52.998Z · score: 3 (10 votes)
Find someone to talk to thread 2015-09-26T22:24:32.676Z · score: 22 (23 votes)

Comments

Comment by hg00 on Religion as Goodhart · 2019-07-12T02:09:15.070Z · score: 3 (3 votes) · LW · GW

The trouble is that tradition is undocumented code, so you aren't sure what is safe to change when circumstances change.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T19:48:29.957Z · score: 2 (2 votes) · LW · GW
Seems like a bad comparison, since, as an atheist, you don't accept the Bible's truth, so the things the preacher is saying are basically spam from your perspective. There's also no need to feel self-conscious or defend your good-person-ness to this preacher, as you don't accept the premises he's arguing from.

Yes, and the preacher doesn't ask me about my premises before attempting to impose their values on me. Even if I share some or all of the preacher's premises, they're trying to force a strong conclusion about my moral character upon me and put my reputation at stake without giving me a chance to critically examine the logic with which that conclusion was derived or defend my reputation. Seems like a rather coercive conversation, doesn't it?

Does it seem to you that the preacher is engaging with me in good faith? Are they curious, or have they already written the bottom line?

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-05T17:08:10.068Z · score: 3 (3 votes) · LW · GW

I think I see a motte and bailey around what it means to be a good person. Notice at the beginning of the post, we've got statements like

Anita reassured Susan that her comments were not directed at her personally

...

they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault

And by the end, we've got statements like

it's quite hard to actually stop participating in racism... In societies with structural racism, ethical behavior requires skillfully and consciously reducing harm

...

almost every person's behavior is morally depraved a lot of the time

...

What if there are bad things that are your fault?

...

accept that you are irredeemably evil

Maybe Susan knows on some level that her colleagues aren't being completely honest when they claim to think she's not at fault. Maybe she correctly reads conversational subtext suggesting she is morally depraved, bad things are her fault, and she is irredeemably evil. This could explain why she reacts so negatively.

The parallel you draw to Calvinist doctrine is interesting. Presumably most of us would not take a Christian preacher very seriously if they told us we were morally depraved. As an atheist, when a preacher on the street tells me this, I see it as an unwelcome attempt to impose their values on me. I don't tell the preacher that I accept the fact that I'm irredeemably evil, because I don't want to let the preacher browbeat me into changing the way I live my life.

Now suppose you were accosted by such a preacher, and when you responded negatively, they proclaimed that your choice to defend yourself (by telling them about times when you worked to make the world a better place, say) was further evidence of your depravity. The preacher brings out their Bible and points to a verse which they interpret to mean "it is a sin to defend yourself against street preachers". How do you react?

Seems like a bit of a Catch-22, eh? The preacher has created a situation where, if I accept their conversational frame, I'm a terrible person unless I do whatever they say. See numbers 13, 18, and 21 on this list.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T04:41:13.269Z · score: 1 (1 votes) · LW · GW

Maybe you're right, I haven't seen it used much in practice. Feel free to replace "Something like Nonviolent Communication" with "Advice for getting along with people" in that sentence.

Comment by hg00 on Self-consciousness wants to make everything about itself · 2019-07-04T03:07:48.192Z · score: 2 (2 votes) · LW · GW

Agreed. Also, remember that conversations are not always about facts. Oftentimes they are about the relative status of the participants. Something like Nonviolent Communication might seem like tone policing, but through a status lens, it could be seen as a practice where you stop struggling for higher status with your conversation partner and instead treat them compassionately as an equal.

Comment by hg00 on Scholarship: How to Do It Efficiently · 2019-06-23T23:44:13.859Z · score: 3 (2 votes) · LW · GW

Just saw this Facebook group for getting papers. There's also this.

Comment by hg00 on The Relationship Between Hierarchy and Wealth · 2019-01-26T11:19:58.983Z · score: 3 (2 votes) · LW · GW

Interesting post. I think it might be useful to examine the intuition that hierarchy is undesirable, though.

It seems like you might want to separate out equality in terms of power from equality in terms of welfare. Most of the benefits from hierarchy seem to be from power inequality (let the people who are the most knowledgeable and the most competent make important decisions). Most of the costs come in the form of welfare inequality (decision-makers co-opting resources for themselves). (The best argument against this frame would probably be something about the average person having self-actualization, freedom, and mastery of their destiny. This could be a sense in which power equality and welfare equality are the same thing.)

Robin Hanson's "vote values, bet beliefs" proposal is an intriguing way to get the benefits of power inequality without the costs of welfare inequality. You have the decisions being made by wealthy speculators, who have a strong financial incentive to leave the prediction market if they are less knowledgeable and competent than the people they're betting against. But all those brains get used in the service of achieving values that everyone in society gets equal input on. So you have a lot of power inequality but a lot of welfare equality. Maybe you could even address the self-actualization point by making that one of the values that people vote on somehow. (Also, it's not clear to me that voting on values rather than politicians actually represents loss of freedom to master your destiny etc.)
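As a toy illustration of that self-selection incentive (a sketch of my own, not anything from Hanson's actual proposal): if the "market price" is just the average of everyone's probability estimates and each bettor trades one share against it, the well-informed bettors profit on average while the poorly-informed ones slowly lose money, which is exactly the pressure pushing them out of the market.

    import random

    def simulate(rounds=200_000, n_informed=5, n_uninformed=5,
                 informed_noise=0.05, uninformed_noise=0.4):
        """Crude model: each bettor estimates the probability of a binary
        event, the market price is the mean estimate, and each bettor
        buys or sells one share against that price."""
        profits = [0.0] * (n_informed + n_uninformed)
        for _ in range(rounds):
            p = random.random()                      # true probability this round
            outcome = 1.0 if random.random() < p else 0.0
            estimates = []
            for i in range(n_informed + n_uninformed):
                noise = informed_noise if i < n_informed else uninformed_noise
                estimates.append(min(1.0, max(0.0, random.gauss(p, noise))))
            price = sum(estimates) / len(estimates)  # stand-in for the market price
            for i, est in enumerate(estimates):
                position = 1 if est > price else -1  # buy if it looks cheap, else sell
                profits[i] += position * (outcome - price)
        informed_avg = sum(profits[:n_informed]) / (n_informed * rounds)
        uninformed_avg = sum(profits[n_informed:]) / (n_uninformed * rounds)
        return informed_avg, uninformed_avg

    if __name__ == "__main__":
        informed, uninformed = simulate()
        print(f"avg profit per round, informed bettors:   {informed:+.4f}")
        print(f"avg profit per round, uninformed bettors: {uninformed:+.4f}")

In runs of this toy model the informed group's average profit per round comes out positive and the uninformed group's comes out negative, so the bettors who stay are the ones whose information is actually improving the price.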

This is also interesting.

Comment by hg00 on Reverse Doomsday Argument is hitting preppers hard · 2018-12-28T21:11:07.315Z · score: 2 (2 votes) · LW · GW

If you're willing to go back more than 70 years, in the US at least, the math suggests prepping is a good strategy:

https://medium.com/s/story/the-surprisingly-solid-mathematical-case-of-the-tin-foil-hat-gun-prepper-15fce7d10437

Comment by hg00 on “She Wanted It” · 2018-11-12T19:47:57.092Z · score: 8 (8 votes) · LW · GW

+1 for this. It's tremendously refreshing to see someone engage the opposing position on a controversial issue in good faith. I hope you don't regret writing it.

Would your model predict that if we surveyed fans of *50 Shades of Grey*, we would find they have experienced traumatic abuse at a higher rate than the baseline? This seems like a surprising but testable prediction.

Personally, I think your story might be accurate for your peer group, but that your peer group is also highly non-representative of the population at large. There is very wide variation in female sexual preferences. For example, the stupidslutsclub subreddit was created for women to celebrate their enjoyment of degrading and often dubiously consensual sex. The conversation there looks nothing like the conversation about sex in the rationalist community, because they are communities for very different kinds of people. When I read the stupidslutsclub subreddit, I don't get the impression that the female posters are engaging in the sort of self-harm you describe. They're just women with some weird kinks.

Most PUA advice is optimized for picking up neurotypical women who go clubbing every weekend. Women in the rationalist community are far more likely to spend Friday evening reading Tumblr than getting turnt.
We shouldn't be surprised if there are a lot of mating behaviors that women in one group enjoy and women in the other group find disturbing.

If I hire someone to commit a murder, I'm guilty of something bad. By creating an incentive for a bad thing to happen, I have caused a bad thing to happen, therefore I'm guilty. By the same logic, we could argue that if a woman systematically rejects non-abusive men in favor of abusive men, she is creating an incentive for men to be abusive, and is therefore guilty. (I'm not sure whether I agree with this argument. It's not immediately compatible with the "different strokes for different folks" point from previous paragraphs. But if feminists made it, I would find it more plausible that their desire is to stop a dynamic they consider harmful, as opposed to engaging in anti-male sectarianism.)

Another point: Your post doesn't account for replaceability effects. If a woman is systematically rejecting non-abusive men in favor of abusive men, and a guy presents himself as someone who's abusive enough to be attractive to her but less abusive than the average guy she would date, then you could argue that she gains utility through dating him. And if she has a kid, we'd probably rather she have a kid with someone who's pretending to be a jerk than with someone who actually is one, since the kid only inherits jerk genes in the latter case. (BTW, I think the "systematically rejecting non-abusive men in favor of abusive men" scenario is an extreme case that is probably quite rare/nonexistent in the population, but it's simpler to think about.)

Once you account for replaceability, it could be that the most effective intervention for decreasing abuse is actually to help non-abusive guys be more attractive. If non-abusive guys are more attractive, some women who would have dated abusive guys will date them instead, so the volume of abuse will decrease. This could involve, for example, advice for how to be dominant in a sexy but non-abusive way.

Comment by hg00 on You Are Being Underpaid · 2018-05-05T06:48:04.401Z · score: 1 (1 votes) · LW · GW

https://kenrockwell.com/business/two-hour-rule.htm

Comment by hg00 on Open thread, October 2 - October 8, 2017 · 2017-10-18T01:35:28.803Z · score: 0 (0 votes) · LW · GW

This is sad.

Some of his old tweets are pretty dark:

I haven't talked to anyone face to face since 2015

https://twitter.com/Grognor/status/868640995856068609

I just want to remind everyone that this thread exists.

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-10-10T17:19:28.577Z · score: 0 (0 votes) · LW · GW

Here is another link.

Comment by hg00 on Open thread, June. 19 - June. 25, 2017 · 2017-07-01T18:59:53.560Z · score: 0 (0 votes) · LW · GW

The sentiment is the same, but mine has an actual justification behind it. Care to attack the justification?

Comment by hg00 on Open thread, June. 19 - June. 25, 2017 · 2017-06-21T05:11:40.432Z · score: 1 (1 votes) · LW · GW

Happy thought of the day: If the simulation argument is correct, and you find that you are not a p-zombie, it means some super civilization thinks you're doing something important/interesting enough to expend the resources simulating you.

"I think therefore I am a player character."

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-06-01T06:50:08.755Z · score: 0 (0 votes) · LW · GW

I just saw this link, maybe you have thoughts?

(Let's move subsequent discussion over there)

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-27T06:20:22.624Z · score: 0 (0 votes) · LW · GW

This is a zero-sum game where every person working on x-risk is a technical person explicitly not working on advancing technologies (like AI) that will increase standards of living and help solve our global problems. If someone chooses to work on AI x-risk, they are probably qualified to work directly on the hard problems of AI itself. By not working on AI they are incrementally slowing down AI efforts, and therefore delaying access to technology that could save the world.

I wouldn't worry much about this, because the financial incentives to advance AI are much stronger than the ones to work on AI safety. AI safety work is just a blip compared to AI advancement work.

So here's a utilitarian calculation for you: assume that AGI will allow us to conquer disease and natural death, by virtue of the fact that true AGI removes scarcity of intellectual resources to work on these problems. It's a bit of a naïve view, but I'm asking you to assume it only for the sake of argument. Then every moment someone is working on x-risk problems instead, they are potentially delaying the advent of true AGI by some number of minutes, hours, or days. Multiply that by the number of people who die unnecessary deaths every day -- hundreds of thousands -- and that is the amount of blood on the hands of someone who is capable but chooses not to work on making the technology widely available as quickly as possible. Existential risk can only be justified as a more pressing concern if it can be reasonably demonstrated to have a higher probability of causing more deaths than inaction.

You should really read Astronomical Waste before you try to make this kind of quasi-utilitarian argument about x-risk :)

Show me the code. Demonstrate for me (in a toy but realistic environment) an AI/proto-AGI that turns evil, built using the architectures that are the current focus of research, and give me reasonable technical justification for why we should expect the same properties in larger, more complex environments.

What do you think of this example?

https://www.facebook.com/jesse.newton.37/posts/776177951574

(I'm sure there are better examples to be found, I'm just trying to figure out what you are looking for.)

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-25T03:16:22.974Z · score: 0 (0 votes) · LW · GW

I don't like the precautionary principle either, but reversed stupidity is not intelligence.

"Do you think there's a reason why we should privilege your position" was probably a bad question to ask because people can argue forever about which side "should" have the burden of proof without actually making progress resolving a disagreement. A statement like

The burden of proof therefore belongs to those who propose restrictive measures.

...is not one that we can demonstrate to be true or false through some experiment or deductive argument. When a bunch of transhumanists get together to talk about the precautionary principle, it's unsurprising that they'll come up with something that embeds the opposite set of values.

BTW, what specific restrictive measures do you see the AI safety folks proposing? From Scott Alexander's AI Researchers on AI Risk:

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

(Control-f 'controversy' in the essay to get more thoughts along the same lines)

Like Max More, I'm a transhumanist. But I'm also a utilitarian. If you are too, maybe we can have a productive discussion where we work from utilitarianism as a shared premise.

As a utilitarian, I find Nick Bostrom's argument for existential risk minimization pretty compelling. Do you have thoughts?

Note Bostrom doesn't necessarily think we should be biased towards slow tech progress:

...instead of thinking about sustainability as is commonly known, as this static concept that has a stable state that we should try to approximate, where we use up no more resources than are regenerated by the natural environment, we need, I think, to think about sustainability in dynamical terms, where instead of reaching a state, we try to enter and stay on a trajectory that is indefinitely sustainable in the sense that we can continue to travel on that trajectory indefinitely and it leads in a good direction.

http://www.stafforini.com/blog/bostrom/

So speaking from a utilitarian perspective, I don't see good reasons to have a strong pro-tech prior or a strong anti-tech prior. Tech has brought us both disease reduction and nuclear weapons.

Predicting the future is unsolved in the general case. Nevertheless, I agree with Max More that we should do the best we can, and in fact one of the most serious attempts I know of to forecast AI has come out of the AI safety community (http://aiimpacts.org/). Do you know of any comparable effort being made by people unconcerned with AI safety?

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-24T05:47:49.672Z · score: 1 (1 votes) · LW · GW

You describe the arguments of AI safety advocates as being handwavey and lacking rigor. Do you believe you have arguments for why AI safety should not be a concern that are more rigorous? If not, do you think there's a reason why we should privilege your position?

Most of the arguments I've heard from you are arguments that AI is going to progress slowly. I haven't heard arguments from AI safety advocates that AI will progress quickly, so I'm not sure there is a disagreement. I've heard arguments that AI may progress quickly, but a few anecdotes about instances of slow progress strike me as a pretty handwavey/non-rigorous response. I could just as easily provide anecdotes of unexpectedly quick progress (e.g. AIs able to beat humans at Go arrived ~10 years ahead of schedule). Note that the claim you are going for is a substantially stronger one than the one I hear from AI safety folks: you're saying that we can be confident that things will play out in one particular way, and AI safety people say that we should be prepared for the possibility that things play out in a variety of different ways.

FWIW, I'm pretty sure Bostrom's thinking on AI predates Less Wrong by quite a bit.

Comment by hg00 on Open thread, May 15 - May 21, 2017 · 2017-05-23T20:03:13.106Z · score: 0 (0 votes) · LW · GW

A lot more folks will be using it soon...

?

Comment by hg00 on Reaching out to people with the problems of friendly AI · 2017-05-23T06:35:23.830Z · score: 0 (0 votes) · LW · GW

I'm sure the first pocket calculator was quite difficult to make work "in production", but nonetheless once created, it vastly outperformed humans in arithmetic tasks. Are you willing to bet our future on the idea that AI development won't have similar discontinuities?

Also, did you read Superintelligence?

Comment by hg00 on Hidden universal expansion: stopping runaways · 2017-05-16T00:13:20.423Z · score: 0 (0 votes) · LW · GW

That applies much more strongly to your comment than mine. I linked you to an essay's worth of supporting arguments, you just offered a flat assertion.

Comment by hg00 on Hidden universal expansion: stopping runaways · 2017-05-14T06:47:42.665Z · score: 3 (3 votes) · LW · GW

Sorry, I'm gonna be that guy. http://lesswrong.com/lw/vx/failure_by_analogy/

Comment by hg00 on Thoughts on civilization collapse · 2017-05-13T23:29:22.117Z · score: 0 (0 votes) · LW · GW

Hm. I think there is a quote in this podcast about how prisoners form prison gangs as a bulwark against anarchy: http://www.econtalk.org/archives/2015/03/david_skarbek_o.html

Comment by hg00 on How I'd Introduce LessWrong to an Outsider · 2017-05-08T05:35:27.953Z · score: 1 (1 votes) · LW · GW

Earlier today, it occurred to me that the rationalist community might be accurately characterized as "a support group for high IQ people". This seems concordant with your observations.

Comment by hg00 on Thoughts on civilization collapse · 2017-05-08T05:15:25.101Z · score: 1 (1 votes) · LW · GW

Another data point: the existence of prison gangs (typically organized along racial lines).

Comment by hg00 on OpenAI makes humanity less safe · 2017-04-04T05:12:05.429Z · score: 10 (11 votes) · LW · GW

Thanks for saying what (I assume) a lot of people were thinking privately.

I think the problem is that Elon Musk is an entrepreneur, not a philosopher, so he has a bias for action, a "fail fast" mentality, etc. And he's too high-status for people to feel comfortable pointing out when he's making a mistake (as in the case of OpenAI). (I'm generally an admirer of Mr. Musk, but I am really worried that the intuitions he's honed through entrepreneurship will turn out to be completely wrong for AI safety.)

Comment by hg00 on Rationality Considered Harmful (In Politics) · 2017-01-11T04:12:39.185Z · score: 1 (1 votes) · LW · GW

Rationality in politics might be harmful for you as an individual, but vital to humans collectively. Spreading the attitude "rationality is harmful in politics" could be very bad. Politics is too important to cede to the emotionalists.

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-06T22:10:23.912Z · score: 0 (0 votes) · LW · GW

OK. I thought you were Eugene because he's been creating sockpuppets to post low effort SJ content and discredit SJ ever since downvoting was disabled. You know, mean girls feminism type stuff that treats ideas like clothes ("outdated", "makes me laugh", "someone somewhere might think it's ugly", etc.)

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2017-01-06T02:51:13.797Z · score: 0 (0 votes) · LW · GW

Eugene?

Comment by hg00 on A different argument against Universal Basic Income · 2016-12-31T10:02:15.848Z · score: 1 (1 votes) · LW · GW

This is similar to the current situation, where being unemployed means you have a lot of time to post on the internet and influence elections.

Comment by hg00 on Off-switch for CRISPR-Cas9 gene editing system discovered · 2016-12-31T08:15:41.422Z · score: 2 (2 votes) · LW · GW

"These inhibitors provide a mechanism to block nefarious or out-of-control CRISPR applications, making it safer to explore all the ways this technology can be used to help people."

I'm Dr Nefarious and I'm using CRISPR for nefarious applications. How, concretely, will I be blocked?

Comment by hg00 on A quick note on weirdness points and Solstices [And also random other Solstice discussion] · 2016-12-23T01:27:12.622Z · score: 19 (12 votes) · LW · GW

Complaining about people who cause problems is an undersupplied public service in our community. I appreciate Elo's willingness to overcome the bystander effect. At the same time, gossiping about people on the internet should only be done with great care.

My understanding is, in the relationship between Katie and Andromeda, Andromeda wears the pants. And letting Andromeda wear the pants sucks up time and energy. Using rich person parenting styles has costs if you're poor.

I'm generally sympathetic to parents who complain about unsolicited childrearing advice. But lots of people in the community have been helping Katie with Andromeda. This is admirable, and I think if these people have a hand in supporting a child, they deserve a voice in how it is raised.

Comment by hg00 on Open Thread: how do you look for information? · 2016-12-14T03:14:51.078Z · score: 0 (0 votes) · LW · GW

Apparently there are services that help with this.

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-25T10:05:38.840Z · score: 1 (1 votes) · LW · GW

Well, you could see the issues America is facing as being a long-term effect of importing slaves from Africa and liberalization of immigration laws in the 1960s. But racial tension is not the only thing I'm worried about.

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-24T09:23:13.552Z · score: 1 (1 votes) · LW · GW

Yeah, my current view is that long-term risks of high immigration outweigh near-term benefits.

Comment by hg00 on Why Power Women Are Micro-Dosing LSD at Work · 2016-11-22T07:47:38.183Z · score: 1 (1 votes) · LW · GW

The only thing dumber than long-term possession of a Schedule I drug is announcing that you possess it on your website ;)

Comment by hg00 on Mismatched Vocabularies · 2016-11-22T07:42:05.810Z · score: 1 (1 votes) · LW · GW

You've discovered anti-intellectualism. Now you just need to figure out what humanity should do about it.

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-19T22:19:31.878Z · score: 1 (1 votes) · LW · GW

Lots of "politically incorrect" claims are true, and this matters for policy. E.g. for immigration.

Comment by hg00 on Rationality Heuristic for Bias Detection: Updating Towards the Net Weight of Evidence · 2016-11-18T08:51:36.923Z · score: 2 (2 votes) · LW · GW

Interesting post.

if P(B|A) > 1

I don't see how this can ever be greater than 1.
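Spelling that out: for any events A and B with P(A) > 0,

    $$P(B \mid A) \;=\; \frac{P(A \cap B)}{P(A)} \;\le\; \frac{P(A)}{P(A)} \;=\; 1,$$

since the event "A and B" is contained in A. So the condition as written can never hold.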

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-14T07:42:27.232Z · score: 0 (2 votes) · LW · GW

The concrete accusation against Yudkowsky is apparently that he made several posts which mentioned his position on Trump before making the posts which laid out the reasoning behind his position. If that is a vice, it seems like a minor one.

The way I remember it, he "mentioned his position" in a way that came across as mudslinging. It certainly did not seem like his goal was to "have a conversation where people who disagree about Trump can talk with each other productively". I think you're holding me to a higher standard than you're holding him.

But maybe you're right that I should have written my comment more carefully.

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-11T22:01:46.242Z · score: 5 (15 votes) · LW · GW

Some of Yudkowsky's arguments were good, but he was still an embarrassment to the movement. If I recall correctly, he posted maybe half a dozen Facebook statuses to the effect of "OMG Trump is THE WORST" before offering any sort of argument. Of course, this plays into the idea that people who oppose Trump are bullies who care more about optics than substance.

And the evidence he offered us was filtered evidence. He mentioned that open letter, but he didn't mention this list of conservative intellectuals who endorse Trump or this list of generals.

Comment by hg00 on Yudkowsky vs Trump: the nuclear showdown. · 2016-11-11T18:32:51.352Z · score: 10 (16 votes) · LW · GW

I'm a right winger and I totally disagree with this comment.

For me, conservatism is about willingness to face up to the hard facts about reality. I'm just as cosmopolitan in my values as liberals are--but I'm not naive about how to go about achieving them. My goal is to actually help people, not show all my friends how progressive I am.

In practice I think US stability is extremely important for the entire world. Which means I'm against giving impulsive people the nuclear codes, and I'm also against Hillary Clinton's "invade the world, invite the world" foreign policy.

Also: I don't like Yudkowsky, but I would like him and the people in his circle to take criticism seriously, so... could we maybe start spelling his name correctly? It ends in a y. (I think Yudkowsky himself is probably a lost cause, but there are a lot of smart, rational people in his thrall who should not be. And many of them will take the time to read and seriously evaluate critical arguments if they're well-presented.)

Comment by hg00 on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-17T22:25:59.535Z · score: 2 (2 votes) · LW · GW

I'm familiar with lots of the things Eliezer Yudkowsky has said about AI. That doesn't mean I agree with them. Less Wrong has an unfortunate culture of not discussing topics once the Great Teacher has made a pronouncement.

Plus, I don't think philosophytorres' claim is obvious even if you accept Yudkowsky's arguments.

Fragility of value thesis. Getting a goal system 90% right does not give you 90% of the value, any more than correctly dialing 9 out of 10 digits of my phone number will connect you to somebody who’s 90% similar to Eliezer Yudkowsky. There are multiple dimensions for which eliminating that dimension of value would eliminate almost all value from the future. For example an alien species which shared almost all of human value except that their parameter setting for “boredom” was much lower, might devote most of their computational power to replaying a single peak, optimal experience over and over again with slightly different pixel colors (or the equivalent thereof). Friendly AI is more like a satisficing threshold than something where we’re trying to eke out successive 10% improvements. See: Yudkowsky (2009, 2011).

From here.

OK, so do my best friend's values constitute a 90% match? A 99.9% match? Do they pass the satisficing threshold?

Also, Eliezer's boredom-free scenario sounds like a pretty good outcome to me, all things considered. If an AGI modified me so I could no longer get bored, and then replayed a peak experience for me for millions of years, I'd consider that a positive singularity. Certainly not a "catastrophe" in the sense that an earthquake is a catastrophe. (Well, perhaps a catastrophe of opportunity cost, but basically every outcome is a catastrophe of opportunity cost on a long enough timescale, so that's not a very interesting objection.) The utility function is not up for grabs--I am the expert on my values, not the Great Teacher.

Here's the abstract from his 2011 paper:

A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome,” despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

It sounds to me like Eliezer's point is more about the complexity of values, not the need to prevent slight misalignment. In other words, Eliezer seems to argue here that a naively programmed definition of "positive value" constitutes a gross misalignment, NOT that a slight misalignment constitutes a catastrophic outcome.

Please think critically.

Comment by hg00 on Agential Risks: A Topic that Almost No One is Talking About · 2016-10-17T09:03:50.493Z · score: 2 (2 votes) · LW · GW

Good post!

While not all sociopaths are violent, a disproportionate number of criminals and dictators have (or very likely have) had the condition.

Luckily sociopaths tend to have poor impulse control.

It follows that some radical environmentalists in the future could attempt to use technology to cause human extinction, thereby “solving” the environmental crisis.

Reminds me of Derrick Jensen. He doesn't talk about human extinction, but he does talk about bringing down civilization.

Fortunately, this version of negative utilitarianism is not a position that many non-academics tend to hold, and even among academic philosophers it is not especially widespread.

For details see http://www.amirrorclear.net/academic/ideas/negative-utilitarianism/

This is worrisome because recent research shows that even slight misalignments between our values and those motivating a superintelligence could have existentially catastrophic consequences.

Citation? This is commonly asserted by AI risk proponents, but I'm not sure I believe it. My best friend's values are slightly misaligned relative to my own, but if my best friend became superintelligent, that seems to me like it'd be a pretty good outcome.

Comment by hg00 on Seeking Advice About Career Paths for Non-USA Citizen · 2016-09-28T01:43:41.450Z · score: 6 (6 votes) · LW · GW

My understanding is that a USA programmer would start at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

A pessimistic starting salary for a competent US computer programmer is $60K, and senior ones can clear $200K. $100K is a typical starting salary for a computer science student who just graduated from a top university (and also roughly the median nationwide salary for software developers).

In the US market, foreigners come work as computer programmers by getting H1B visas. The stereotypical H1B visa programmer is from India, speaks mostly intelligible English with a heavy accent, gets hired by a company that wants to save money by replacing their expensive American programmers, and exists under the thumb of their employer (if they lose their job, their visa is jeopardized). I think that the average H1B makes less money than the average American coder. It sounds to me like you'd be a significantly more attractive hire than a typical H1B--you're fluent in English, and you've made contributions to Scheme?

The cost of living in the US is much higher than the Philippines. Raising a family in Silicon Valley is notoriously expensive. Especially if you want your kids to go to a "good school" where they won't be bullied. I don't know what metro has the best job availability/cost of living/school quality tradeoff. It will probably be one of the cities that's referred to as a "startup hub", perhaps Seattle or Austin. If your wife is willing to homeschool, you don't have to worry about school quality.

You can dip your toes in Option 1 without taking a big risk. Just start applying to US software companies. They'll interview you via Skype at first, and if you seem good, the best companies will be willing to pay for your flight to the US to meet the team. To save time you probably want to line up several US interviews for a single visit so you can cut down on the number of flights. Here are some characteristics to look for in companies to apply to:

  • The company has a process in place for hiring foreigners.

  • The company is looking for developers with your skill set.

  • The company's developer team is "clued in". Contributing to Scheme is going to be a big positive signal to the right employer. You can do things like read the company engineering blog, use BuiltWith, and look up the employees on LinkedIn to figure out whether the company seems clued in. Almost all companies funded by Y Combinator are clued in. If your interviewer's response to seeing Scheme on your resume is "What is Scheme?", then you're interviewing at the wrong company and you'll be offered a higher salary elsewhere.

  • The company is profitable but not sexy. For example, selling software to small enterprises. (You probably don't want to work for a business that sells software to large enterprises, as these firms are generally not "clued in". See above.) Getting a job at a sexy consumer product company like Google or Facebook is difficult because those are the companies that everyone is applying to. You can interview at those companies for fun, as the last places you look at. And you don't want to apply for a startup that's not yet profitable because then you're risking your wife and kids on an unproven business. I'm not going to tell you how to find these companies--if you use the same methods everyone else uses to find companies to apply to, you'll be applying to the same places everyone else is.

Of course you'll be sending out lots of resumes because you don't have connections. Maybe experiment with writing an email cover letter very much like the post you wrote here, including the word "fucking". I've participated in hiring software developers before, and my experience is that attempts at formal cover letters inevitably come across as stuffy and inauthentic. Catch the interviewer's interest with an interesting email subject line+first few sentences and tell a good story.

Actually you might have some connections--consider reaching out to companies that are affiliated with the rationalist community, posting to the Scheme mailing list if that's considered an acceptable thing to do, etc.

Consider donating some $ to MIRI if my advice ends up proving useful.

Comment by hg00 on Article on IQ: The Inappropriately Excluded · 2016-09-21T04:23:09.385Z · score: 3 (3 votes) · LW · GW

Is anyone from LW part of a high IQ society that's more exclusive than Mensa? Can you tell us what it's like?

Comment by hg00 on Open thread, Sep. 19 - Sep. 25, 2016 · 2016-09-21T03:50:10.071Z · score: 0 (0 votes) · LW · GW

Glad I could help :D

Comment by hg00 on Open thread, Sep. 19 - Sep. 25, 2016 · 2016-09-21T03:47:58.642Z · score: 0 (0 votes) · LW · GW

How many doctors do you think get sued for giving patients Adderall?

I'm assuming you think the answer is "not many". If so, this shows it's not a very risky drug--it rarely causes side effects that are nasty enough for a patient to want to sue their doctor.

From what I've read about pharmaceutical lobbying, it consists primarily of things like buying doctors free meals in exchange for prescribing the company's drug instead of a competitor's. I doubt many doctors are willing to run a serious risk of losing their career over some free meals.

Comment by hg00 on Open thread, Sep. 19 - Sep. 25, 2016 · 2016-09-20T05:49:55.603Z · score: 3 (3 votes) · LW · GW

Drugs are prescribed based on a cost-benefit analysis. In general, the medical establishment is pretty conservative (there's little benefit to the doctor if your problem gets solved, but if they hurt you they're liable to get sued). In the usual case for amphetamines, the cost is the risk of side effects and the benefit is helping someone manage their ADHD. For you, the cost is the same but it sounds like the benefit is much bigger. So even by the standards of the risk-averse medical establishment, this sounds like a risk you should take.

You're an entrepreneur. A successful entrepreneur thinks and acts for themselves. This could be a good opportunity to practice being less scrupulous. Paul Graham on what makes founders successful:

Naughtiness

Though the most successful founders are usually good people, they tend to have a piratical gleam in their eye. They're not Goody Two-Shoes type good. Morally, they care about getting the big questions right, but not about observing proprieties. That's why I'd use the word naughty rather than evil. They delight in breaking rules, but not rules that matter. This quality may be redundant though; it may be implied by imagination.

Sam Altman of Loopt is one of the most successful alumni, so we asked him what question we could put on the Y Combinator application that would help us discover more people like him. He said to ask about a time when they'd hacked something to their advantage—hacked in the sense of beating the system, not breaking into computers. It has become one of the questions we pay most attention to when judging applications.

I'd recommend avoiding Adderall as a first option. I've heard stories of people whose focus got worse over time as tolerance to the drug's effects developed.

Modafinil, on the other hand, is a focus wonder drug. It's widely used in the nootropics community and bad experiences are quite rare. (/r/nootropics admin: "I just want to remind everyone that this is a subreddit for discussing all nootropics, not just beating modafinil to death.")

The legal risks involved with Modafinil seem pretty low. Check out Gwern's discussion.

My conclusion is that buying some Modafinil and trying it once could be really valuable, if only for comfort zone expansion and value of information. I have very little doubt that this is the right choice for you. Check out Gwern's discussion of suppliers. (Lying to your doctor is another option if you really want to practice being naughty.)

Comment by hg00 on Open Thread, Aug 29. - Sept 5. 2016 · 2016-09-02T11:12:13.420Z · score: 0 (0 votes) · LW · GW

Thanks, I linked this comment from the piracy thread.