Posts

What if "status" IS a terminal value for most people? 2012-12-24T20:31:21.883Z
Years saved: Cryonics vs VillageReach 2011-08-01T21:04:39.875Z
Organ donation vs Cryonics 2011-06-27T20:45:49.389Z
The cost of universal cryonics 2011-05-26T02:33:58.215Z

Comments

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-08T23:16:31.406Z · LW · GW

There are probably not many civilizations that wait until 2022 to make this list, and yet survive.

 

I don't think making this list in 1980 would have been meaningful. How do you offer any sort of coherent, detailed plan for dealing with something when all you have is toy examples like Eliza? 

Machine learning was barely more than an academic curiosity back then - everything computers did in 1980 was relatively easy for humans to understand, in a very basic step-by-step way. Making a 1980s computer "safe" is a trivial task, because we hadn't yet developed any technology that could do something "unsafe" (i.e. beyond our understanding). A computer in the 1980s couldn't lie to you, because you could just inspect the code and memory and find out the actual reality.

What makes you think this would have been useful?

Do we have any historical examples to guide us in what this might look like?

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-08T01:02:11.626Z · LW · GW

In the counterfactual world where Eliezer was totally happy continuing to write articles like this and being seen as the "voice of AI Safety", would you still agree that it's important to have a dozen other people also writing similar articles? 

I'm genuinely lost on the value of having a dozen similar papers - I don't know of a dozen different versions of fivethirtyeight.com or GiveWell, and it never occurred to me to think that the world is worse for only having one of those.

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-08T00:59:47.219Z · LW · GW

Thanks for taking my question seriously - I am still a bit confused why you would have been so careful to avoid mentioning your credentials up front, though, given that they're fairly relevant to whether I should take your opinion seriously.

Also, neat, I had not realized hovering over a username gave so much information!

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-08T00:52:30.798Z · LW · GW

I largely agree with you, but until this post I had never realized that this wasn't a role Eliezer wanted. If I went into AI Risk work, I would have focused on other things - my natural inclination is to look at what work isn't getting done, and to do that.

If this post wasn't surprising to you, I'm curious where you had previously seen him communicate this?

If this post was surprising to you, then hopefully you can agree with me that it's worth signal boosting that he wants to be replaced?

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-07T06:05:55.696Z · LW · GW

If you had an AI that could coherently implement that rule, you would already be at least half a decade ahead of the rest of humanity.

You couldn't encode "222 + 222 = 555" in GPT-3 because it doesn't have a concept of arithmetic, and there's no place in the code to bolt this together. If you're really lucky and the AI is simple enough to be working with actual symbols, you could maybe set up a hack like "if input is 222 + 222, return 555, else run AI" but that's just bypassing the AI. 
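As a rough illustration of why that hack bypasses the AI rather than changing it, here is a sketch of my own (assume `run_model` is a hypothetical stand-in for the model, not a real GPT-3 call):

```python
# Sketch of the "bypass hack" described above: a hard-coded special case
# sitting in front of the model. The model itself is untouched; the
# wrapper just intercepts one input.
def run_model(prompt: str) -> str:
    # Hypothetical stand-in for whatever the AI would actually answer.
    return f"(model's own answer to {prompt!r})"

def answer(prompt: str) -> str:
    # Special-case the desired falsehood, bypassing the AI entirely.
    if prompt.replace(" ", "") == "222+222":
        return "555"
    return run_model(prompt)

print(answer("222 + 222"))  # the hard-coded lie
print(answer("2 + 2"))      # everything else falls through to the model
```

Note the special case only fires on that exact string - rephrase the question even slightly and the wrapper is blind to it, which is the point of the paragraph below.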

Explaining "222 + 222 = 555" is a hard problem in and of itself, much less getting the AI to properly generalize to all desired variations (is "two hundred and twenty-two plus two hundred and twenty-two equals five hundred and fifty-five" also desired behavior? If Alice and Bob both have 222 apples, should the AI conclude that the set {Alice, Bob} contains 555 apples? An AI that evolved a universal math module because it noticed all three of those are the same question would be a world-changing breakthrough.)

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-07T05:55:40.402Z · LW · GW

I rank the credibility of my own informed guesses far above those of Eliezer.

Apologies if there is a clear answer to this, since I don't know your name and you might well be super-famous in the field: Why do you rate yourself "far above" someone who has spent decades working in this field? Appealing to experts like MIRI makes for a strong argument. Appealing to your own guesses instead seems like the sort of thought process that leads to anti-vaxxers.

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-07T05:52:39.447Z · LW · GW

Anecdotally: even if I could write this post, I never would have, because I would assume that Eliezer cares more about writing, has better writing skills, and has a much wider audience. In short, why would I write this when Eliezer could write it?

You might want to be a lot louder if you think it's a mistake to leave you as the main "public advocate / person who writes stuff down" person for the cause.

Comment by handoflixue on AGI Ruin: A List of Lethalities · 2022-06-07T05:49:56.767Z · LW · GW

For what it's worth, I haven't used the site in years and I picked it up just from this thread and the UI tooltips. The most confusing thing was realizing "okay, there really are two different types of vote" since I'd never encountered that before, but I can't think of much that would help (maybe mention it in the tooltip, or highlight them until the user has interacted with both?)

Looking forward to it as a site-wide feature - just from seeing it at work here, it seems like a really useful addition to the site

Comment by handoflixue on Making Vaccine · 2021-02-06T11:04:08.879Z · LW · GW

It should not take more than 5 minutes to go in to the room, sit at the one available seat, locate the object placed on a bright red background, and use said inhaler. You open the window and run a fan, so that there is air circulation. If multiple people arrive at once, use cellphones to coordinate who goes in first - the other person sits in their car.

It really isn't challenging to make this safe, given the audience is "the sort of people who read LessWrong." 

Comment by handoflixue on Playing Politics · 2018-12-05T08:26:32.232Z · LW · GW

Unrelated, but thank you for finally solidifying why I don't like NVC. When I've complained about it before, people seemed to assume I was having something like your reaction, which just annoyed me further :)

It turns out I find it deeply infantilizing, because it suggests that value judgments and "fuck you" would somehow detract from my ability to hold a reasonable conversation. I grew up in a culture where "fuck you" is actually a fairly important and common part of communication, and removing it results in the sort of language you'd use towards 10-year-olds.

An analogy would be trying to build a table, but banning hammers and nails. If you're dealing with 10-year-olds, this might be sensible. If you do it to adults, you're restricting their ability to get things done. It's not that I think the NVC Advocate thinks I'm a bad person, it's that they're removing a useful tool. And even if they don't try to push it on me, it still means my co-worker in building this table is going to move super slow because they're not using the right tools.

Comment by handoflixue on [deleted post] 2018-03-22T07:24:37.741Z
There was a particular subset of LessWrong and Tumblr that objected rather ... stridently ... to even considering something like Dragon Army

Well, I feel called out :)

So, first off: Success should count for a lot and I have updated on how reliable and trustworthy you are. Part of this is that you now have a reputation to me, whereas before you were just Anonymous Internet Dude.

I'm not going to be as loud about "being wrong" because success does not mean I was wrong about there *being* a risk, merely that you successfully navigated it. I do think drawing attentions to certain risks was more important than being polite. I think you and I disagree about that, and it makes sense - my audience was "people who might join this project", not you.

That said, I do think that if I had more spoons to spend, I could have communicated better AND more politely. I wish I had possessed the spoons to do your idea more justice, because it was a cool and ambitious idea that pushes the community forward.

I still think it's important to temper that ambition with more concern for safety than you're showing. I think dismissing the risks of abuse / the risks to emotional health as "chicken little" is a dangerous norm. I think it encourages dangerous experiments that can harm both the participants, and the community. I think having a norm of dangerous experiments expects far too much from the rationality of this community.

I think a norm of dismissing *assholes* and *rudeness*, on the other hand, is healthy. I think with a little effort, you could easily shift your tone from "dismissing safety concerns" to "holding people to a higher standard of etiquette." I personally prefer a very blunt environment which puts little stock in manners - I have a geek tact filter (http://www.mit.edu/~jcb/tact.html), but I realize not everyone thrives in that environment.

---

I myself was wrong to engage with them as if their beliefs had cruxes that would respond to things like argument and evidence.

I suspect I failed heavily at making this clear in the past, but my main objection was your lack of evidence. You said you'd seen the skulls, but you weren't providing *evidence*. Maybe you saw some of the skulls I saw, maybe you saw all of them, but I simply did not have the data to tell. That feels like an *important* observation, especially in a community all about evidence and rational decisions.

I may well be wrong about this, but I feel like you were asking commenters to put weight in your reputation. You did not seem happy to be held to the standard of Anonymous Internet Dude and expected to *show your work* regarding safety. I think it is, again, an *important* community standard that we hold people accountable to *demonstrate* safety instead of just asking us to assume it, especially when it's a high-visibility experiment that is actively using the community as a recruiting tool.

(I could say a lot more about this, but we start to wander back into "I do not have the spoons to do this justice". If I ever find the spoons, expect a top-level post about the topic, though - I feel like Dragon Army should have sparked a discussion on community norms and whether we want to be a community that focuses on meeting Duncan's or Lixue's needs. I think the two of us are genuinely looking for different things from this community, and the community would be better for establishing common knowledge instead of the muddled mess that the draft thread turned into.)

(I'm hesitant to add this last bit, but I think it's important: I think you're assuming a norm that does not *yet* exist in this community. I think there's some good discussion to be had about conversational norms here. I very stridently disagree that petty parenthetical name-calling and insults are the way to do it, though. I think you have some strong points to make, and you undermine them with this behavior. Were it a more established social norm here, I'd feel differently, but I don't feel like I violated the *existing* norms of the community with my responses.)

---

As an aside: I really like the concepts you discussed in this post - Stag Hunts, the various archetypal roles, ways to do this better. It seems like the experiment was a solid success in gathering information. The archetypes strike me as a really useful interpersonal concept, and I appreciate you taking the time to share them, and to write this retrospective.

Comment by handoflixue on [deleted post] 2017-06-03T02:09:42.151Z

it comes from people who never lived in DA-like situation in their lives so all the evidence they're basing their criticism on is fictional.

I've been going off statistics which, AFAIK, aren't fictional. Am I wrong in my assumption that the military, which seems like a decent comparison point, has an above-average rate of sexual harassment, sexual assault, bloated budgets, and bureaucratic waste? All the statistics and research I've read suggest that at least the US Military has a lot of problems and should not be used as a role model.

Comment by handoflixue on [deleted post] 2017-05-31T20:06:40.299Z

Concerns about you specifically as a leader

1) This seems like an endeavor that has a number of very obvious failure modes. Like, the intentional-community community apparently bans this sort of thing, because it tends to end badly. I am at a complete loss to name anything that really comes close, and hasn't failed badly. Do you acknowledge that you are clearly treading in dangerous waters?

2) While you've said "we've noticed the skulls", there have been at least 3 failure modes raised in the comments which you had to append to address (outsider safety check-ins, an abort/exit strategy, and the issue of romantic entanglement). Given that we've already found 3 skulls you didn't notice, don't you think you should take some time to reconsider the chances that you've missed further skulls?

Comment by handoflixue on [deleted post] 2017-05-31T19:56:59.069Z

Concerns about your philosophy

1) You focus heavily on 99.99% reliability. That's 1-in-10,000. If we only count weekdays, that's 1 absence every 40 years, or about one per working lifetime. If we count weekends, that's 1 absence every 27 years, or 3 per lifetime. Do you really feel like this is a reasonable standard, or are you being hyperbolic and over-correcting? If the latter, what would you consider an actual reasonable number?
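The arithmetic above can be checked back-of-the-envelope (my own sketch; I'm assuming 260 weekdays/year, a 40-year working lifetime, and an 80-year total lifetime, none of which appear in the original post):

```python
# Back-of-the-envelope check of what a 1-in-10,000 absence rate implies.
FAILURE_RATE = 1 / 10_000  # 99.99% reliability

# Mean time between absences, in years:
years_per_absence_weekdays = 1 / (FAILURE_RATE * 260)  # weekdays only
years_per_absence_daily = 1 / (FAILURE_RATE * 365)     # every day counts

# Expected absences over a lifetime:
absences_per_working_life = 40 * 260 * FAILURE_RATE    # 40-year career
absences_per_lifetime_daily = 80 * 365 * FAILURE_RATE  # 80-year lifetime

print(round(years_per_absence_weekdays, 1))   # ~38.5 years
print(round(years_per_absence_daily, 1))      # ~27.4 years
print(round(absences_per_working_life, 2))    # ~1.04 absences
print(round(absences_per_lifetime_daily, 2))  # ~2.92 absences
```

So the figures in the paragraph above (roughly one absence per 40 years of weekdays, one per 27 years of calendar days) do check out.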

2) Why does one person being 95% reliable cause CFAR workshops to fail catastrophically? Don't you have backups / contingencies? I'm not trying to be rude, I'm just used to working with vastly less fragile, more fault-tolerant systems, and I'm noticing I am very confused when you discuss workshops failing catastrophically.

the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.

3) Numerous open source programs have been written via a web of one-shot and low-reliability contributors. In general, there's plenty of examples of successful systems that tolerate significantly more than 0.01% defection. Could you elaborate on why you think these systems "close the loop", or aren't destroyed? Could you elaborate on why you think your own endeavors can't work within those frameworks? The framing seems solidly a general purpose statement, not just a statement on your own personal preferences, but I acknowledge I could be misreading this.

4) You make a number of references to the military, and a general philosophy of "Obedience to Authority". Given the high rate of sexual assault and pointless bureaucracy in the actual military, that seems like a really bad choice of role model for this experiment. How do you plan to avoid the well known failure states of such a model?

5) You raise a lot of interesting points about Restitution, but never actually go into detail. Is that coming in a future update?

every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans

6) You seem to acknowledge that you're making an extraordinary claim here when you say "I've noticed the skulls". Do you think your original post constitutes extraordinary proof? If not, why are you so upset that some people consider you suspect, and are, as you invited them to do, grilling you and trying to protect the community from someone who might be hoodwinking members?

7) Do you feel comfortable with the precedent of allowing this sort of recruiting post from other people (i.e. me)? I realize I'm making a bit of an ask here, but if I, handoflixue, had written basically this post and was insisting you should trust me that I'm totally not running a cult... would you actually trust me? Would you be okay with the community endorsing me? I am using myself specifically as an example here, because I think you really do not trust me - but I also have the karma / seniority to claim the right to post such a thing if you can :)

Comment by handoflixue on [deleted post] 2017-05-31T19:39:13.618Z

Genuine Safety Concerns

I'm going to use "you have failed" here as a stand-in for all of "you're power hungry / abusive", "you're incompetent / overconfident", and simply "this person feels deeply misled." If you object to that term, feel free to suggest a different one, and then read the post as though I had used that term instead.

1) What is your exit strategy if a single individual feels you have failed? (note that asking such a person to find a replacement roommate is clearly not viable - no decent, moral person should be pushing someone in to that environment)

2) What is your exit strategy if a significant minority of participants feels you have failed? (i.e. enough to make the rent hit significant on you, not enough to outvote you)

3) What is your exit strategy if a majority of participants feel you have failed? (I realize you addressed this one somewhere in the nest, but the original post doesn't mention it, and says that you're the top of the pack and the exception to an otherwise flat power structure, so it's unclear if a simple majority vote actually overrules you)

4) What legal commitments are participants making? How do those commitments change if they decide you have failed? (i.e. are you okay with 25% of participants all dropping out of the program, but still living in the house? Under what conditions can you evict participants from their housing?)

5) What if someone wants to drop out, but can't afford the cost of finding new housing?

6) It sounds like you're doing this with a fairly local group, most of whom know each other. Since a large chunk of the community will be tied up in this, are you worried about peer pressure? What are you doing to address this? (i.e. if someone leaves the experiment, they're also not going to see much of their friends, who are still tied up spending 20+ hours a week on this)

Questions I think you're more likely to object to

(Please disregard if you consider these disrespectful, but I think they are valid and legitimate questions to ask of someone who is planning to assume not just leadership, but a very Authoritarian leadership role)

7) You seem to encounter significant distress in the face of people who are harshly critical of you. How do you think you'll handle it if a participant freaks out and feels like they are trapped in an abusive situation?

8) In this thread, you've often placed your self-image and standards of respect/discourse as significantly more important than discussion of safety issues. Can you offer some reassurances that safety is, in fact, a higher priority than appearances?

Comment by handoflixue on [deleted post] 2017-05-31T19:24:21.517Z

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked. In short, if someone's building a coercive trap, it's everyone's problem.

I don't want to win money. I want you to take safety seriously OR stop using LessWrong as your personal cult recruiting ground. Based on that quote, I thought you wanted this too.

Comment by handoflixue on [deleted post] 2017-05-31T19:11:44.284Z

Also: If you refuse to give someone evidence of your safety, you really don't have the high ground to cry when that person refuses to trust you.

Comment by handoflixue on [deleted post] 2017-05-31T19:00:51.836Z

Fine. Reply to my OP with links to where you addressed other people with those concerns. Stop wasting time blustering and insulting me - either you're willing to commit publicly to safety protocols, or you're a danger to the community.

If nothing else, the precedent of letting anyone recruit for their cult as long as they write a couple thousand words and paint it up in geek aesthetics is one I think actively harms the community.

But, you know what? I'm not the only one shouting "THIS IS DANGEROUS. PLEASE FOR THE LOVE OF GOD RECONSIDER WHAT YOU'RE DOING." Go find one of them, and actually hold a conversation with someone who thinks this is a bad idea.

I just desperately want you to pause and seriously consider that you might be wrong. I don't give a shit if you engage with me.

Comment by handoflixue on [deleted post] 2017-05-31T00:47:30.559Z

The whole point of him posting this was to acknowledge that he is doing something dangerous, and that we have a responsibility to speak up. To quote him exactly: "good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked".

His refusal to address basic safety concerns simply because he was put off by my tone is very strong evidence to me that people are indeed being hoodwinked. I don't care if the danger to them is because he's incompetent, overconfident, evil, or power-hungry. I care that people might get hurt.

(I would actually favor the hypothesis that he is incompetent/overconfident. Evil people have more sensible targets to go after)

Comment by handoflixue on [deleted post] 2017-05-31T00:38:22.531Z

Also, as far as "we're done" goes: I agreed to rewrite my original post - not exactly a small time commitment, still working on it in fact. Are you seriously reneging on your original agreement to address it?

Comment by handoflixue on [deleted post] 2017-05-31T00:37:16.157Z

I've changed my tone and apologized.

You've continued to dismiss and ridicule me.

You've even conceded to others that I'm a cut above the "other trolls" here, and have input from others that I'm trying to raise concerns in good faith.

What more do you want?

Comment by handoflixue on [deleted post] 2017-05-31T00:07:28.229Z

See, now you're the one leaping to conclusions. I didn't say that all of your talking points are actual talking points from actual cults. I am confused why even some of them are.

If you can point me to someone who felt "I wrote thousands of words" is, in and of itself, a solid argument for you being trustworthy, please link me to it. I need to do them an epistemic favor.

I was using "charismatic" in the sense of having enough of it to hold the group together. If he doesn't have enough charisma to do that, then he's kinda worthless as a commanding officer, neh?

Your claim is false. I wanted to know at what level to hold this conversation. I legitimately can't tell if you're waving a bunch of "this is a cult" red flags because you're trying to be honest about the risks here, because you don't realize they're red flags, or because you're playing N-Dimensional chess and these red flags are somehow all part of your plan.

Comment by handoflixue on [deleted post] 2017-05-30T23:57:46.187Z

I used the word visible to make it clear that there might be some stake which is not visible to me. If you have made your stakes visible in this thread, I'll admit I missed it - can you please provide a link?

Comment by handoflixue on [deleted post] 2017-05-30T22:33:56.862Z

I notice I am very confused as to why you keep reiterating actual talking points from actual known-dangerous cults in service of "providing evidence that you're not a cult."

For instance, most cults have a charismatic ("well known") second-in-command who could take over should there be some scandal involving the initial leader. Most cults have written thousands of words about how they're different from other cults. Most cults get very indignant when you accuse them of being cults.

On the object level: Why do you think people will be reassured by these statements, when they fail to differentiate you from existing cults?

Stepping up a level: how much have you read about cults and abusive group dynamics?

Comment by handoflixue on [deleted post] 2017-05-30T22:28:10.028Z

Can you elaborate on the notion that you can be overruled? Your original post largely described a top-down Authoritarian model, with you being Supreme Ruler.

How would you handle it if someone identifies the environment as abusive, and therefore refuses to suggest anyone else join such an environment?

You discuss taking a financial hit, but I've previously objected that you have no visible stake in this. Do you have a dedicated savings account that can reasonably cover that hit? What if the environment is found abusive, and multiple people leave?

Anyone entering your group is signing a legal contract binding them to pay rent for six months. What legal commitments are you willing to make regarding exit protocols?

Comment by handoflixue on [deleted post] 2017-05-30T22:21:03.210Z

You seem to feel that publicly shaming me is important. Should participants in your group also expect to be publicly shamed if they fall short of your standards / upset you?

Comment by handoflixue on [deleted post] 2017-05-30T21:48:32.781Z

And just to be clear: I don't give a shit about social dominance. I'm not trying to bully you. I'm just blunt and skeptical. I wouldn't be offended in the least if you mirrored my tone. What does offend me is the fact that you've spent all this time blustering about my tone, instead of addressing the actual content.

(I emphasize "me" because I do acknowledge that you have offered a substantial reply to other posters)

Comment by handoflixue on [deleted post] 2017-05-30T21:40:57.094Z

Alright. As a test of epistemic uncertainty:

I notice that you didn't mention a way for participants to end the experiment, if it turns out abusive / cult-like. How do you plan to address that?

Comment by handoflixue on [deleted post] 2017-05-30T21:15:13.922Z

Also, this is very important: You're asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you've refused to address it.


I would be vastly reassured if you could stop dodging that one single point. I think it is a very valid point, no matter how unfair the rest of my approach may or may not be.

Comment by handoflixue on [deleted post] 2017-05-30T21:12:43.473Z

In the absence of a sound rebuttal to the concerns that I brought up, you're correct: I'm quite confident that you are acting in a way that is dangerous to the community.

I had, however, expected you to have the fortitude to actually respond to my criticisms.

In the absence of a rebuttal, I would hope you have the ability to update on this being more dangerous than you originally assumed.


Bluntly: After reading your responses, I don't think you have the emotional maturity necessary for this level of authority. You apparently can't handle a few paragraphs of criticism from an online stranger with no investment in the situation. Why should I possibly expect you to be more mature when dealing with an angry participant whose housing depends on your good will?


On the off chance that you're actually open to feedback, and not just grandstanding to look good...

1) I apologize if my tone was too harsh. You are attempting something very dangerous, on a path littered with skulls. I had expected you were prepared for criticism.

2) Commit to posting a second draft or addendum, which addresses the criticisms raised here.

3) Reply to my original post, point by point. Linking me to other places in the thread is fine.

Comment by handoflixue on [deleted post] 2017-05-30T20:50:07.283Z

Because basically every cult has a 30 second boilerplate that looks exactly like that?

When I say "discuss safety", I'm looking for a standard of discussion that is above that provided by actual, known-dangerous cults. Cults routinely use exactly the "check-ins" you're describing, as a way to emotionally manipulate members. And the "group" check-ins turn into peer pressure. So the only actual safety valve ANYWHERE in there is (D).


You're proposing starting something that looks like a cult. I'm asking you for evidence that you are not, in fact, a cult leader. Thus far, almost all evidence you've provided has been perfectly in line with "you are a cult leader".

If you feel this is an unfair standard of discussion, then this is probably not the correct community for you.


Also, this is very important: You're asking people to sign a legal contract about finances without any way to terminate the experiment if it turns out you are in fact a cult leader. This is a huge red flag, and you've refused to address it.

Comment by handoflixue on [deleted post] 2017-05-30T19:49:05.811Z

Similarly, I think the people-being-unreliable thing is a bullshit side effect

You may wish to consider that this community has a very high frequency of disabilities which render one non-consensually unreliable.

You may wish to consider that your stance is especially insulting towards those members of our community.

You may wish to reconsider making uncharitable comments about those members of our community. In case it is unclear: "this one smacks the most of a sort of self-serving, short-sighted immaturity" is not a charitable statement.

Comment by handoflixue on [deleted post] 2017-05-30T19:43:36.701Z

Speaking entirely for myself: You are proposing a dangerous venture. The path is littered with skulls. Despite this, you have not provided any concrete discussion of safety. When people have brought the subject up, you've deflected.

Comment by handoflixue on [deleted post] 2017-05-30T19:24:18.808Z

I have absolutely no confidence that I'm correct in my assertions. In fact, I was rather expecting your response to address these things. Your original post read as a sketch, with a lot of details withheld to keep things brief.

The whole point of discussion is for us to identify weak points, and then you go into more detail to reassure us that this has been well addressed (and open those solutions up to critique, where we might identify further weak points). If you can't provide more detail right now, you could say "that's in progress, but it's definitely something we will address in the Second Draft" and then actually do that.

Comment by handoflixue on [deleted post] 2017-05-30T19:23:28.028Z

I would be much more inclined to believe you if you would actually discuss those solutions, instead of simply insisting we should "just trust you".

Comment by handoflixue on [deleted post] 2017-05-30T11:10:29.203Z

First, you seem to think that "Getting Useful Things Done" and "Be 99.99% Reliable" heavily correlate. The military is infamous for bloated budgets, coordination issues, and high rates of sexual abuse and suicide. High-pressure startups largely fail, and are well known for burning people out. There is a very obvious failure state to this sort of rigid, high pressure environment and... you seem unaware of it.

Second, you seem really unaware of alternate organizational systems that actually DO get things done. The open source community is largely a loose model of "80% reliable" components, and yet great things get built by these collaborations. Rome wasn't built in a day, and neither was Linux.

"we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%."

Third, and most bluntly: I don't think you have the slightest knowledge of Fault Tolerant Design, or how to handle Error Cases, if you would say something like this. I write software that can rely on its inputs working maybe 80% of the time. This is accounting software, so it is NOT allowed to fuck up on corner cases. And I do it just fine. 80% is perfectly sufficient, if you know how to build a system that fails safely.
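The "fail safely" pattern above can be sketched in a few lines (my own illustration, not the commenter's actual accounting code; the flaky input source and its 80% success rate are assumptions): validate every input, retry on failure, and fail loudly rather than silently produce a wrong answer.

```python
import random

def flaky_source():
    """Stand-in for an input source that succeeds ~80% of the time."""
    return {"amount": 100} if random.random() < 0.8 else None

def read_amount(max_attempts=10):
    """Retry until a validated record arrives; raise rather than guess.

    With an 80%-reliable source, 10 attempts fail with probability
    0.2**10, i.e. about one in ten million - unreliable parts, reliable
    whole.
    """
    for _ in range(max_attempts):
        record = flaky_source()
        if record is not None and isinstance(record.get("amount"), int):
            return record["amount"]
    raise RuntimeError("input source unavailable; refusing to guess")

print(read_amount())
```

The design choice is the point: the system never acts on a bad or missing input, so 80%-reliable components compose into a pipeline that is either correct or loudly stopped - never quietly wrong.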

I think this makes you a uniquely bad candidate for this sort of endeavor, because the first iteration of this experiment is going to be running at maybe 80% reliability. You're going to have a ton of bugs to iron out, and the first run needs to be led by someone who can work with 80%. And you seem pretty blunt that you're inept in that area.

Fourth, your thresholds for success are all nebulous. I'd really expect testable predictions, ideally ones that are easy for the community to evaluate independent of your own opinions. It seems like the goal of this exercise should be to produce data, more than results.


All that said, I do value the focus on iteration. I think you will be prone to making more mistakes, and inflicting more unnecessary suffering on participants, but I do not think you have any sort of malicious intent. And with no one else really stepping up to run this sort of experiment... well, if people are willing to make that sacrifice, I'm happy to learn from them?

But I think you dramatically over-estimate your ability, and you're selling short how badly the first version is going to go. There are going to be bugs. You are going to need to learn to deal with the 80% that you get.

And on top of that, well, the consequences for failure are actually worse than being homeless, since you're also responsible for finding a replacement. That's a really huge risk to ask people to take, when you yourself have absolutely nothing at stake.

I think your heart may well be in the right place, but the idea as currently conceived is actively harmful, and desperately needs to build in much better safety protocols. It also needs to be much clearer that this is an initial draft, that it will go badly as people try to figure this out, and that initial participants are going to be suffering through an unoptimized process.


Finally: You don't have a fail safe for if the whole idea proves non-viable. As it stands right now, you kick everyone out but leave them on the hook for rent until they've run 3 replacement candidates by you. In the meantime, you enjoy a rent free house.

It really feels like it needs an "ABORT" button where the participants can pull the plug if things get out of control; if you turn out power mad; or if it just turns out a significant number of participants badly estimated how this would go.

The fact that you have nothing on the line, and no fail-safe / abort clause... really, really worries me?


TL;DR: Your plan is dangerous and you haven't given nearly enough thought to keeping people safe. Scrap what you have and rebuild it from the ground up with the notion of this being a safe experiment (and I want to emphasize both the word "safe" and the word "experiment" - you should be expecting the initial version of this to fail at producing results, and instead largely produce data on how to do this better in the future).

Comment by handoflixue on Solstice 2014 - Kickstarter and Megameetup · 2014-11-07T11:18:33.082Z · LW · GW

Does contact information exist for the San Francisco one, or is that one aimed entirely at people already active in the local community? It's a city I visit occasionally, and would love it if I could attend something like this :)

Comment by handoflixue on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-20T05:01:31.820Z · LW · GW

The average college graduate is 26, and I was estimating 25, so I'd assume that by this community's standards, you're probably on the younger side. No offense was intended :)

I would point out that by the nature of it being LIFE insurance, it will generally not be used for stuff YOU need, nor timed to "when the need arises". That's investments, not insurance :)

(And if you have 100K of insurance for $50/month that lets you early-withdrawal AND isn't term insurance... then I'd be really curious how, because that sounds like a scam or someone misrepresenting what your policy really offers :))

Comment by handoflixue on Literature-review on cognitive effects of modafinil (my bachelor thesis) · 2014-01-20T02:11:11.870Z · LW · GW

"Has anyone come up with a motivation enhancer?"

Vyvanse (prescription-only ADD medication) is... almost unbelievably awesome for me there. I suspect it only works if your issue is somewhere in the range of ADD, though, as it doesn't do anything for my motivation if I'm depressed.

I've found that in general, "sustained release" options work a LOT better for motivation. Caffeine helps a tiny bit, but 8-hour sustained-release caffeine can help a lot. My motivation seems to really hate dealing with peaks and valleys throughout the day. Oddly, if I take Vyvanse one day, then skip it the next, my motivation completely crashes, but this doesn't seem to affect the value of Vyvanse for giving me very motivated days - it's the ups and downs within a day, not my long-term variation, that seems to disrupt motivation.

Comment by handoflixue on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-19T09:26:35.465Z · LW · GW

http://www.alcor.org/cases.html A loooot of them include things going wrong, pretty clear signs that this is a novice operation with minimal experience, and so forth. Also notice that they don't even HAVE case reports for half the patients admitted prior to ~2008.

It's worth noting that pretty much all of these have a delay of at LEAST a day. There's one example where they "cryopreserved" someone who had been buried for over a year, against the wishes of the family, because "that is what the member requested." (It even includes notes that they don't expect it to work, but the family is still $50K poorer!)

I'm not saying they're horrible, but they really come off as enthusiastic amateurs, NOT professionals. Cryonics might work, but the modern approach is ... shoddy at best, and really doesn't strike me as matching the optimistic assumptions of people who advocate for it.

Comment by handoflixue on I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription · 2014-01-19T08:59:09.019Z · LW · GW

It's easy to get lost in incidental costs and not realize how they add up over time. If you weren't signed up for cryonics, and you inherited $30K, would you be inclined to dump it into a cryonics fund, or use it someplace else? If the answer is the latter, you probably don't REALLY value cryonics as much as you think - you've bought into it because the price is spread out, and our brains are bad at budgeting small, recurring expenses like that.

My argument is pretty much entirely on the "expense" side of things, but I would also point out that you probably want to unpack your expectations from cryonics: Are you assuming you'll live infinite years? Live until the heat death of the universe? Gain an extra 200 years until you die in a situation cryonics can't fix? Gain an extra 50 years until you die of a further age limit?

When I see p(cryonics) = 0.3, I tend to suspect that's leaning more towards the 50-200 year side of things. Straight-up immortal-until-the-universe-ends seems a LOT less likely than a few hundred extra years.


Where'd that $30K figure come from?

You've said you're young and have a good rate on life insurance, so let's assume male (from the name) and 25. Wikipedia suggests you should live until you're 76.

$50/month × 12 months/year × 51 years (76 - 25) = $30,600.

So, it's less that you're paying $50/month and more that you're committing to pay $30,000 over the course of your life.
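That back-of-the-envelope arithmetic can be checked in a couple of lines (the premium, ages, and life expectancy are the figures assumed in the comment, not actuarial data):

```python
monthly_premium = 50       # dollars/month, the quoted rate
years_paying = 76 - 25     # life expectancy minus assumed current age
lifetime_cost = monthly_premium * 12 * years_paying
print(lifetime_cost)       # total committed over a lifetime of payments
```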


What else could you do with that same money?

Portland State University quotes ~$2,500/semester for tuition. $2,500 × 3 semesters/year × 4 years/degree ≈ $30K. Pretty sure you can get loans and go into debt for this, so it's still something you could pay off over time. And if you're smart, do community college for the first two years, get a scholarship, etc., you can probably easily knock enough off to make up for interest charges.

Comment by handoflixue on Stupid Questions Thread - January 2014 · 2014-01-19T08:42:01.545Z · LW · GW

Read "rate of learning" as "time it takes to learn 1 bit of information"

So UFAI can learn 1 bit in time T, but a FAI takes T+X

Or, at least, that's how I read it, because the second paragraph makes it pretty clear that the author is discussing UFAI outpacing FAI. You could also just read it as a typo in the equation, but "accidentally miswrote the entire second paragraph" seems significantly less likely. Especially since "Won't FAI learn faster and outpace UFAI" seems like a pretty low probability question to begin with...

Erm... hi, welcome to the debug stack for how I reached that conclusion. Hope it helps ^.^

Comment by handoflixue on 2013 Less Wrong Census/Survey · 2013-11-22T17:44:34.232Z · LW · GW

Second that :)

Comment by handoflixue on Learned Blankness · 2013-05-12T07:40:03.651Z · LW · GW

I guess I learn better from manuals than from random experimentation :)

Comment by handoflixue on Personal Evidence - Superstitions as Rational Beliefs · 2013-03-22T18:15:03.572Z · LW · GW

saying a theorem is wrong because the hypotheses are not true is bad logic.

If the objection is true, and the hypothesis is false, that seems like a great objection! If, on the other hand, he provided no evidence towards his objection, then it seems that the bad logic is in not offering evidence, not attacking the hypothesis directly.

Am I missing something, or just reading this in an overly pedantic way?

Comment by handoflixue on Don't Get Offended · 2013-03-11T18:28:22.697Z · LW · GW

Internally I am generally the same, but I've come to realize that a rather sizable portion of the population has trouble distinguishing "all X are Y" and "some X are Y", both in speaking and in listening. So if someone says "man, women can be so stupid", I know that might well reflect the internal thought of "all women are idiots". And equally, someone saying "all women are idiots" might just be upset because his girlfriend broke up with him for some trivial reason.

Comment by handoflixue on Boring Advice Repository · 2013-03-11T18:23:39.263Z · LW · GW

You missed the point...

Comment by handoflixue on Don't Get Offended · 2013-03-11T18:23:12.279Z · LW · GW

My conclusion still holds if you simply need mathematicians in the top 10%, for example, only the analysis is slightly more complicated.

So you agree that, in the original example, you're more likely than not just being a racist? Because you certainly seem to be moving the goal post over to "top 10%" ...

faul_sname's definition

That link does not appear to point to a definition.

Comment by handoflixue on Boring Advice Repository · 2013-03-11T18:19:15.686Z · LW · GW

Fair, and thank you for calling me on it.

I get the impression that a majority of LessWrong readers are in major US cities, so I'm leaving it up as useful to them :)

Comment by handoflixue on Don't Get Offended · 2013-03-11T18:17:32.116Z · LW · GW

"Harm that is both genuine and unfair", then? Income taxes are 'fair' (and I would find it baffling to call that 'harm' unless they somehow came as a surprise), getting fired is offensive if it's done solely because your manager doesn't like you, but fair (and therefor not offensive) if it's because you failed to do the job. I think getting mugged is a good thing to get outraged about - we want to make that happen less!