Comments

Comment by thoughtspeed on Stupid Questions September 2017 · 2017-09-20T06:31:14.892Z · score: 0 (0 votes) · LW · GW

I find it funny people think questions about the Chinese Room argument or induction are obvious, tangential, or silly. xD

Anyway: What is the best algorithm for deciding between careers?

(Rule 1: Please don't say the words "consult 80,000 Hours" or "Use the 80K decision tool!" That is analogous to telling an atypically depressed person to "read a book on exercise and then go out and do things!" Like, that's really not a helpful thing, since those people are completely booked (they didn't respond to my application despite my fitting their checkboxes). Also I've been to two of their intro workshops.)

I want to know what object-level tools, procedures, heuristics etc. people here recommend for deciding between careers. Especially if one feels conflicted between different choices. Thanks! :)

Comment by thoughtspeed on What Are The Chances of Actually Achieving FAI? · 2017-07-31T23:14:57.341Z · score: 3 (3 votes) · LW · GW

Exactly ZERO.

...

Zero is not a probability! You cannot be infinitely certain of anything!
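One way to see the point: Bayesian evidence adds on the log-odds scale, and probabilities of exactly 0 or 1 correspond to infinite log-odds, so no finite amount of evidence can ever reach them. A minimal Python sketch (my own illustration, not part of the original thread):

```python
import math

def log_odds(p):
    """Convert a probability to log-odds; Bayesian updates add evidence here."""
    return math.log(p) - math.log(1 - p)

# Finite probabilities give finite log-odds...
print(log_odds(0.5))       # 0.0
print(log_odds(0.999999))  # ~13.8

# ...but certainty would need infinitely strong evidence:
# log_odds(1.0) raises ValueError (log of zero), i.e. infinite odds.
```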

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it, implying a combination of 1) a know-it-when-I-see-it isomorphism to common-usage 'friendliness' (e.g. "I'm not being tortured"), and 2) a deeper sense in which the universe is being optimized by our own criteria, via a more powerful optimization process. Here's a better explanation of Friendliness than I can convey. You could also substitute the more modern word 'Aligned' for it.

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

I would suggest reading about the following:

- Paperclip Maximizer
- Orthogonality Thesis
- The Mere Goodness Sequence

However, in order to understand it well you will want to read the other Sequences first. I really want to emphasize the importance of engaging with a decade-old corpus of material about this subject.

The point of these links is that there is no objective morality that any randomly designed agent will naturally discover. An intelligence can accrete around any terminal goal that you can think of.

This is a side issue, but your persistent use of the neologism "humanimal" is probably costing you weirdness points and detracting from the substance of the points you make. Everyone here knows humans are animals.

Most probably the problem will not be artificial intelligence, but natural stupidity.

Agreed.

Comment by thoughtspeed on Idea for LessWrong: Video Tutoring · 2017-07-01T23:52:21.725Z · score: 0 (0 votes) · LW · GW

I just started a Facebook group to coordinate effective altruist youtubers. I'd definitely say rationality also falls under the umbrella. PM me and I can add you. :)

Comment by thoughtspeed on Torture vs. Dust Specks · 2017-06-19T02:11:54.707Z · score: 1 (1 votes) · LW · GW

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade)
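(For anyone unfamiliar with the notation in the quote: 3^^^3 is Knuth's up-arrow notation for iterated exponentiation. A sketch of the recursion, purely for illustration; the actual value of 3^^^3 is far too large to compute:)

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: n=1 is exponentiation; each extra
    arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 2, 2))  # 3^^2 = 3**3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3**27 = 7625597484987
# 3^^^3 = 3^^(3^^3): a power tower of 3s roughly 7.6 trillion levels high.
```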

Comment by thoughtspeed on Stupid Questions May 2017 · 2017-04-26T00:47:55.639Z · score: 1 (1 votes) · LW · GW

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Yep, you were one of the parties I was thinking of. Nice work! :D

Comment by thoughtspeed on April '17 I Care About Thread · 2017-04-25T23:41:58.565Z · score: 3 (3 votes) · LW · GW

What I'm about to say is within the context of seeing you be one of the most frequent commenters on this site.

Otherwise it sounds like entitled whining.

That is really unfriendly to say; honestly the word I want to use is "nasty" but that is probably hyperbolic/hypocritical. I'm not sure if you realize this, but a culture of macho challenging like this discourages people from participating. I think you and several other commenters who determine the baseline culture of this site should try to be more friendly. I have seen you in particular use a smiley before, so that's good, and you're probably a friendly person along many dimensions. But I want to emphasize how intimidating this is to newcomers, or to people who are otherwise uncomfortable with what you probably interpret as joshing-around with LW-friends. To you it may feel like pursuing less-wrongness, but people who are more neurotic and/or less familiar with this forum can come away feeling hounded, even vicariously.

I do not want to pick on people I don't know but there are other frequent commenters who could use this message too.

Comment by thoughtspeed on Stupid Questions May 2017 · 2017-04-25T23:07:30.372Z · score: 3 (3 votes) · LW · GW
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions. Multiple times I've seen mentioned the idea of an app for Double Crux. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and they said "you don't have to do anything". This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it was at a lower pacing/information-density/quality. I think the argument is something like 'you must empty your cup so you can learn the material' but I'm not sure.

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps use that as a pipeline for the more in-depth workshop).

Comment by thoughtspeed on "Flinching away from truth” is often about *protecting* the epistemology · 2017-04-25T22:06:42.283Z · score: 0 (1 votes) · LW · GW

Did it get resolved? :)

Comment by thoughtspeed on What's up with Arbital? · 2017-04-25T21:34:40.882Z · score: 3 (1 votes) · LW · GW

I had asked someone how I could contribute, and they said there was a waitlist or whatever. Like others have mentioned, I would recommend prioritizing maximal user involvement. Try to iterate quickly and get as many eyeballs on it as you can so you can see what works and what breaks. You can't control people.

Comment by ThoughtSpeed on [deleted post] 2017-04-17T19:01:42.738Z

I do want to heap heavy praise on the OP for Just Going Out And Trying Something, but yes, consult with other projects to avoid duplication of effort. :)

Comment by thoughtspeed on Net Utility and Planetary Biocide · 2017-04-16T05:00:20.280Z · score: 0 (0 votes) · LW · GW

Honestly, it probably is. :) Not a bad sign as in you are a bad person, but bad sign as in this is an attractor space of Bad Thought Experiments that rationalist-identifying people seem to keep falling into because they're interesting.

Comment by thoughtspeed on Project Hufflepuff: Planting the Flag · 2017-04-15T20:59:57.529Z · score: 0 (0 votes) · LW · GW

I think "upskill" is another one of these.

Comment by thoughtspeed on The Social Substrate · 2017-04-04T00:08:28.820Z · score: 0 (0 votes) · LW · GW

What is NMC?

(For anyone who doesn't know: NVC stands for Nonviolent Communication. I would highly recommend it.)

Comment by thoughtspeed on Lifestyle interventions to increase longevity · 2017-03-28T08:12:44.557Z · score: 1 (1 votes) · LW · GW

Agreed that this is a problem! Thankfully there are a lot of integrations with Beeminder that automatically enter data. You can hook up hundreds of different applications to it through IFTTT or Zapier.

Comment by thoughtspeed on Rationalist horoscopes: A low-hanging utility generator. · 2017-03-27T12:53:35.688Z · score: 2 (2 votes) · LW · GW

What happened to this? It seems like the Tumblr is defunct?

Comment by thoughtspeed on I Want To Live In A Baugruppe · 2017-03-20T19:15:49.987Z · score: 0 (0 votes) · LW · GW

Intredasted!

Comment by thoughtspeed on Terminology is important · 2016-12-02T08:43:02.716Z · score: 1 (1 votes) · LW · GW

The Elephant and the Rider.

Comment by thoughtspeed on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-28T08:16:59.610Z · score: 0 (0 votes) · LW · GW

I think my go-to here would be Low of Solipsism from Death Note. As an aspiring villain being resurrected, I can't think of anything more dastardly.

Comment by thoughtspeed on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-28T08:08:45.213Z · score: 2 (2 votes) · LW · GW

Is that for real or are you kidding? Can you link to it?

Comment by thoughtspeed on Why CFAR? The view from 2015 · 2016-08-21T08:11:58.030Z · score: 0 (0 votes) · LW · GW

Did this ever get answered?

Comment by thoughtspeed on A rational unfalsifyable believe · 2016-07-30T18:18:12.567Z · score: 1 (1 votes) · LW · GW

I think the names you chose were quite distracting from the problem, at least for me. See paragraphs 4-6 in this article for why: http://lesswrong.com/lw/gw/politics_is_the_mindkiller/

Comment by thoughtspeed on Problems in Education · 2013-04-09T05:33:02.788Z · score: 4 (10 votes) · LW · GW

Not to lower signal-to-noise, but - I really liked this comment. It shows a fine mind made cynical, a delicate sarcasm born of being impinged upon by a horrific, Cthulhian reality.

"People are crazy, the world is mad."

Comment by thoughtspeed on New applied rationality workshops (April, May, and July) · 2013-04-09T05:18:54.629Z · score: 3 (3 votes) · LW · GW

At what point do you guys estimate CFAR will scale such that economically disadvantaged individuals like myself will be able to afford a retreat? In the next few years, will the focus be more on making money off increased demand from businesses and heavier-pocketbook individuals, or on lowering costs for hungry student types?

I would love nothing more than to go, if only it were cheaper.

Comment by thoughtspeed on Welcome to Less Wrong! (July 2012) · 2013-02-27T06:07:20.381Z · score: 8 (10 votes) · LW · GW

Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read of the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in future time of course.

Finally. It is written. So many choices... so many thoughts, ideas, plans to express... No! It is done! Another time you silly brain! We must choose futures! We will improve, brain, I promise.

I look forward to at last becoming an active member of this community, and LEVELING UP! Tsuyoku naritai!