Comment by cursed on [Link] Introducing OpenAI · 2015-12-11T22:37:41.289Z · score: 7 (7 votes) · LW · GW

With Sam Altman (CEO of Y Combinator) talking so much about AI safety and risk over the last 2-3 months, I was so sure that he was working out a deal to fund MIRI. I wonder why they decided to create their own non-profit instead.

Although on second thought, they're aiming for different goals: while MIRI is focused on safety once strong AI arrives, OpenAI is trying to actually speed up research toward strong AI.

Comment by cursed on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-30T22:04:06.598Z · score: 0 (0 votes) · LW · GW

Great analysis, thanks!

Comment by cursed on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-20T08:33:53.704Z · score: 0 (0 votes) · LW · GW

This isn't bad, though I feel like:

This I call "pretending to be Wise". Of course there are many ways to try and signal wisdom. But trying to signal wisdom by refusing to make guesses - refusing to sum up evidence - refusing to pass judgment - refusing to take sides - staying above the fray and looking down with a lofty and condescending gaze - which is to say, signaling wisdom by saying and doing nothing - well, that I find particularly pretentious.

would apply to the XKCD example, but not to the people claiming that the Lebanon attacks should've been publicized more than the Paris attacks. I hope I'm not treading too much into political territory here.

Comment by cursed on Open thread, Nov. 09 - Nov. 15, 2015 · 2015-11-15T08:26:44.810Z · score: 1 (3 votes) · LW · GW

Is there a good word for this sort of behavior? The closest word I can think of is "countersignaling", but it doesn't precisely describe it. I've noticed this sort of behavior a lot on Facebook recently, around the Paris terrorist attacks.

Comment by cursed on Open Thread, Jul. 6 - Jul. 12, 2015 · 2015-07-07T05:31:59.003Z · score: 2 (2 votes) · LW · GW

Whenever the conjunction fallacy is brought up, it always irks me, because it doesn't seem like a real fallacy. In the example given in Rationality: From AI to Zombies, "[...] found that experimental subjects considered it less likely that a strong tennis player would lose the first set than that he would lose the first set but win the match."

There are two ways to interpret this statement:

1) The fallacious interpretation: P(Lose First Set) < P(Lose First Set and Win Match)

2) The charitable interpretation: P(Lose First Set) < P(Win Match | Lose First Set), which is valid and not necessarily fallacious reasoning, given the context that the tennis player is considered strong. Another possible phrasing of "he would lose the first set but win the match" is "given that he lost the first set, what's the chance of him winning the match?"
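To make the two readings concrete, here's a minimal sketch with made-up numbers (the 0.1 and 0.6 below are my own illustrative assumptions, not figures from the study):

```python
# Hypothetical probabilities for a strong tennis player.
p_lose_first_set = 0.1        # a strong player rarely drops the first set
p_win_given_lost_first = 0.6  # but often still wins the match if he does

# Interpretation 1: the conjunction "lose first set AND win match".
# By the product rule this can never exceed P(lose first set), so
# judging the conjunction MORE likely is the conjunction fallacy.
p_lose_and_win = p_lose_first_set * p_win_given_lost_first

assert p_lose_and_win <= p_lose_first_set  # holds for any inputs

# Interpretation 2: comparing P(lose first set) against the conditional
# P(win match | lose first set) is coherent -- here 0.1 < 0.6, no fallacy.
print(p_lose_and_win, p_win_given_lost_first)
```

Under interpretation 1 the subjects' ranking is impossible; under interpretation 2 it's a perfectly sensible judgment about a strong player.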

Has this been addressed before?

Comment by cursed on What are "the really good ideas" that Peter Thiel says are too dangerous to mention? · 2015-04-12T21:21:25.189Z · score: 4 (4 votes) · LW · GW

I haven't really looked into it, but there was an odd message that he left in his IAMA in regards to Girardian philosophy. Would love for anyone who knows more to jump in.

Comment by cursed on Open thread, Jan. 26 - Feb. 1, 2015 · 2015-01-29T23:33:56.438Z · score: 1 (1 votes) · LW · GW

I like Bill's EA tendencies.

Comment by cursed on CFAR fundraiser far from filled; 4 days remaining · 2015-01-28T03:57:37.195Z · score: 2 (2 votes) · LW · GW

I'm not sure if this response was directed at me, because I don't know what their reasoning is.

Comment by cursed on CFAR fundraiser far from filled; 4 days remaining · 2015-01-28T02:54:41.148Z · score: 3 (3 votes) · LW · GW

As noted in the quote below, they haven't even started yet. Also, just replicating a study they cite in their rationality training would be a good first step.

One of the future premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.) -- we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; and our 2014 workshop, for example, was done via crude "what do you feel you learnt?" surveys and our own gut impressions.

Comment by cursed on CFAR fundraiser far from filled; 4 days remaining · 2015-01-28T02:42:01.144Z · score: 8 (12 votes) · LW · GW

On CFAR's front page:

In the process, we’re breaking new ground in studying the long-term effects of rationality training on life outcomes using randomized controlled trials.

Despite existing for 2-3 years (and probably longer informally), CFAR has yet to publish a single paper on these "randomized controlled trials". I would advise not donating until they make good on their claims.

edit: I've also made some notes on CFAR and their use of science as an applause light in previous comments.

Comment by cursed on Open thread, Dec. 15 - Dec. 21, 2014 · 2014-12-16T11:23:03.880Z · score: 3 (3 votes) · LW · GW

Thinking about a quote from HPMOR (the podcast is quite good, if anyone was interested):

But human beings had four times the brain size of a chimpanzee. 20% of a human's metabolic energy went into feeding the brain. Humans were ridiculously smarter than any other species. That sort of thing didn't happen because the environment stepped up the difficulty of its problems a little. Then the organisms would just get a little smarter to solve them. Ending up with that gigantic outsized brain must have taken some sort of runaway evolutionary process, something that would push and push without limits.

And today's scientists had a pretty good guess at what that runaway evolutionary process had been.


It really made you appreciate what millions of years of hominids trying to outwit each other - an evolutionary arms race without limit - had led to in the way of increased mental capacity.

Besides the cited "Chimpanzee Politics", are there any other references for this hypothesis? I've tried Googling around for 5 minutes and I couldn't find anything.

Edit: it seems I was searching with the wrong keywords. Wikipedia has a short paragraph on the evolution of the human brain being driven by competitive social behavior, but I'd still like to see any other articles on the matter.

Comment by cursed on Podcast: Rationalists in Tech · 2014-12-16T11:11:49.737Z · score: 2 (2 votes) · LW · GW

Personally, I prefer more heavily produced podcasts, in the style of Serial, Freakonomics, etc., because very few people are good interviewees. I would like to hear more if you could improve the microphone quality - I couldn't make out some words, even on relistening. I'm sure the person behind the HPMOR podcast would offer more tips if you contacted him.

Comment by cursed on Musk on AGI Timeframes · 2014-11-18T02:25:06.946Z · score: 3 (3 votes) · LW · GW

Do you mind revealing what Shane's timelines are, and the probability he assigns to playing a role in AGI himself?

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-11-02T13:35:27.758Z · score: 0 (0 votes) · LW · GW

Hey Dan, thanks for responding. I wanted to ask a few questions:

You noted the non-response rate for the 20 randomly selected alumni. What about the non-response rate for the feedback survey?

"0 to 10, are you glad you came?" is a biased question, because it presupposes that the person is glad. An analogous negative framing would be "0 to 10, are you dissatisfied that you came?" Would it be possible to anonymize and post the survey questions and data?

We also sent out a survey earlier this year to 20 randomly selected alumni who had attended workshops in the previous 3-18 months, and asked them the same question. 18 of the 20 filled out the survey, and their average response to that question was 9.6.

It's great that you're following up with people long after the workshops end. Why not survey all alumni? You have their emails.

I've read most of the blog posts about CFAR workshops that you linked to - they were one of my main motivations for attending a workshop. I notice that all the reviews are from people who had already participated in LessWrong and related communities (all refer to CFAR, EA, and rationality topics from before they attended). Also, in-person conversations seem heavily subject to availability bias: the people who attended workshops, know people who work at MIRI/CFAR, or are involved in LW meetups in Berkeley and the surrounding area would skew those conversations positive. Evaporative cooling may also play a role, in that people who weren't satisfied with the workshop would leave the group. Are there reviews from people who are not already familiar with LW or CFAR staff?

Also, I agree with MTGandP. It would be nice if CFAR could write a blog post or paper on how effective their teachings are compared to a control group. Perhaps two one-day events, with subjects randomized across the two days, would work well as a starting point.

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-29T05:12:00.421Z · score: 7 (7 votes) · LW · GW

Do you think it was unhelpful because you already had a high level of knowledge on the topics they were teaching and thus didn't have much to learn or because the actual techniques were not effective?

I don't believe I had a high level of knowledge of the specific topics they were teaching (behavior change and the like). I did study some cognitive science in my undergraduate years, and I take issue with the 'science'.

Do you think your experience was typical?

I believe that the majority of people don't get much, if anything, from CFAR's rationality lessons. However, after the lesson people may be slightly more motivated to accomplish whatever they want in the short term, simply because they've paid money for a course to increase their motivation.

How useful do you think it would be to an average person?

There was one average person at one of the workshops I attended - i.e., someone who had never read LessWrong or other rationality material. He fell asleep a few hours into the lesson; I don't think he gained much from attending. I'm hesitant to extrapolate, because I'm not exactly sure what "an average person" entails.

An average rationalist?

I haven't met many rationalists, but I believe they wouldn't benefit much, if at all.

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T21:23:39.796Z · score: 2 (2 votes) · LW · GW

That's fantastic. How many cards total do you have, and how many minutes a day do you study?

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T21:15:02.557Z · score: 15 (15 votes) · LW · GW

I didn't learn anything useful. They taught, among other things, "here's what you should do to gain better habits". I tried it and it didn't work for me. YMMV.

One thing that really irked me was the use of cognitive 'science' to justify their lessons 'scientifically'. They did this by using big scientific words that felt like an attempt to impress us with their knowledge. (I'm not sure what the correct phrase is - the words weren't constraining beliefs? didn't pay rent? They could have made up scientific-sounding words and it would have had the same effect.)

Also, they had a giant 1-2 page listing of citations that they used to back up their lessons. I asked some extremely basic questions about papers and articles I've previously read on the list and they had absolutely no idea what I was talking about.

ETA: I might go to another class in a year or two to see if they've improved. Not convinced that they're worth donating money towards at this moment.

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T06:51:27.988Z · score: 4 (4 votes) · LW · GW

Those who are currently using Anki on a mostly daily or weekly basis: what are you studying/ankifying?

To start: I'm working on memorizing programming languages and frameworks because I have trouble remembering parameters and method names.
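For anyone curious what that looks like in practice, here's a minimal sketch of how I generate cards like these: Anki can import a plain tab-separated text file as front/back cards, so a few lines of Python turn a list of signatures into a deck. (The filename and the example signatures are my own choices, not anything Anki requires.)

```python
# Turn function signatures you keep forgetting into a tab-separated file
# that Anki can import (File -> Import) as front/back cards.
cards = {
    "Python: sorted() signature": "sorted(iterable, *, key=None, reverse=False)",
    "Python: dict.get() signature": "d.get(key, default=None)",
    "Python: str.split() signature": "s.split(sep=None, maxsplit=-1)",
}

with open("anki_cards.txt", "w", encoding="utf-8") as f:
    for front, back in cards.items():
        f.write(f"{front}\t{back}\n")  # one card per line: front<TAB>back
```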

Comment by cursed on Open thread, Oct. 27 - Nov. 2, 2014 · 2014-10-28T06:50:06.244Z · score: 9 (9 votes) · LW · GW

I've been to several of CFAR's classes throughout the last 2 years (some test classes and some more 'official' ones) and I feel like it wasn't a good use of my time. Spend your money elsewhere.

Comment by cursed on Open thread, September 22-28, 2014 · 2014-09-24T04:21:34.577Z · score: 3 (3 votes) · LW · GW

Is there a listing of Yvain/slatestarcodex's fiction? I just finished reading The Study of Anglophysics, and I want more.

Comment by cursed on [Sequence announcement] Introduction to Mechanism Design · 2014-05-06T07:06:37.864Z · score: 0 (0 votes) · LW · GW

I'm convinced! Checked out your first post, good stuff so far.

Comment by cursed on [Sequence announcement] Introduction to Mechanism Design · 2014-05-04T05:33:17.213Z · score: 1 (1 votes) · LW · GW

It'd be nice if you could go over why you think you'd be a good candidate to cover the subject.

Comment by cursed on Open Thread April 16 - April 22, 2014 · 2014-04-18T06:40:59.307Z · score: 2 (2 votes) · LW · GW

I'm interested in your other ed-tech startup ideas, if you don't mind sharing.

Comment by cursed on Open thread, 24-30 March 2014 · 2014-03-27T06:00:12.563Z · score: 8 (8 votes) · LW · GW

Cryonics ideas in practice:

"The technique involves replacing all of a patient's blood with a cold saline solution, which rapidly cools the body and stops almost all cellular activity. "If a patient comes to us two hours after dying you can't bring them back to life. But if they're dying and you suspend them, you have a chance to bring them back after their structural problems have been fixed," says surgeon Peter Rhee at the University of Arizona in Tucson, who helped develop the technique."

Comment by cursed on Open Thread for February 11 - 17 · 2014-02-11T21:33:29.893Z · score: 0 (0 votes) · LW · GW

Great, I'll look into the Topology book.

Comment by cursed on Open Thread for February 11 - 17 · 2014-02-11T21:00:14.749Z · score: 0 (0 votes) · LW · GW

I have a degree in computer science, looking to learn more about math to apply to a math graduate program and for fun.

Comment by cursed on Open Thread for February 11 - 17 · 2014-02-11T20:53:34.376Z · score: 0 (0 votes) · LW · GW

Thanks - I made an edit you might not have seen: I do have experience with calculus (differential, integral, multivariable) and discrete math (basic graph theory, basic proofs); I'm just filling in gaps since it's been a while since I've done 'math'. I imagine I'll get through the first two books quickly.

Can you recommend some algebra/analysis/topology books that would be a natural progression of the books I listed above?

Comment by cursed on Open Thread for February 11 - 17 · 2014-02-11T20:33:42.392Z · score: 4 (4 votes) · LW · GW

I'm interested in learning pure math, starting from precalculus. Can anyone give advice on which textbooks I should use? Here's my current list (a lot of these textbooks were taken from MIRI's and LW's best-textbooks lists):

  • Calculus for Science and Engineering
  • Calculus - Spivak
  • Linear Algebra and its Applications - Strang
  • Linear Algebra Done Right
  • Div, Grad, Curl and All That (Vector calc)
  • Fundamentals of Number Theory - LeVeque
  • Basic Set Theory
  • Discrete Mathematics and its Applications
  • Introduction to Mathematical Logic
  • Abstract Algebra - Dummit

I'm well versed in simple calculus; I'm going back to precalc to fill any gaps I may have in my knowledge. I feel like I have some major gaps in knowledge jumping from the undergrad to the graduate level. Do any math PhDs have any advice?


Comment by cursed on Rational Resolutions: Special CFAR Mini-workshop SATURDAY · 2014-01-03T05:40:20.257Z · score: 1 (1 votes) · LW · GW

"from 11PM to 5PM PST on Saturday, Jan. 4th."

Guessing you meant 11AM. Edit: the Eventbrite link says 11AM to 7PM. Which is it?

I wasn't convinced by testimonials from CFAR camps (and as a student, the price deterred me), but with a money-back guarantee it seems like the benefit of spending 6 hours at CFAR outweighs the opportunity cost. Tempted to go.

Comment by cursed on How to Have Space Correctly · 2013-06-26T03:36:45.453Z · score: 1 (1 votes) · LW · GW

In one section, you spelled Kirsh's name as Kirsch. Also, it was unexpected to see my professor show up in a LessWrong post.

Comment by cursed on Soylent Orange - Whole food open source soylent · 2013-03-31T06:35:49.987Z · score: 0 (0 votes) · LW · GW

Awesome - I'm creating my own recipe based on yours.

Do you mix all of your ingredients together, including the chicken and the supplements?

Comment by cursed on 2012: Year in Review · 2013-01-03T10:28:29.668Z · score: 4 (4 votes) · LW · GW

The link for Feynman's Why Questions is broken.

Comment by cursed on LessWrong podcasts · 2012-12-04T08:31:57.876Z · score: 0 (0 votes) · LW · GW

Which text to speech program do you use?

Comment by cursed on The Best Textbooks on Every Subject · 2011-01-28T08:42:11.748Z · score: 4 (4 votes) · LW · GW

What are the prerequisites for reading this? What level of mathematics and background of classical physics?