Comment by dr_manhattan on The Main Sources of AI Risk? · 2019-03-22T12:42:08.765Z · score: 2 (1 votes) · LW · GW

Great idea; it would be awesome if someone added links to the best reference posts for each of these (an additional benefit: this would identify whitespace that needs to be filled).

Comment by dr_manhattan on [Question] Tracking accuracy of personal forecasts · 2019-03-21T16:24:15.295Z · score: 2 (1 votes) · LW · GW
"Will Adam be able to get back to cycling within a month [after a recent accident]?"

(Probably unnecessary word of caution) do not forecast your own behavior, due to the risk of reduced agency.

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-16T17:12:19.218Z · score: 2 (1 votes) · LW · GW

The confusion I'm trying to resolve: if you have priors, the strategy feels like saving the best for last (assuming no attrition in the pool, which is obviously false). The intuition here is that you're using some % of the pool purely for calibration, and you shouldn't be spending your best prospects on calibration.
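For concreteness, here's a minimal simulation sketch of that calibration intuition (the setup, names, and numbers are my own illustration, not from the post). It's the classic no-priors variant, and it shows the success rate peaking when roughly 1/e of the pool is burned on calibration:

```python
import random

def secretary_trial(n: int, calibration_frac: float) -> bool:
    """One run: observe the first calibration_frac of candidates without
    committing, then accept the first later candidate who beats them all.
    Returns True if we ended up with the overall best candidate."""
    candidates = list(range(n))
    random.shuffle(candidates)
    cutoff = int(n * calibration_frac)
    best_seen = max(candidates[:cutoff], default=-1)
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == n - 1  # committed: did we pick the true best?
    return candidates[-1] == n - 1  # forced to settle for the last candidate

def success_rate(n=100, frac=0.37, trials=20000):
    return sum(secretary_trial(n, frac) for _ in range(trials)) / trials

for frac in (0.10, 0.25, 0.37, 0.50):
    print(f"calibration fraction {frac:.2f}: P(best) ~ {success_rate(frac=frac):.3f}")
```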

Comment by dr_manhattan on The application of the secretary problem to real life dating · 2019-03-15T12:24:21.876Z · score: 2 (1 votes) · LW · GW

What if there are prior probabilities over "secretary ability", e.g. from reading their resumes or getting references? Has anyone worked out that variant?

Comment by dr_manhattan on Leaky Concepts · 2019-03-06T14:00:36.452Z · score: 2 (1 votes) · LW · GW
If you want to go mad, commit to drawing two lines the same length and don’t stop until you are dead from trying to line up atoms to be in the right places. There are quicker ways to go insane.

Related humor https://www.youtube.com/watch?v=w-wbWGwZ7_k

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-05T14:05:03.029Z · score: 2 (1 votes) · LW · GW

Hi Anna, since you've made the specific claim publicly (I assume intended as a warning), would you mind commenting on this:

https://www.lesswrong.com/posts/u8GMcpEN9Z6aQiCvp/rule-thinkers-in-not-out#X7MSEyNroxmsep4yD

Specifically: it's a given that there's some collateral damage when people are introduced to new ideas (or, more specifically, broken out of their world views). You seem to imply that with Michael it's more than that (I think Vaniver alludes to it with the "creepy" comment).

In other words: is Quirrell dangerous to some people and deserving of a warning label, or do you consider Michael Quirrell+ because of his outlook?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T21:22:34.099Z · score: 2 (1 votes) · LW · GW

Thanks (& Yoav for the clarification). So in your opinion, is MV dangerous to a class of people with certain kinds of beliefs, the way Harry was to Draco (where the risk was a pure necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-03T13:20:00.095Z · score: 2 (1 votes) · LW · GW

Ah, sorry. Would you mind elaborating on the Draco point in normie-speak, if you have the bandwidth?

Comment by dr_manhattan on Rule Thinkers In, Not Out · 2019-03-01T17:59:59.053Z · score: 4 (2 votes) · LW · GW

Good points (similar to Raemon's). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I find the "subtle skill of seeing the world with fresh eyes" potentially extremely valuable, which I suppose is why Anna kept encouraging people.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2019-01-08T15:07:01.531Z · score: 2 (1 votes) · LW · GW

Thanks for doing this! I couldn't figure out how.

Comment by dr_manhattan on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-26T21:03:32.378Z · score: 6 (3 votes) · LW · GW

My take+decision on the MIRI issue, in ROT13 to continue the pattern

Nabgure (zvabe) "Gbc Qbabe" bcvavba. Ba gur ZVEV vffhr: nterr jvgu lbhe pbapreaf, ohg pbagvahr qbangvat, sbe abj. V nffhzr gurl'er shyyl njner bs gur ceboyrz gurl'er cerfragvat gb gurve qbabef naq jvyy nqqerff vg va fbzr snfuvba. Vs gurl qb abg zvtug nqwhfg arkg lrne. Gur uneq guvat vf gung ZVEV fgvyy frrzf zbfg qvssreragvngrq va nccebnpu naq gnyrag bet gung pna hfr shaqf (if BcraNV naq QrrcZvaq naq jryy-shaqrq npnqrzvp vafgvghgvbaf)

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-21T02:22:02.189Z · score: 6 (3 votes) · LW · GW

Done. That thread is huge.

Comment by dr_manhattan on The Best Textbooks on Every Subject · 2018-11-21T02:21:22.407Z · score: 11 (2 votes) · LW · GW

Recommending Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best all-in-one intro to *applied* Bayesian statistics.

The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert for having the most holistic coverage.

I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

P.S. He has a playlist of complementary videos to go along with the book.

Comment by dr_manhattan on New safety research agenda: scalable agent alignment via reward modeling · 2018-11-20T21:20:43.013Z · score: 6 (3 votes) · LW · GW

They mention and link to iterated amplification in the Medium article:

Scaling up
In the long run, we would like to scale reward modeling to domains that are too complex for humans to evaluate directly. To do this, we need to boost the user’s ability to evaluate outcomes. We discuss how reward modeling can be applied recursively: we can use reward modeling to train agents to assist the user in the evaluation process itself. If evaluation is easier than behavior, this could allow us to bootstrap from simpler tasks to increasingly general and more complex tasks. This can be thought of as an instance of iterated amplification.
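To make the reward-modeling step concrete, here's a toy numpy sketch of the basic building block being recursed on: fitting a reward function from pairwise preferences via a Bradley-Terry model. The linear setup and all the data are my own illustrative assumptions, not DeepMind's code:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5                                  # feature dimension (arbitrary)
true_w = rng.normal(size=d)            # hidden reward the "user" judges with
w = np.zeros(d)                        # learned reward-model parameters
lr = 0.1

for step in range(2000):
    a, b = rng.normal(size=d), rng.normal(size=d)    # two candidate outcomes
    label = 1.0 if true_w @ a > true_w @ b else 0.0  # user's preference
    # Bradley-Terry: P(a preferred over b) = sigmoid(r(a) - r(b))
    p = 1.0 / (1.0 + np.exp(-(w @ a - w @ b)))
    w -= lr * (p - label) * (a - b)                  # logistic-loss gradient step

cos = (w @ true_w) / (np.linalg.norm(w) * np.linalg.norm(true_w))
print(f"cosine(learned reward, true reward) = {cos:.3f}")
```

The recursive part of the proposal would then use agents trained against such a model to help the user produce the preference labels for harder tasks.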
Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-09T16:47:07.963Z · score: 4 (2 votes) · LW · GW

Thanks, added a comment

Comment by dr_manhattan on Update the best textbooks on every subject list · 2018-11-08T21:24:08.844Z · score: 12 (3 votes) · LW · GW

Ben Lambert's "A Student's Guide to Bayesian Statistics" as the best intro to *applied* Bayesian stats. The book starts with very few prerequisites, explains the math well while keeping it to the minimum necessary for intuition (and has good illustrations), and goes all the way to building models in Stan. (Other good books are McElreath's Statistical Rethinking, Kruschke's Doing Bayesian Data Analysis, and Gelman's more math-heavy Bayesian Data Analysis.) I recommend Lambert for having the most holistic coverage.

P.S. He has a playlist of complementary videos to go along with the book.

ETA: I have read McElreath's Statistical Rethinking and Kruschke's Doing Bayesian Data Analysis, and skimmed Gelman's Bayesian Data Analysis. I recommend Lambert if you only read one book, or as your first book in the area.

Comment by dr_manhattan on Starting Meditation · 2018-11-08T21:11:05.825Z · score: 5 (2 votes) · LW · GW

Thanks!

Comment by dr_manhattan on Bayes Questions · 2018-11-08T13:50:53.710Z · score: 3 (2 votes) · LW · GW

The technologies I'm suggesting are just implementations of Bayes, which is what you're trying to do. There's some theory as to *how* they do inference (special versions of MCMC, basically), but this is an "implementation detail" to a degree. Here are some references to get you started, though they're mostly Stan-centered: http://mc-stan.org/users/documentation/external.html . If you want a better overall picture of the theory, I really like this book: https://ben-lambert.com/a-students-guide-to-bayesian-statistics/ . It takes you from the basics all the way to Stan usage.
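To give a taste of what "just implementations of Bayes" looks like in practice, here's a minimal PyMC sketch; the data and priors are made up for illustration, and the Stan version would be structurally similar:

```python
import numpy as np
import arviz as az
import pymc as pm  # PyMC >= 4; older installs use `import pymc3 as pm`

data = np.array([4.8, 5.2, 5.9, 4.4, 5.1, 6.3, 5.0])  # made-up measurements

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)      # weakly informative priors
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
    # NUTS, one of those "special versions of MCMC", runs under the hood
    idata = pm.sample(1000, tune=1000)

print(az.summary(idata, var_names=["mu", "sigma"]))
```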

Comment by dr_manhattan on Bayes Questions · 2018-11-07T17:38:22.023Z · score: 3 (2 votes) · LW · GW

Can you program? In that case I highly recommend using PyMC or Stan for this kind of work. There's a pretty rich literature and culture of how to iteratively improve these types of models, and some tool support around these specific toolkits.

Comment by dr_manhattan on Starting Meditation · 2018-10-26T00:09:14.983Z · score: 2 (1 votes) · LW · GW

I should reread the book, but the delineation of precise stages, with precise descriptions of the experiences at each one, seemed unrealistic to me. Would love to hear your take.

Comment by dr_manhattan on Starting Meditation · 2018-10-25T18:42:59.609Z · score: 2 (1 votes) · LW · GW

Did you do a review of TMI somewhere? While I liked the book, the author seems overconfident (>>> my comfort level) about "you will have this kind of experience at this stage", backed largely by personal experience.

Comment by dr_manhattan on Fasting Mimicking Diet Looks Pretty Good · 2018-10-04T22:07:37.853Z · score: 4 (2 votes) · LW · GW

Thanks for writing this up - Vassar mentioned you've looked into this; I was going to ask you!

One thing I was a bit suspicious of in the book is his strong no-skipping-breakfast recommendation; apparently it's based on a study, but it seemed a lot less supported than the rest.

Comment by dr_manhattan on Reflections on Being 30 · 2018-10-03T13:58:04.862Z · score: 4 (2 votes) · LW · GW

Great essay Sarah! Mentally added "the cheapest thing to give up is being a dumbass" to my quote file. And Happy Birthday!

Comment by dr_manhattan on Direct Primary Care · 2018-09-25T18:28:23.409Z · score: 2 (1 votes) · LW · GW

It seems that if this works on a small scale it should scale up. The one counter-force to keep in mind is the rent-seekers in this space; they can afford to ignore a small quirky operation, but fighting this at scale is existential for them. You might actually need Amazon scale to fight them.

For reference on political forces around this:

Obama said, continuing on the healthcare theme. “Everybody who supports single-payer healthcare says, ‘Look at all this money we would be saving from insurance and paperwork.’ That represents 1 million, 2 million, 3 million jobs of people who are working at Blue Cross Blue Shield or Kaiser or other places. What are we doing with them? Where are we employing them?”

https://www.thenation.com/article/mr-obama-goes-washington/

Comment by dr_manhattan on Open AI co-founder on AGI · 2018-09-16T19:59:57.394Z · score: 1 (4 votes) · LW · GW
not at the same level of rigour as MIRI

I would agree, with a qualification. Given the OAI co-founders' personal risk attitudes, I would say that as an organization they care less about it. As far as the actual quality of research goes, they have some pretty respectable people like Paul Christiano and Dario Amodei ("respectable" here is a second-hand judgment; I don't consider myself qualified for a first-hand one).

Comment by dr_manhattan on The Scent of Bad Psychology · 2018-09-13T15:08:58.556Z · score: 10 (2 votes) · LW · GW

Why not post the full article, especially since this is your own blog?

Comment by dr_manhattan on I am the very model of a self-recursive modeler · 2018-09-07T16:55:29.673Z · score: 4 (2 votes) · LW · GW
It doesn't help that with an IQ of 134 I sometimes feel as dumb as a horse when browsing LW.

We should start a club :). Something like a Jerry daycare.

Comment by dr_manhattan on Would you benefit from audio versions of posts? · 2018-07-28T19:03:35.585Z · score: 3 (2 votes) · LW · GW

For me this does compare favorably to the TTS already built into the getpocket.com app, which additionally has a) follow-along text and b) speed-control features.

Comment by dr_manhattan on Would you benefit from audio versions of posts? · 2018-07-26T13:48:16.225Z · score: 4 (2 votes) · LW · GW

I occasionally use TTS in the Pocket app (https://getpocket.com), syncing the posts via their browser plugin. It's quite decent, and I expect this technology to improve quite a bit in the coming years.

Comment by dr_manhattan on Paper Trauma · 2018-06-05T18:11:27.367Z · score: 11 (2 votes) · LW · GW

There are now tablet apps (e.g. GoodNotes or OneNote on the iPad) that give you effectively huge virtual paper (via the zoom feature) and a wide variety of virtual writing tools. They also allow select and copy+paste (instead of re-writing something repetitive), and they solve the preservation-of-notes problem some people care about. I highly recommend trying them.

Comment by dr_manhattan on The Intelligent Social Web · 2018-05-22T13:09:38.949Z · score: 0 (2 votes) · LW · GW

This really made me think of Gandalf as a superb conductor/chef of the social web, working from very raw ingredients (Bilbo, Frodo, Aragorn).

Comment by dr_manhattan on Announcement: AI alignment prize round 2 winners and next round · 2018-04-16T16:47:25.088Z · score: 9 (2 votes) · LW · GW

I wonder if it's possible to help with this in a tax-advantaged way. Maybe set up a donation to MIRI earmarked for this kind of thing.

Using rationality to debug Machine Learning

2018-04-10T20:03:44.357Z · score: 55 (12 votes)
Comment by dr_manhattan on Algorithms as Case Studies in Rationality · 2018-03-28T16:18:47.691Z · score: 3 (1 votes) · LW · GW

Recommending http://algorithmstoliveby.com/ for the same reasons

Comment by dr_manhattan on The abruptness of nuclear weapons · 2018-03-06T15:39:10.291Z · score: 7 (3 votes) · LW · GW

I think a highly relevant detail here is that the biggest bottleneck in the development of nuclear weapons is the refinement of fissionable material, which is a tremendously intensive industrial process (and still remains the major deterrent to obtaining nukes). Without it, development would have been a lot more abrupt (and likely successful on the German side).

Comment by dr_manhattan on A Proper Scoring Rule for Confidence Intervals · 2018-02-14T18:42:56.766Z · score: 3 (1 votes) · LW · GW

Would you mind spelling out the integral part?

Comment by dr_manhattan on The Utility of Human Atoms for the Paperclip Maximizer · 2018-02-02T14:05:49.733Z · score: 9 (3 votes) · LW · GW
Even Friendly AI may deconstruct humans for their atoms in the AI’s early stages, and as such sacrificy will translate in the higher total number of sentient beings in the universe at the end.

“You keep using that word, I do not think it means what you think it means.”

Comment by dr_manhattan on Hammers and Nails · 2018-01-24T21:59:46.745Z · score: 13 (3 votes) · LW · GW

Feynman also explicitly spoke about hammer-mode: "[so] I had got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me". There are also some excerpts here: https://www.farnamstreetblog.com/2016/07/mental-tools-richard-feynman/

Comment by dr_manhattan on The Right to be Wrong · 2017-11-30T14:46:03.105Z · score: 19 (5 votes) · LW · GW

The "Space Mom" part seems like what exploration/exploitation meta-algorithm feels like from the inside. To do quality exploration you need to shut down the inner voice of exploitation.

Also relevant: https://www.physics.ohio-state.edu/~kilcup/262/feynman.html
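A toy epsilon-greedy bandit (my own illustration) makes that split explicit: on exploration steps the algorithm deliberately ignores its current best estimate, i.e. it shuts down the exploitation voice for a turn:

```python
import random

def epsilon_greedy(true_means, steps=10000, eps=0.1):
    """Multi-armed bandit: with probability eps explore a random arm,
    otherwise exploit the arm with the best estimated mean reward."""
    n = [0] * len(true_means)       # pull counts per arm
    est = [0.0] * len(true_means)   # running mean reward per arm
    total = 0.0
    for _ in range(steps):
        if random.random() < eps:   # explore: silence the greedy inner voice
            arm = random.randrange(len(true_means))
        else:                       # exploit: act on current beliefs
            arm = max(range(len(true_means)), key=lambda a: est[a])
        reward = random.gauss(true_means[arm], 1.0)
        n[arm] += 1
        est[arm] += (reward - est[arm]) / n[arm]  # incremental mean update
        total += reward
    return total / steps

print(f"average reward: {epsilon_greedy([0.2, 0.5, 0.9]):.3f}")
```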

Comment by dr_manhattan on Announcing the AI Alignment Prize · 2017-11-03T15:51:10.893Z · score: 9 (3 votes) · LW · GW

Zvi/Vladimir, what's your role in this - are you the judges?

Comment by dr_manhattan on There's No Fire Alarm for Artificial General Intelligence · 2017-10-17T16:48:28.723Z · score: 22 (6 votes) · LW · GW

Hi Sarah, I'm not sure why you felt compelled to answer: nothing in your reply offers a logical argument against the Fire Alarm point. The only thing I can think of is that Eliezer vaguely implies a shorter timeline and you vaguely imply a longer (or at least more diffuse) one. I didn't get the feeling EY implied AGI is possible by scaling the current state of the art. The argument about peak knowledge was there to explain the Fire Alarm mechanics, rather than to imply that the top people at Google have "it" already.

As far as your intuitions go, I feel similarly about the cogsci stuff (from a lesser base of knowledge), but it should be noted that there's some exchange of ideas between the graphical-models people like Josh and the NN people. Also, it's possible that NNs can be constructed to learn graphical models. (As an aside, it would be interesting to ask Josh what his distribution is. Josh begat Noah Goodman, Noah begat Andreas Stuhlmüller, who is quite reachable and in the LW network.)

Comment by dr_manhattan on The Just World Hypothesis · 2017-10-09T17:09:15.350Z · score: 10 (3 votes) · LW · GW

For context, this is a response to this year's Edge essays: https://www.edge.org/responses/what-scientific-term-or%C2%A0concept-ought-to-be-more-widely-known . Not sure if it was posted by Michael (which is Good News!) or on his behalf.

Comment by dr_manhattan on Slack · 2017-10-01T00:39:42.712Z · score: 5 (2 votes) · LW · GW

Agreed, this could be a trap; e.g. people feel GitHub is a required resume item now. I think the strong indicator variable here is whether the thing you're doing is the generally socially acceptable thing (possibly a trap) vs. something differentiated.

I personally got into the ML wave before it was cool and before everyone in the world wanted to be a Data Scientist. Back then it was called "Data Mining", and I decided to do it because I saw that being a SWE was intellectually a dead end and I wanted to do more mathy things in my job. The generalizable part of this is following your curiosity towards things that make sense but are not yet another step on the treadmill.

Comment by dr_manhattan on Why I am not a Quaker (even though it often seems as though I should be) · 2017-09-30T23:34:49.983Z · score: 9 (3 votes) · LW · GW

Orthodox Jews have been captured by internal adversarial forces: https://blogs.timesofisrael.com/i-can-do-jewish-for-40000/

Comment by dr_manhattan on Slack · 2017-09-30T22:56:00.539Z · score: 8 (4 votes) · LW · GW

Related (I think): if you're early in your career, use some of the government-mandated slack (aka weekends) to differentiate yourself and build up human capital. If you're "a programmer" and are perceived as a generic resource, the system will often drive you to compete on a few visible dimensions, like hours. If you have unique and scarce skills, you have the bargaining power to get more Slack, and your employer might even pay for you to gain more human capital. (I still reinvest part of my slack time into my career, though I a) like what I do and like learning more of it, and b) don't really have to.)

Comment by dr_manhattan on Against Individual IQ Worries · 2017-09-30T18:21:10.818Z · score: 6 (4 votes) · LW · GW

There are different kinds of "IQ worries", and this post addresses "IQ anxiety" to some degree. The kind I'm still concerned about is related, but not the same. Basically, the question is one of calibrated resource allocation:

Let's say, for argument's sake, that I have an IQ of 110. This suggests that some areas will be much harder for me to study (advanced statistics), and some will probably be completely inaccessible (string theory). It would be great if reality provided a guide for when to "try harder" vs. when to "concentrate your energy on something else". IQ seems like that kind of guide, but the Feynman example (and, possibly, Darwin) really throws me off. Would love to hear people's thoughts on this.

Comment by Dr_Manhattan on [deleted post] 2017-09-27T18:12:43.791Z

Duly updated on the worthwhileness of trying harder here :). Interesting story from Andrew (I know him). I already do what you do with writing for reading; doing it for writing is a great tip, thanks.

Another thing that might be interesting to try is creating associations to "states", memory-palace style. I'm an audiobook fiend, and I've noticed that places I walk in Manhattan often remind me of the exact place and feeling of the book I listened to there. Maybe this can be leveraged to proactively bookmark states.

Comment by Dr_Manhattan on [deleted post] 2017-09-27T17:03:00.498Z

Thanks for the post. I've already noticed that certain decisions, while being rational conclusions of impartial analysis, are easier to implement in a certain emotional state, and that having made the decision, I've had trouble acting on it when the state was absent. This post suggests that the state can be saved and re-loaded, which is awesome if it works.

Comment by dr_manhattan on Heuristics for textbook selection · 2017-09-07T18:34:16.609Z · score: 1 (1 votes) · LW · GW

Sorry, I did not quite mean that. I meant it would be nice for everyone if there were a service to do this :)

Comment by dr_manhattan on Heuristics for textbook selection · 2017-09-06T17:01:23.002Z · score: 0 (0 votes) · LW · GW

I agree: it might be locally optimal, but globally it's very suboptimal.

Comment by dr_manhattan on Heuristics for textbook selection · 2017-09-06T13:33:25.632Z · score: 1 (1 votes) · LW · GW

2.

This is literally doing PageRank, by hand, on books. There's got to be a better way (sketched below).
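A minimal power-iteration PageRank over a graph of recommendations between books; the graph here is invented purely for illustration:

```python
import numpy as np

# Hypothetical directed graph: edge u -> v means "readers of u recommend v"
books = ["Lambert", "McElreath", "Kruschke", "Gelman"]
links = {0: [1, 3], 1: [0, 2], 2: [1], 3: [0, 1]}

n = len(books)
M = np.zeros((n, n))                 # column-stochastic transition matrix
for u, outs in links.items():
    for v in outs:
        M[v, u] = 1.0 / len(outs)

damping = 0.85                       # standard PageRank damping factor
rank = np.full(n, 1.0 / n)
for _ in range(100):                 # power iteration to the stationary ranking
    rank = (1 - damping) / n + damping * (M @ rank)

for title, score in sorted(zip(books, rank), key=lambda t: -t[1]):
    print(f"{title}: {score:.3f}")
```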

Bayesian statistics as epistemic advantage

2017-07-25T17:07:49.660Z · score: 0 (0 votes)

MILA gets a grant for AI safety research

2017-07-21T15:34:55.493Z · score: 9 (9 votes)

Learning from Human Preferences - from OpenAI (including Christiano, Amodei & Legg)

2017-06-13T15:52:00.294Z · score: 9 (9 votes)

NIPS 2015

2015-12-07T20:31:11.779Z · score: 7 (8 votes)

Velocity of behavioral evolution

2014-12-19T17:34:36.217Z · score: 3 (4 votes)

What Peter Thiel thinks about AI risk

2014-12-11T21:22:27.167Z · score: 12 (13 votes)

Cognitive distortions of founders

2014-12-11T03:19:58.802Z · score: 3 (6 votes)

FAI PR tracking well [link]

2014-08-15T21:23:48.622Z · score: 7 (8 votes)

Hawking/Russell/Tegmark/Wilczek on dangers of Superintelligent Machines [link]

2014-04-21T16:55:56.240Z · score: 18 (21 votes)

Gunshot victims to be suspended between life and death [link]

2014-03-27T16:33:49.413Z · score: 24 (27 votes)

Huffington Post article on DeepMind-requested AI ethics board, links back to LW [link]

2014-01-30T01:20:10.579Z · score: 13 (26 votes)

H+ review of James Miller's Singularity Rising [link]

2014-01-17T02:16:08.126Z · score: -4 (9 votes)

PSA for LW futurists/academics

2013-10-31T15:54:39.861Z · score: 7 (12 votes)

Nudging around the world - [link]

2013-09-05T15:52:05.786Z · score: 8 (10 votes)

Course - Saving Millions of lives at a time [link]

2013-08-12T16:30:29.192Z · score: 4 (7 votes)

Proposal: periodic repost of the Best Learning resources

2013-08-10T16:21:19.138Z · score: 9 (10 votes)

RIP Doug Engelbart

2013-07-03T19:19:59.616Z · score: 11 (16 votes)

NES-game playing AI [video link and AI-boxing-related comment]

2013-04-12T13:11:36.365Z · score: 30 (35 votes)

Eliezer's YU lecture on FAI and MOR [link]

2013-03-07T16:09:54.710Z · score: 2 (9 votes)

Interesting discussion of concentration and productivity [link]

2013-02-06T13:58:35.082Z · score: 8 (13 votes)

CFAR’s Inaugural Fundraising Drive

2012-12-18T01:19:00.272Z · score: 10 (13 votes)

Popular media coverage of Singularity Summit -the Verge [link]

2012-10-23T03:19:03.676Z · score: 2 (7 votes)

Judea Pearl's Turing Award Lecture video now online

2012-09-10T15:18:43.251Z · score: 6 (9 votes)

Luke's AMA gets a plug @ Wired [link]

2012-08-26T04:15:49.745Z · score: 6 (13 votes)

Opinions on Boltzmann brain arguments constraining modern multiverse theories

2012-08-10T18:05:52.417Z · score: 1 (6 votes)

Russian plan for immortality [link]

2012-08-01T20:49:41.319Z · score: 5 (10 votes)

SIAI May report

2012-06-16T14:01:37.488Z · score: 5 (10 votes)

Arrison, Vassar and de Grey speaking at Peter Thiel's startup class [link]

2012-06-16T13:58:35.578Z · score: 10 (13 votes)

Sebastian Thrun AMA on reddit [link]

2012-06-14T02:53:15.258Z · score: 4 (7 votes)

MIT is working on industrial robots that (attempt to) learn what humans want from them [link]

2012-06-13T15:42:58.015Z · score: 3 (8 votes)

Peter Thiel's AGI discussion in his startups class @ Stanford [link]

2012-06-07T12:27:48.075Z · score: 13 (16 votes)

Audio interview with Judea Pearl [link]

2012-05-10T12:47:07.615Z · score: 7 (8 votes)

"Big Surprise" - the famous atheists are actually Bayesians [link]

2012-04-08T16:11:30.166Z · score: -9 (20 votes)

Singularity-oriented Sci-Fi collection to be published [link]

2012-02-29T16:39:40.714Z · score: 3 (6 votes)

Model Thinking class [link]

2012-02-20T18:27:51.909Z · score: 6 (7 votes)

Top 5 regrets of the dying [link]

2012-02-04T17:56:22.156Z · score: 3 (10 votes)

Bill Gates asks HS students "What are the most important choices the world faces?"

2012-01-11T20:34:11.272Z · score: 0 (11 votes)

Nick Bostrom TED talk on world's biggest problems

2012-01-06T18:52:09.991Z · score: 17 (20 votes)

Roger Williams (Author of Metamorphosis of Prime Intellect) on Singularity

2012-01-06T05:09:36.511Z · score: 5 (14 votes)

Future of Moral Machines - New York Times [link]

2011-12-26T14:44:01.763Z · score: 0 (5 votes)

SingInst bloomberg coverage [link]

2011-12-19T19:31:41.651Z · score: 5 (8 votes)

For those in the Stanford AI-class

2011-12-10T22:54:38.294Z · score: 8 (11 votes)

Good interview with Kahneman [link]

2011-12-02T15:04:24.480Z · score: 2 (3 votes)

FAI-relevant XKCD

2011-11-22T13:28:44.070Z · score: -4 (13 votes)

Three more classes coming from Stanford of interest here

2011-11-17T17:57:42.468Z · score: 15 (16 votes)

Human augmented with CBT algorithms games Jeopardy [link]

2011-11-17T03:22:25.901Z · score: 10 (11 votes)

Biointelligence Explosion

2011-11-07T14:05:08.985Z · score: 2 (15 votes)

Guardian coverage of the Summit [link]

2011-11-03T03:17:13.585Z · score: 3 (6 votes)

Singularity Summit - some videos

2011-10-26T14:57:57.244Z · score: 1 (2 votes)