Less Wrong link exchange
post by FiftyTwo · 2011-11-02T10:24:37.111Z · LW · GW · Legacy · 15 comments
We've had similar threads before, but not for a while so I thought I'd make one.
Basic rules, share links that are relevant to Less Wrong areas of interest, but aren't worthy of their own post. Please include a brief description with the link. (My own contributions are below.)
comment by Alejandro1 · 2011-11-02T15:18:56.695Z · LW(p) · GW(p)
Nate Silver on Herman Cain and the hubris of experts. Not really about politics in the mind-killing sense, but about uncertainty and overconfidence in political predictions. Both peter_hurford and I quoted from it in the monthly quotes thread.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-11-03T03:24:33.862Z · LW(p) · GW(p)
It's a solid article just from its political science analysis; I obviously also recommend it.
comment by albert · 2011-11-05T03:29:37.692Z · LW(p) · GW(p)
So, my first post on LW!
New TED Talks video about the role of Bayesian inference in controlling human movement: http://www.ted.com/talks/daniel_wolpert_the_real_reason_for_brains.html
Replies from: KPier
comment by betterthanwell · 2011-11-03T11:54:46.850Z · LW(p) · GW(p)
BBC News: Signs of ageing halted in the lab
The onset of wrinkles, muscle wasting and cataracts has been delayed and even eliminated in mice, say researchers in the US. The study, published in Nature, focused on what are known as "senescent cells". They stop dividing into new cells and have an important role in preventing tumours from progressing.
comment by khafra · 2011-11-02T13:47:58.568Z · LW(p) · GW(p)
http://ai-class.syavash.com/naivebayes Someone in Norvig and Thrun's AI class made a Bayesian classifier with Laplace smoothing. It shows you the complete equations generated and lets you set the text to classify, the text in each training set, and the smoothing parameter, so it's a great tool for direct instruction.
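For anyone curious what the linked tool is computing under the hood, here's a minimal sketch of a naive Bayes text classifier with Laplace smoothing. This is my own illustration, not the linked site's code; the function names and the toy "spam"/"ham" training sets are made up, and it assumes a uniform class prior for simplicity:

```python
import math
from collections import Counter


def train(docs):
    """docs: dict mapping class label -> list of documents (strings).

    Returns per-class word counts and the shared vocabulary."""
    counts = {c: Counter(w for d in ds for w in d.split())
              for c, ds in docs.items()}
    vocab = {w for c in counts for w in counts[c]}
    return counts, vocab


def log_score(text, cls, counts, vocab, k=1.0):
    """Smoothed log-likelihood of `text` under class `cls`.

    Laplace smoothing adds pseudo-count k to every word in the
    vocabulary, so unseen words get nonzero probability."""
    total = sum(counts[cls].values())
    v = len(vocab)
    return sum(math.log((counts[cls][w] + k) / (total + k * v))
               for w in text.split())


def classify(text, counts, vocab, k=1.0):
    """Pick the class with the highest smoothed log-likelihood."""
    return max(counts, key=lambda c: log_score(text, c, counts, vocab, k))


docs = {"spam": ["buy cheap pills", "cheap pills now"],
        "ham": ["meeting at noon", "lunch at noon"]}
counts, vocab = train(docs)
print(classify("cheap pills", counts, vocab))   # -> spam
print(classify("meeting at noon", counts, vocab))  # -> ham
```

The linked page shows exactly these smoothed fractions as explicit equations, which is what makes it useful for teaching.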
comment by FiftyTwo · 2011-11-02T10:27:34.812Z · LW(p) · GW(p)
Cracked (humour website) on logical fallacies and cognitive biases.
Subheadings:
We're Not Programmed to Seek "Truth," We're Programmed to "Win"
Our Brains Don't Understand Probability
We Think Everyone's Out to Get Us (Are your enemies innately evil?)
We're Hard-Wired to Have a Double Standard (Fundamental attribution error)
Facts Don't Change Our Minds (We change our minds less often than we think)
Most of these should be familiar, but it's a good example of presenting these ideas in a readable style. It might be a useful resource to point people to who would be put off by the style here.
Replies from: falenas108
↑ comment by falenas108 · 2011-11-02T13:49:53.256Z · LW(p) · GW(p)
This actually got its own post a few days ago.
comment by FiftyTwo · 2011-11-02T10:26:28.669Z · LW(p) · GW(p)
The Guardian (prominent UK newspaper) on friendly (or otherwise) artificial general intelligence.
Interesting because it's a 'popular culture' look at AI basics we might consider fairly elementary. It might be a bit sensationalist; the tagline is "AI scientists want to make gods. Should that worry us? - Singularitarians believe artificial intelligence will be humanity's saviour. But they also assume AI entities will be benevolent"
Replies from: None, siodine
↑ comment by [deleted] · 2011-11-02T11:05:17.627Z · LW(p) · GW(p)
What a disheartening article. The whole thing can be summed up with a quote from Three Major Singularity Schools:
Hey, man, have you heard? There’s this bunch of, like, crazy nerds out there, who think that some kind of unspecified huge nerd thing is going to happen. What a bunch of wackos! It’s geek religion, man.
Reading this article and the comments section really drove home how important rationality skills are when thinking about the future.
Replies from: FiftyTwo
↑ comment by FiftyTwo · 2011-11-02T11:13:38.038Z · LW(p) · GW(p)
Agreed. I hope you (and other LW people) contribute to the discussion to try to correct some of these misconceptions.
It is an important reminder of how strange and scary these ideas seem at first glance and the inferential distances involved.
comment by albert · 2011-11-27T04:01:59.603Z · LW(p) · GW(p)
A nice AI-themed movie for those who've never seen it: http://www.youtube.com/watch?v=vn0cz7vYOcc (all 10 parts are on YouTube).