Casey Anthony - analyzing evidence using Bayes 2011-07-07T17:19:11.216Z


Comment by zachary_kurtz on LessWrong search traffic doubles · 2011-03-29T16:36:24.030Z · LW · GW

And is result #6

Comment by zachary_kurtz on death-is-bad-ism going a little bit more mainstream? · 2011-03-24T15:50:22.164Z · LW · GW

Agree with the absurdity bias. For most (even smart) people their exposure to cryonics is things like Woody Allen's Sleeper and Futurama. I almost can't blame them for only seeing the absurd... I'm still trying to come around to it myself.

Comment by zachary_kurtz on Bayesian Methods Reading List · 2011-03-24T15:43:23.008Z · LW · GW

Awesome, thanks!

Comment by zachary_kurtz on Bayesian Methods Reading List · 2011-03-23T21:06:39.239Z · LW · GW

Not completely defined at the moment since I'm a 1st year PhD student at NYU, and currently doing rotations. It'll be something like comparative genomics/regulatory networks to study evolution of bacteria or perhaps communities of bacteria.

Comment by zachary_kurtz on Trip from Ottawa, Canada to NYC on weekend of April 2 · 2011-03-23T18:24:35.980Z · LW · GW

You'll get a better response from the NY group (we don't all check LW and the discussion board regularly) by posting to the Google group/listserv:

Comment by zachary_kurtz on Bayesian Methods Reading List · 2011-03-23T18:20:19.647Z · LW · GW

Thanks... this should come in handy in my computational research in systems biology

Comment by zachary_kurtz on Folk theories can be useful even when they're entirely wrong · 2011-03-23T18:18:08.048Z · LW · GW

A broken clock is right twice a day. Even if a folk theory is incidentally correct, that doesn't make folk theories valuable on the margin - unless, of course, people who hold folk theories do consistently better than rationalists, but then I'd question the rationalist label.

Comment by zachary_kurtz on Rationality Boot Camp · 2011-03-22T18:25:29.314Z · LW · GW

I wish I could take that much time to do this

Comment by zachary_kurtz on Rationality Outreach: A Parable · 2011-03-17T19:26:49.731Z · LW · GW

Is that because, if you treat the probabilities of (God or not-God) as maximum entropy without prior information, you'd get 50/50?
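
A minimal sketch of that reading, assuming the standard Shannon-entropy definition (the candidate distributions below are arbitrary illustration values):

```python
import math

def entropy(dist):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# With two mutually exclusive outcomes (God, not-God) and no constraints
# from prior information, entropy is maximized by the uniform distribution,
# which is where the 50/50 comes from.
candidates = [(p, 1 - p) for p in (0.1, 0.25, 0.5, 0.75, 0.9)]
max_ent = max(candidates, key=entropy)
print(max_ent)  # (0.5, 0.5)
```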

Comment by zachary_kurtz on Rationality Outreach: A Parable · 2011-03-17T19:24:13.827Z · LW · GW

Good on them! In my experience, whenever I sneak Bayesian updating into a conversation, it's well received by skeptics. When I try to introduce Bayes more formally, or start supporting anti-mainstream ideas such as cryonics, AI, etc., there's much more resistance.

Comment by zachary_kurtz on Rationality Outreach: A Parable · 2011-03-17T15:52:30.364Z · LW · GW

I know a lot of skeptics like this and I try to share with them EY's post on "undiscriminating skepticism." This post 'saved' me from a similar fate when I found myself going down this path.

Comment by zachary_kurtz on Rationality Outreach: A Parable · 2011-03-17T15:40:58.597Z · LW · GW

Again, I like your characters but I think you're missing one. The person who thinks that belief in [a] God is the result of rational and reasonable thought.

Comment by zachary_kurtz on Eliezer Yudkowsky and Michael Vassar at NYU, Thursday March 3rd · 2011-03-03T02:17:11.773Z · LW · GW

I'll be there

Comment by zachary_kurtz on Research methods · 2011-02-23T03:10:10.855Z · LW · GW

could you write the program in your spare time and run the program while you're there, while making it seem like you're working?

Comment by zachary_kurtz on IBM's "Watson" program to compete against "Jeopardy" champions tonight · 2011-02-15T20:24:36.419Z · LW · GW

this roughly maps to the issues I noticed. Looking forward to the next 2 days of this.

Comment by zachary_kurtz on Applied Rationality: Group Problem Solving Session · 2011-02-15T20:21:47.013Z · LW · GW

the archive password is listed before each external link in every example I've seen. Usually the password is either or

Comment by zachary_kurtz on Applied Rationality: Group Problem Solving Session · 2011-02-09T02:28:57.180Z · LW · GW

instead of buying textbooks check out

Largest collection of [illegal, mostly] free textbooks I've seen on the net.

Comment by zachary_kurtz on Steve Jobs' medical leave, riches and longevity · 2011-01-24T00:59:10.433Z · LW · GW

My woo-dar is tingling a bit regarding this proposal. Can you refer me to this research?

Comment by zachary_kurtz on Steve Jobs' medical leave, riches and longevity · 2011-01-21T18:08:31.596Z · LW · GW

From the perspective of a biomedical scientist-in-training here: I think you may be underestimating the role that other types of biology research, not specifically labeled "longevity," will play in attaining 'immortality.'

For example, it may be necessary to cure cancer before we can safely switch off the cellular aging process. The fact that cancer has such an impact on society makes it one of the best-funded areas of research, but I don't think you can accurately say that this comes at the opportunity cost of longevity knowledge, because they are really complements. Most of our knowledge of human cell biology comes from studying cell lines isolated from cancers.

Meanwhile, specialized research increases our general knowledge that, purposeful or not, is leading to longevity if not immortality outright.

Comment by zachary_kurtz on Link: "Top 10 Mistakes in Behavior Change" · 2011-01-19T16:25:44.705Z · LW · GW

every so often I'll decide to stop biting my nails, and I can devote lots of mental energy to stopping myself whenever I see it starting up again. On a really stressful day, though, I can't devote that energy and I wind up chewing them off again. Usually I stay on the wagon for a few weeks before I can re-dedicate myself to the mental effort of not biting my nails. On the whole, stopping isn't all that difficult; the problem is being consistent about it.

It's difficult to start doing things when the path of least resistance still takes a lot of mental energy. Checking lesswrong is easy, reading science papers for class is hard. Having a goal (not failing class the next day) is a big help though.

Comment by zachary_kurtz on Statistical Prediction Rules Out-Perform Expert Human Judgments · 2011-01-18T15:37:20.841Z · LW · GW

Does SPR beat prediction markets?

Comment by zachary_kurtz on I want to learn economics · 2011-01-18T02:15:06.731Z · LW · GW

I suppose that's true, though it shouldn't be.

Comment by zachary_kurtz on I want to learn economics · 2011-01-13T23:25:08.594Z · LW · GW

Starting with behavioral economics could be a good place, since the applications to daily life are obvious.

some possible books include:

Predictably Irrational - Dan Ariely
Why Smart People Make Big Money Mistakes - Gary Belsky
Nudge - Richard Thaler

Comment by zachary_kurtz on Link: NY Times covers Bayesian statistics · 2011-01-12T21:56:18.322Z · LW · GW

Success story: I posted this link on my Facebook and was able to refer one friend to EY's "Intuitive Intro to Bayes." He's taking a grad course this semester on applying Bayesian stats to forensic psychology, and I thought the Intuitive Intro would probably prepare him well for the course.

Thanks for sharing.

Comment by zachary_kurtz on NYC Rationalist Diplomacy Post-Game Discussion · 2011-01-12T21:48:15.964Z · LW · GW

England reporting in. I mostly agree with Will/Russia/Cosmos about the game. While I don't think I was as busy as him, my newbishness with the rules (especially the convoy rules) really held me back. I got lucky that I was England, isolated enough that, at the beginning, nobody could take advantage of my blunders.

My favorite part was the diplomacy under anonymity; coordination is a real problem when you can only use in-game incentives.

My chat logs are posted, as well as the first-turn game journal, which I couldn't maintain.

Special thanks to Zvi for the running in-game analysis and for staying as impartial as possible.

Comment by zachary_kurtz on Link: why training a.i. isn’t like training your pets · 2011-01-12T21:38:13.807Z · LW · GW

It's not clear to me, though this explanation seems plausible as well. Either way, it's not good.

Comment by zachary_kurtz on Link: why training a.i. isn’t like training your pets · 2011-01-12T19:16:32.921Z · LW · GW

"imagined by the author as a combination of whatever a popular science site reported"

I've heard this argument from non-singularitarians from time to time. It bothers me because of conservation of expected evidence. What are the blogger's priors for taking an argument seriously, if the topic under discussion reminds him of something he's heard in a pop-sci piece?

We all know that popular sci/tech reporting isn't the greatest, but if you have low confidence in SIAI-type AI, and hearing about it reminds you of some secondhand pop reporting, then discounting it because of the medium that exposed you to it is not an argument! Especially if your prior on pop-sci reporting being accurate/useful is already low.
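
To make the conservation point concrete, here's a minimal numeric sketch (the probabilities are made-up illustration values, not estimates about AI):

```python
# Conservation of expected evidence: the prior must equal the expectation
# of the posterior over the possible evidence,
#   P(H) = P(H|E) P(E) + P(H|~E) P(~E),
# so you can't expect a yet-unseen resemblance to pop-sci to lower P(H)
# without its absence raising P(H) by a compensating amount.
p_h = 0.2              # prior on the hypothesis
p_e_given_h = 0.9      # P(evidence | H)
p_e_given_not_h = 0.3  # P(evidence | ~H)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
post_if_e = p_e_given_h * p_h / p_e              # posterior if E observed
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # posterior if E absent

expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(round(expected_posterior, 10))  # 0.2 -- equals the prior
```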

Comment by zachary_kurtz on Less Wrong fanfiction suggestion · 2011-01-12T02:35:04.575Z · LW · GW

I tend to pick my fruit from bonsai trees

Comment by zachary_kurtz on Less Wrong fanfiction suggestion · 2011-01-11T16:01:17.948Z · LW · GW

I'm seriously considering writing a rationalist Ender's Game/Shadow. It's fairly low-hanging fruit because Ender and (especially) Bean are obviously intelligent and have excellent priors.

Comment by zachary_kurtz on Spaced Repetition Database for A Human's Guide to Words · 2011-01-11T15:55:00.675Z · LW · GW

I just downloaded Mnemosyne yesterday, so it's not too late to test both programs.

Comment by zachary_kurtz on Spaced Repetition Database for A Human's Guide to Words · 2011-01-11T05:18:29.838Z · LW · GW

are the LW sequence decks available for Mnemosyne?

Comment by zachary_kurtz on A sense of logic · 2010-12-14T02:38:53.558Z · LW · GW

Pascal's_Wager

Comment by zachary_kurtz on Rational entertainment industry? · 2010-12-11T18:10:04.823Z · LW · GW

Have ticket prices kept up with inflation?

Comment by zachary_kurtz on Why is our sex drive too strong? · 2010-12-11T05:28:59.130Z · LW · GW

from what I remember of my human evolution classes, how much promiscuity is allowed is very much related to resource availability. Google Robin Hanson's "forager vs farmer" posts; they cover some of these ideas.

Comment by zachary_kurtz on Rational entertainment industry? · 2010-12-11T05:26:04.611Z · LW · GW

this post could use an update with TV Tropes. Even formulaic stories were innovative at one point or another.

Comment by zachary_kurtz on The term 'altruism' in group selection · 2010-12-11T05:23:55.217Z · LW · GW

is there a real case of (non-human) altruism among non-kin in the animal kingdom? I don't think there is...

Comment by zachary_kurtz on Cheat codes · 2010-12-02T16:01:43.517Z · LW · GW

Any data on how far in advance spaced repetition needs to start to be effective, say if you're studying for an exam or something?

Comment by zachary_kurtz on The Boundaries of Biases · 2010-12-01T04:43:12.954Z · LW · GW

related idea: when, in seeking to improve our maps, could we lose instrumental rationality?

I have an example of this. I was at a meeting at work last year where a research group was proposing (to get funding for) a study to provide genetic "counseling" to poor communities in Harlem. One person raised an objection (paraphrasing): we can teach people as much as we can about real genetic risk factors for diseases, but without serious education, most people probably won't get it.

They'll hear "genes, risk factor," probably overestimate their actual risk, and make poor decisions based on misunderstood information. In striving to improve epistemic rationality, we could impair true instrumental "winning."

So in this case, being completely naive leads to better outcomes than having more, but incomplete, knowledge.

Not sure what the outcome of the actual study was.

Comment by zachary_kurtz on Book Club Update and Chapter 1 · 2010-06-17T19:32:15.375Z · LW · GW

PS - in the free pdf it's 1-8. In the book the problem seems to have been renumbered to 1.13

Comment by zachary_kurtz on Book Club Update and Chapter 1 · 2010-06-17T17:51:36.674Z · LW · GW

A different question about 1-8. I was able to figure out how he got A!B = !B (where ! is bar) but using the Boolean identities he provides, I couldn't get to B!A = !A. Can anyone enlighten me on this?
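
ETA: one route that seems to work, using only negation, De Morgan, and absorption (sketch; there may be a slicker derivation):

```latex
% Start from the given identity and negate both sides:
A\bar{B} = \bar{B}
\;\Rightarrow\; \overline{A\bar{B}} = \overline{\bar{B}} = B
\;\Rightarrow\; \bar{A} + B = B \qquad \text{(De Morgan)}
% Multiply both sides by \bar{A}:
\bar{A}(\bar{A} + B) = \bar{A}B
\;\Rightarrow\; \bar{A} + \bar{A}B = \bar{A}B
\;\Rightarrow\; \bar{A} = \bar{A}B \qquad \text{(absorption: } \bar{A} + \bar{A}B = \bar{A}\text{)}
```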

Comment by zachary_kurtz on Book Club Update and Chapter 1 · 2010-06-15T19:01:38.840Z · LW · GW

I never thought about the connection between logic and probability before, though now it seems obvious. I've read a few introductory logic texts, and deductive reasoning always seemed a bit pointless to me (in real life, premises are usually inferred from something).

To draw from a literary example, Sherlock Holmes's use of the word "deduce" always seemed a bit deceptive. You can say "that color of dirt exists only in spot x in London; therefore, this Londoner must have come in contact with spot x if I see that dirt on his trouser knee." This is presented as a deduction, but really the premises are induced, and he assumes some things about how people travel.

It seems more likely that we make inferences, not deductions, but convince ourselves that the premises must be true, without bothering to put real information about likelihood into the reasoning. An induction is still a logical statement, but I like the idea of using probability to quantify it.

Comment by zachary_kurtz on Antagonizing Opioid Receptors for (Prevention of) Fun and Profit · 2010-05-05T14:57:57.033Z · LW · GW

This method sounds like it could be useful for unconscious habits. I have a bad one of gnawing on my finger nails. By the time I realize I'm doing it, however, the damage has been done. For whatever reason, I think my brain has connected nail biting with stress release. Taking away that association without having to rely on my poor willpower would be nice.

Comment by zachary_kurtz on NYC Rationalist Community · 2010-04-28T14:13:52.832Z · LW · GW

The NYC group, and olimay in particular, has certainly challenged my thinking. I might be coming from a very different place than you, however.

Comment by zachary_kurtz on Attention Less Wrong: We need an FAQ · 2010-04-27T17:13:32.472Z · LW · GW

Less Wrong needs a general forum, not just an FAQ

Comment by zachary_kurtz on Too busy to think about life · 2010-04-26T16:38:48.335Z · LW · GW

Both really. How much time should we dedicate to making our map fit the territory before we start sacrificing optimality? Spend too long trying to improve epistemic rationality and you begin to sacrifice your ability to get to work on actual goal seeking.

On the other end, if you don't spend long enough to improve your map, you may be inefficiently or ineffectively trying to reach your goals.

We're still thinking of ways to be able to quantify these. Largely it depends on the specific goal and map/territory as well as the person.

Anybody else have some ideas?

Comment by zachary_kurtz on Too busy to think about life · 2010-04-23T19:37:54.773Z · LW · GW

Applying optimal foraging theory to rationality is something we've been discussing at the NYC-LW meetup group for a few months now. I think this is related to this post.

Comment by zachary_kurtz on Friendly AI at the NYC Future Salon · 2010-02-16T18:14:29.532Z · LW · GW

Sorry I won't be able to come to your talk after all. As I suspected, I will still be in Pittsburgh. Good luck!

Comment by zachary_kurtz on The Craigslist Revolution: a real-world application of torture vs. dust specks OR How I learned to stop worrying and create one billion dollars out of nothing · 2010-02-11T15:28:33.478Z · LW · GW

Do you guys think that the 'mainstream' takes the AI problem seriously enough (right now at least) that they'd be willing to donate money to this cause? Especially when there are other apparently worthy charities they could be joining. I'm skeptical.

Comment by zachary_kurtz on My Fundamental Question About Omega · 2010-02-10T19:43:47.232Z · LW · GW

Could Omega microwave a burrito so hot, that he himself could not eat it?

and my personal favorite:

Comment by zachary_kurtz on My Failed Situation/Action Belief System · 2010-02-02T19:54:15.231Z · LW · GW

So do you think there's a human system which includes a closer approximation of reality? (whatever that means)