And Yudkowsky.net is result #6.
Agreed about the absurdity bias. For most people (even smart ones), their exposure to cryonics comes from things like Woody Allen's Sleeper and Futurama. I almost can't blame them for only seeing the absurd... I'm still trying to come around to it myself.
Awesome, thanks!
Not completely defined at the moment, since I'm a first-year PhD student at NYU currently doing rotations. It'll be something like comparative genomics/regulatory networks to study the evolution of bacteria, or perhaps communities of bacteria.
You'll get more response from the NY group (we don't all check LW and the discussion board regularly) by posting to the Google group/listserv:
http://groups.google.com/group/overcomingbiasnyc/topics?start=
Thanks... this should come in handy in my computational research in systems biology.
A broken clock is right twice a day. If a folk value theory happens to be correct, that doesn't make folk theories valuable on the margin, unless, of course, people who hold folk theories consistently do better than rationalists, in which case I'd question the rationalist label.
I wish I could take that much time to do this
Is that because, if you treat the probabilities of (God vs. not God) as maximum entropy without prior information, you'd get 50/50?
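For reference, a quick sketch of the maximum-entropy reasoning I have in mind (my own formalization): with two exhaustive, mutually exclusive hypotheses and no other constraints, the entropy

$$H(p) = -p\log p - (1-p)\log(1-p)$$

is maximized where

$$\frac{dH}{dp} = \log\frac{1-p}{p} = 0 \;\Rightarrow\; p = \tfrac{1}{2}.$$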
Good on them! In my experience, whenever I sneak Bayesian updating into the conversation, it's well received by skeptics. When I try to introduce Bayes more formally, or start supporting anti-mainstream ideas such as cryonics, AI, etc., there's much more resistance.
I know a lot of skeptics like this and I try to share with them EY's post on "undiscriminating skepticism." This post 'saved' me from a similar fate when I found myself going down this path.
Again, I like your characters, but I think you're missing one: the person who thinks that belief in [a] God is the result of rational and reasonable thought.
I'll be there
Could you write the program in your spare time and run it while you're there, making it seem like you're working?
This roughly maps onto the issues I noticed. Looking forward to the next two days of this.
The archive password is listed before each external link in every example I've seen. Usually the password is either ebooksclub.org or library.nu.
Instead of buying textbooks, check out library.nu.
It's the largest collection of [mostly illegal] free textbooks I've seen on the net.
My woo-dar is tingling a bit regarding this proposal. Can you refer me to this research?
Perspective from a biomedical scientist-in-training here: I think you may be underestimating the role that other types of biology research, not specifically labeled "longevity," will play in attaining 'immortality.'
For example, it may be necessary to cure cancer before we can safely switch off the cellular aging process. The impact cancer has on society makes it one of the best-funded areas of research, but I don't think you can accurately say that this comes at the opportunity cost of longevity knowledge, because the two are really complements. Most of our knowledge of human cell biology comes from studying cell lines isolated from cancers.
Meanwhile, specialized research increases our general knowledge, which, purposeful or not, is leading to longevity, if not immortality outright.
Every so often I'll decide to stop biting my nails, and I can devote lots of mental energy to stopping myself whenever I notice it starting up again. On a really stressful day, though, I can't devote that energy and I wind up chewing them off again. Usually I stay on the wagon for a few weeks before I can re-dedicate myself to the non-nail-biting mental effort. On the whole, stopping biting my nails is not all that difficult; the problem is being consistent about it.
It's difficult to start doing things when even the path of least resistance takes a lot of mental energy. Checking Less Wrong is easy; reading science papers for class is hard. Having a goal (not failing class the next day) is a big help, though.
Do SPRs (statistical prediction rules) beat prediction markets?
I suppose that's true, though it shouldn't be.
Behavioral economics could be a good place to start, since the applications to daily life are obvious.
Some possible books include:
- Predictably Irrational by Dan Ariely
- Why Smart People Make Big Money Mistakes by Gary Belsky
- Nudge by Richard Thaler
Success story: I posted this link on my Facebook and was able to refer one friend to EY's "Intuitive Intro to Bayes." He's taking a grad course this semester on applying Bayesian stats to forensic psychology, and I thought the Intuitive Intro would prepare him well for the course.
Thanks for sharing.
England reporting in. I mostly agree with Will/Russia/Cosmos about the game. While I don't think I was as busy as he was, my newbishness with the rules (especially the convoy rules) really held me back. I got lucky that I was England, isolated enough that, at the beginning, nobody could take advantage of my blunders.
My favorite part was the diplomacy under anonymity; coordination is a real problem when you can only use in-game incentives.
My chat logs are also posted, along with the game journal from the first turn, which I couldn't keep up after that.
Special thanks to Zvi for the in-game analysis and for staying as impartial as possible in the running analysis.
It's not clear to me, though this explanation seems plausible as well. Either way, it's not good.
"imagined by the author as a combination of whatever a popular science site reported"
I've heard this argument from non-singularitarians from time to time. It bothers me because of conservation of expected evidence. What is the blogger's prior for taking an argument seriously, if the topic under discussion merely reminds him of something he's heard in a pop-sci piece?
We all know that popular sci/tech reporting isn't the greatest, but if you have low confidence in SIAI-type AI claims, and hearing about them reminds you of some secondhand pop reporting, then discounting them because of the medium that exposed you to them is not an argument! Especially if your prior on pop-sci reporting being accurate/useful is already low.
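For reference, the identity I mean by conservation of expected evidence (standard form; E here stands for the generic event "the argument resembles pop-sci coverage"):

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)$$

Your prior must equal the expectation of your posterior, so if the resemblance would lower your credence in H, its absence would have to raise it by a corresponding amount.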
I tend to pick my fruit from bonsai trees
I'm seriously considering writing a rationalist Ender's Game/Shadow. It's fairly low-hanging fruit, because Ender and (especially) Bean are obviously intelligent and have excellent priors.
I just downloaded Mnemosyne yesterday, so it's not too late to test both programs.
Are the LW sequence decks available for Mnemosyne?
Have ticket prices kept up with inflation?
From what I remember of my human evolution classes, tolerance of promiscuity is very much related to resource availability. Google Robin Hanson's "forager vs. farmer" posts; they cover some of these ideas.
This post could use an update drawing on TV Tropes. Even formulaic stories were innovative at one point or another.
Is there a real case of (non-human) altruism among non-kin in the animal kingdom? I don't think there is...
Any data on how far in advance spaced repetition needs to start to be effective, say, if you're studying for an exam or something?
Related idea: when, in seeking to improve our maps, could we lose instrumental rationality?
I have an example of this. I was at a meeting at work last year where a research group was proposing (to get money for) a study to provide genetic "counseling" to poor communities in Harlem. One person raised the objection (paraphrasing): we can teach people as much as we can about real genetic risk factors for diseases, but without serious education, most people probably won't get it.
They'll hear "genes, risk factor," probably overestimate their actual risk, and make poor decisions based on misunderstood information. In striving to improve epistemic rationality, we could impair true instrumental "winning."
So in this case, being completely naive leads to better outcomes than having more, but incomplete, knowledge.
Not sure what the outcome of the actual study was.
PS - in the free PDF it's 1-8; in the book the problem seems to have been renumbered to 1.13.
A different question about 1-8: I was able to figure out how he got A!B = !B (where !X denotes the bar, i.e., negation, of X), but using the Boolean identities he provides, I couldn't get to B!A = !A. Can anyone enlighten me on this?
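Edit: I think I found a route, in case anyone else gets stuck. Assuming I've read the notation right (!X as the bar over X), and allowing absorption, X + XY = X, in addition to the listed identities:

$$\begin{aligned} A\bar{B} &= \bar{B} && \text{(given)} \\ \bar{A} + B &= B && \text{(negate both sides; De Morgan)} \\ \bar{A}B &= \bar{A}(\bar{A} + B) = \bar{A} + \bar{A}B && \text{(substitute, distribute, idempotence)} \\ &= \bar{A} && \text{(absorption)} \end{aligned}$$

so B!A = !A follows by commutativity.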
I never thought about the connection between logic and probability before, though now it seems obvious. I've read a few introductory logic texts, and deductive reasoning always seemed a bit pointless to me (in real life, premises are usually inferred from something).
To draw from a literary example, Sherlock Holmes's use of the word "deduce" always seemed a bit deceptive. You can say, "That color of dirt exists only in spot x in London; therefore, this Londoner must have come in contact with spot x if I see that dirt on his trouser knee." This is presented as a deduction, but really, the premises are induced, and he assumes some things about how people travel.
It seems more likely that we make inferences, not deductions, but convince ourselves that the premises must be true, without bothering to put real information about likelihood into the reasoning. An induction is still a logical statement, but I like the idea of using probability to quantify it.
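Jaynes quantifies exactly this (my paraphrase of his weak syllogism, via Bayes' theorem): if A implies B, so that P(B|A) = 1, then observing B can only make A more plausible:

$$P(A \mid B) = \frac{P(A)\,P(B \mid A)}{P(B)} = \frac{P(A)}{P(B)} \geq P(A).$$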
This method sounds like it could be useful for unconscious habits. I have a bad one of gnawing on my fingernails. By the time I realize I'm doing it, however, the damage has been done. For whatever reason, I think my brain has connected nail biting with stress release. Taking away that association without having to rely on my poor willpower would be nice.
The NYC group, and olimay in particular, has certainly challenged my thinking. I might be coming from a very different place than you, however.
Less Wrong needs a general forum, not just an FAQ
Both, really. How much time should we dedicate to making our map fit the territory before we start sacrificing optimality? Spend too long trying to improve epistemic rationality and you begin to sacrifice your ability to get to work on actual goal-seeking.
On the other end, if you don't spend long enough to improve your map, you may be inefficiently or ineffectively trying to reach your goals.
We're still thinking of ways to quantify this. Largely it depends on the specific goal and map/territory, as well as the person.
Anybody else have some ideas?
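One toy way to frame it (the model is entirely my own assumption): let T be the total time available, t the time spent improving the map, and q(t) the resulting map quality with diminishing returns, say q(t) = 1 - e^(-λt). Then pick t to maximize expected progress on the goal:

$$U(t) = q(t)\,(T - t), \qquad \frac{dU}{dt} = \lambda e^{-\lambda t}(T - t) - \left(1 - e^{-\lambda t}\right) = 0.$$

Too little t and q is low (bad map); too much and T - t vanishes (no time left to act), matching both failure modes above.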
Applying optimal foraging theory to rationality is something we've been discussing at the NYC-LW meetup group for a few months now; I think it's related to this post.
Sorry I won't be able to come to your talk after all. As I suspected, I will still be in Pittsburgh. Good luck!
Do you guys think that the "mainstream" takes the AI problem seriously enough (right now, at least) that they'd be willing to donate money to this cause, especially when there are other apparently worthy charities they could be giving to? I'm skeptical.
Could Omega microwave a burrito so hot that he himself could not eat it?
And my personal favorite: http://www.smbc-comics.com/index.php?db=comics&id=1778#comic
So do you think there's a human system which includes a closer approximation of reality? (whatever that means)