The blue line stops abruptly because that's when the last top-level comment was posted. I was a bit lazy with this graph: I did add labels and a legend, but apparently I was too out of it to realise they didn't show up in the PNG.
As gwillen said, the x-axis is in minutes.
Sorry about the missing units; I added code to set them, but apparently it was the wrong code and I wasn't paying enough attention.
The green line is total comments, the blue line is top-level comments. The x-axis is minutes, the y-axis is number of comments.
So I did what you suggested and plotted the number of top-level posts and total posts over time. The attached graph is averaged over the last 20 open threads. Code is available here: https://gist.github.com/TRManderson/6849ab558d18906ede40
I don't trust myself to do any analysis, so I delegate that task to you lot.
EDIT: Changed GitHub repo to a gist
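For anyone who wants the labels and legend without digging through the gist, here's a minimal matplotlib sketch of the same kind of plot. The data is made up, just standing in for the averaged counts:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt

# Hypothetical data standing in for the real averaged open-thread counts.
minutes = list(range(0, 600, 60))
total_comments = [0, 40, 75, 100, 120, 135, 145, 152, 157, 160]
top_level_comments = [0, 15, 27, 35, 41, 45, 48, 50, 51, 52]

fig, ax = plt.subplots()
ax.plot(minutes, total_comments, color="green", label="Total comments")
ax.plot(minutes, top_level_comments, color="blue", label="Top-level comments")
ax.set_xlabel("Minutes since thread opened")  # the units I forgot the first time
ax.set_ylabel("Number of comments")
ax.legend()
fig.savefig("open_thread_comments.png")
```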
That's not quite the law of the excluded middle. In your first example, leaving isn't the negation of buying the car; it's just another possibility. Tertium non datur would be "He will either buy the car or he will not buy the car." It applies outside formal systems too, but the possibilities outside a formal system are rarely negations of one another. If I'm wrong, can someone tell me?
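To make the distinction concrete, here's a quick truth-table check (buy/leave as independent propositions is my own framing of the example):

```python
from itertools import product

# Law of the excluded middle: P or not-P holds for every truth value of P.
lem_is_tautology = all(p or (not p) for p in (True, False))

# "He will buy the car or he will leave" treats buying and leaving as
# independent propositions, so both can be false at once: not a tautology.
buy_or_leave_is_tautology = all(
    buy or leave for buy, leave in product((True, False), repeat=2)
)

print(lem_is_tautology)           # True
print(buy_or_leave_is_tautology)  # False
```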
Still, planting the "seed of destruction" definitely seems like a good idea, although I'd advise caution about specifying only one event where that would happen. This idea basically amounts to ensuring beliefs are falsifiable.
Does the average LW user actually maintain a list of probabilities for their beliefs? Or is Bayesian probabilistic reasoning just some gold standard that no one here actually applies? If the former, what kinds of things do you have on your list?
Thanks. Just going to clarify my thoughts below.
Because doing so will lead to worse outcomes on average.
In specific instances, avoiding the negative outcome might be beneficial, but only for that instance. If you're constantly settling for less-than-optimal outcomes because they're less risky, it'll average out to less-than-optimal utility.
The term "non-linear valuation" seemed to me to imply something exponential or logarithmic; I think "subjective valuation" or "subjective utility" might be better here.
Is there any reason we don't include a risk aversion factor in expected utility calculations?
If there is an established way of considering risk aversion, where can I find posts/papers/articles/books regarding this?
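For what it's worth, the standard treatment builds risk aversion into the shape of the utility function rather than adding a separate factor: a concave utility makes a sure thing worth more than a gamble with the same expected value. A minimal sketch, with a made-up gamble and u(x) = sqrt(x) as the example utility:

```python
import math

# Hypothetical gamble: 50% chance of $0, 50% chance of $100.
outcomes = [0.0, 100.0]
probs = [0.5, 0.5]

expected_value = sum(p * x for p, x in zip(probs, outcomes))  # 50.0

def u(x):
    # Concave utility: each extra dollar is worth a bit less than the last.
    return math.sqrt(x)

expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))  # 5.0

# Certainty equivalent: the sure amount with the same expected utility.
# It lands below the expected value, which is exactly risk aversion.
certainty_equivalent = expected_utility ** 2  # 25.0 < 50.0

print(expected_value, certainty_equivalent)
```

So "avoiding the gamble" isn't irrational under this model; it just means your utility in money is concave.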
Just found this in a search for "Brisbane". I'd show up, and maybe bring a friend who is a non-LW rationalist.
It's likely that Eliezer isn't taking a side in the nature vs. nurture debate, and as such isn't claiming that either nature or nurture is doing the work in generating preferences.
Neither finite differences nor calculus is new to me, but I never noticed the connection between the two until now, and in hindsight it really is obvious.
This is why I love mathematics - there's always a trick hidden up the sleeve!
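The connection is easy to see by repeatedly taking forward differences of a polynomial sequence: each pass mimics one derivative, and for n³ the third differences are constant at 6 = 3!, just like the third derivative.

```python
def forward_diff(seq):
    # Discrete analogue of differentiation: f(n+1) - f(n).
    return [b - a for a, b in zip(seq, seq[1:])]

cubes = [n ** 3 for n in range(8)]  # f(n) = n^3
d1 = forward_diff(cubes)            # roughly 3n^2 (plus lower-order terms)
d2 = forward_diff(d1)               # roughly 6n
d3 = forward_diff(d2)               # constant 6 = 3!, like the third derivative

print(d3)  # [6, 6, 6, 6, 6]
```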
Hey there LW!
At least six months ago, I stumbled upon a PDF of the sequences (or at least Map and Territory) while randomly browsing a website hosting various PDF ebooks. I read "The Simple Truth" and "What Do We Mean By Rationality?", but lost the link to the file at some stage. I remembered the name of the website it mentioned (obviously LessWrong), and started trying to find it. Before long, I came to Methods of Rationality (which a friend of mine had previously linked on Facebook) and began reading, but I soon forgot about it too. About four months ago I rediscovered MoR, read about three quarters of what was available, and then started reading LessWrong itself.
It took me about three days to get my head around the introduction to Bayes' Theorem (I've since implemented a basic Bayesian categorisation algorithm), and in the process I realised just how flawed my reasoning potentially was, and discovered just how rational one particular friend of mine was (very). By that stage I was hooked, and I've been reading the sequences quite frequently since, finally making an account here today. There's still plenty more reading to be done, though!
A little background (and slight egotism alert, which could probably be applied to everything here): I'm in my final year of school, vice-captain of the school's robotics program (and the programmer of Australia's champion school-age competitive robot), a debating coach to various grades, and, having finished the Maths B course a year early, I've completed a university-level "Introduction to Software Engineering" course in Python, using Tkinter for GUI work. Next year I'm planning to start a Bachelor of Science/Bachelor of Engineering, majoring in Mathematics/Software Engineering. I have major side interests in philosophy and psychology which I don't currently plan to explore in any formal way, but LessWrong provides an outlet for both.
I look forward to future comments and whatever criticism they attract; learning from mistakes tends to stick rather well.