Comments
Wow, this is amazing! Both the idea and your presentation of it.
Very insightful and thought-provoking. And my mind was completely blown by the fact that you have converted. It so doesn't fit into my models that I am quite confused. I would be very curious what's behind it and what you would answer to your own questions (before and after). But I guess you wrote about it a lot, so I'll just go and read it.
And yes, this definitely deserves a discussion post!
Very interesting list, thanks Louie!
I just randomly clicked on a few links for online courses, and it seems there's at least one issue: the "Probability and Computing" part points to the "Analytic Combinatorics, Part I" Coursera course, which is not about probability at all. The MIT and CMU links for this part seem wrong too. Someone should carefully go through all the links and fix them.
Awesome summary, thanks!
The funny thing is that the rationalist Clippy would endorse this article. (He would probably put more emphasis on clippyflurphsness than on this unclipperiffic notion of "justness", though. :))
You just say: 'For every relation R that works exactly like addition, the following statement S is true about that relation.' It would look like, '∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz))) → S)', where S says whatever you meant to say about +, using the token R.
I would change the statement to be called something other than 'S', say 'Q', since 'S' is already used for the successor function.
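For example, with that renaming the quoted statement might read: ∀ relations R: (∀x∀y∀z: R(x, 0, x) ∧ (R(x, y, z)→R(x, Sy, Sz))) → Q, where Q says whatever you meant to say about +, using the token R.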
In Hungary this (model theory and co.) is part of the standard curriculum for a Mathematics BSc. Or at least it was in my time.
(Audiatur et altera pars is the impressive Latin name of the principle that you should clearly state your premises.)
That's not what I thought it meant. My understanding was that it's something like "all parties should be heard", and it's more of a legal thing...
I'm really itching to try this out! ;)
(Consider this as a word of encouragement. I'll try to think about my predictions and will post them here if I come up with anything useful. But in the meantime I wanted to say at least this much.)
Who is the intended audience for this?
If someone has a good grasp of Bayes, it's not that informative. (Though I liked the original idea and the story. :)) But, if one doesn't already understand the math behind this, then it's just a bunch of magic numbers, I am afraid. The second half of it for sure.
The link to the "Hamlet" is broken. Not that it's hard to find, but you might still want to fix it.
I would be interested in a sequence like that. Of course, if it only touches rationality tangentially then maybe LessWrong is not the best place for it. But again, I personally would be very interested in it.
Yes, this should work. I actually already do this with (hard) science-y stuff. For example, after finishing the Quantum Physics sequence (and some reading of my own afterwards) I gave a series of lectures on "the Intuitive Quantum World" here in the office.
I need to find an audience who would be interested in the more general topics that I learn about here on LessWrong. And of course, I would need to read a lot to gain a really deep understanding. But yes, this is a very good answer to my question!
Sorry to ask, but is this still on?
Wakefield, could you post some details here if it is? (I've sent you a private message, but maybe you didn't notice it.)
I didn't downvote that comment, but I might have if I had followed the conversation live. My thinking when I read it was: "He can't possibly really think that it is a homonym! So, for the sake of the argument he arrogantly (because that all-caps spelling does show some arrogance) distorts reality and expects us to accept it?!"
But now I see that this is too much of a correspondence bias. You probably just wanted to show that "explanation" has two different meanings, but in the heat of the discussion picked a very bad example for your argument. Because "explanation" does have two slightly different meanings, and this is relevant here. But let's be clear: these two meanings are close and in no way homonyms (as opposed to what you stated and what you clearly tried to illustrate with the "fluke" example).
So, I think this comment of yours is bad and the downvotes were valid.
Edit: I didn't read the Wikipedia article you linked when I wrote the above. I had only ever heard/seen "homonym" used in the sense of "two identically spelled and pronounced words with different meanings and of unrelated origin"; what Wikipedia calls a "true homonym". In the more general sense the two "explanations" might qualify as homonyms (I am definitely not a linguist). But your "fluke" example strongly indicated the narrower (and, I think, more common) meaning. So, my reasons still stand.
Thanks!
I think first and foremost these psychological needs were "to understand how things are". And that is, in short, why I am here now. :)
Yes, I think this is a pretty good reading of my post. And it makes the issue seem less pressing and more manageable.
Now, this is a much better question! And yes, I am thinking a lot about these. But in some sense this kind of thing bothers me much less: because it is so clear that the issue is unclear, my mind doesn't try to unconditionally commit it to the belief pool just because I read something exciting about it. And then I know I have to think about it, look for independent sources, etc. (For these two specific problems, I am in different states of confusion. Cryonics: quite confused; AGI: a bit better, at least I know what my next steps are.)
How do you deal with this?
Yes, I saw it coming. :) Thanks! It does matter to me.
This sounds like a very good piece of advice. A slight problem is that some of the jargon is very useful for expressing things that would otherwise be hard to express. But I'll try to be conscious of it.
No, it's not too dark; it is useful to see an even stronger expression of caution. But it misses the point a bit. It's not very helpful to know that Eliezer is probably wrong about some things. Neither is finding a mistake here or there. It just doesn't help.
You see, my goal is to accept and learn fully that which is accurate, and reject (and maybe fix and improve) that which is wrong. Neither one is enough by itself.
I guess, this is similar to the second part of thomblake's comment. Thank you for explaining this!
But if it really can mean such different things, then that particular question in the survey wasn't formulated very carefully.
So, a person who doesn't believe in god, but still thinks that he has an "immortal soul" or something? Thanks for explaining!
General applicability of Bayesian inference: Judea Pearl, "Probabilistic Reasoning in Intelligent Systems", chapter 2. (Definitely not an explanation suitable for a teenager, but for a college student interested in the topic it is very good, I think.)
I completed the survey. Thanks, Yvain, for doing it!
The option "Atheist but spiritual" gave me a pause. What does it actually mean?
Same here. I had to look them up to understand what they are about and answer the question meaningfully. (But, after looking the options up the choice was actually easy.)
I half-counted it. I counted from the time when I finally created an account at lesswrong.com.
Count me in!
Good advice, I will look into it!
Thanks for the link, looks very relevant!
Yes, of course, I realize that there are all kinds of subtleties about why one way might be better for some people and something else for others, etc.
But the frightening realization for me was that in the heat of the debate my brain can come up with all kinds of elaborate arguments. And because the reason I came up with them was to win the debate (and not to figure out how things really are), I am screwed, no matter how clever my arguments are. (http://lesswrong.com/lw/js/the_bottom_line/)
And yeah, it would be cool to come up with ways to figure out how things really are and how we can test our hypotheses. But now I think that this is really, really hard: switching into that mode of thinking in the middle of an argument. The best I could do was to let it go and walk away. (And write this post; maybe someone else comes up with a better idea. :))
Hmm, it's nice that there is this pretty compact formulation for two coupled but separately "unpolarized" photons. But this still leaves me with the question: what does a single "unpolarized" photon (a photon for which half of the squared amplitude would pass any polarizing filter) look like?
I would guess that there is no such thing. We might be ignorant about the photon's polarization, but it does have some definite polarization even before it passes any filter. Otherwise, it has to be in a similarly entangled state with something (e.g. its source).
Hmm, how would I check this?..
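One part of this can at least be poked at numerically. Here is a minimal sketch (my own, not from the article, using numpy and the (1 ; i) / (1 ; -i) spin states): take an entangled two-photon pair and look at one photon alone. It does show exactly the "half the squared amplitude passes any filter" behaviour, with no single definite polarization vector behind it; the 50/50 statistics come from the entanglement with the other photon.

```python
import numpy as np

# Minimal sketch (my own, not from the article): build the entangled pair
# |a+>|b-> - |a->|b+> from the spin states (1 ; i) and (1 ; -i), then look at
# photon a alone by tracing out photon b.  The result is the maximally mixed
# density matrix I/2, which passes ANY analyzer with probability 1/2 --
# "unpolarized" behaviour produced by entanglement, not by a definite
# polarization vector.

plus = np.array([1, 1j]) / np.sqrt(2)    # +1 spin, (1 ; i)/sqrt(2)
minus = np.array([1, -1j]) / np.sqrt(2)  # -1 spin, (1 ; -i)/sqrt(2)

pair = np.kron(plus, minus) - np.kron(minus, plus)
pair = pair / np.linalg.norm(pair)

# Reduced density matrix of photon a (trace out photon b).
rho = np.outer(pair, pair.conj()).reshape(2, 2, 2, 2)
rho_a = np.trace(rho, axis1=1, axis2=3)
print(np.round(rho_a, 3))                # 0.5 * identity matrix

# Probability of passing an analyzer aligned with an arbitrary pure state:
analyzer = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.7j)])
print(np.real(analyzer.conj() @ rho_a @ analyzer))   # 0.5, whatever the angles
```

Of course this only checks the "otherwise" branch above; it says nothing about whether a lone photon "really" carries a definite polarization we are merely ignorant of.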
This post (together with the previous one) left me in quite a bit of confusion. How does this model with polarization vectors correspond to the old "amplitude distribution over a configuration space of «a photon here and a photon there»"? What are the configurations here, and when are they distinct? (And it seems I am not the only one who got confused by this.)
I think I found the solution; the photons have a distinguishing property: spin. So, if the configurations are more like "a photon with a +1 spin here, a photon with a -1 spin there...", then it all fits nicely into the same model. And the amplitude distribution corresponding to the situation described in the article would be:
|a+> |b-> - |a-> |b+>
(modulo a constant factor), where |p+> (a photon with a +1 spin at P) corresponds to P = (1 ; i) and |p-> to P = (1 ; -i) in the article's notation. Of course, the math remains the same, but now I can see a bit more clearly what the amplitude distribution is and what the distinct configurations are.
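As a quick sanity check (my own sketch, not something from the article), the state can be expanded numerically to confirm the "modulo a constant factor" claim: with the unnormalized vectors (1 ; i) and (1 ; -i) it comes out as exactly -2i times the horizontal/vertical combination |ax>|by> - |ay>|bx> (my shorthand for horizontally/vertically polarized photons at A and B).

```python
import numpy as np

# Quick check (my own sketch): expand |a+>|b-> - |a->|b+> using the spin
# states (1 ; i) and (1 ; -i), and compare it with the horizontal/vertical
# combination |ax>|by> - |ay>|bx>.  The two should agree up to a constant
# factor (which comes out as -2i for these unnormalized vectors).

x = np.array([1, 0])        # horizontal polarization, (1 ; 0)
y = np.array([0, 1])        # vertical polarization,   (0 ; 1)
plus = np.array([1, 1j])    # +1 spin, (1 ; i)
minus = np.array([1, -1j])  # -1 spin, (1 ; -i)

spin_state = np.kron(plus, minus) - np.kron(minus, plus)  # |a+>|b-> - |a->|b+>
xy_state = np.kron(x, y) - np.kron(y, x)                  # |ax>|by> - |ay>|bx>

print(spin_state)                                 # [0, -2j, 2j, 0] up to formatting
print(np.allclose(spin_state, -2j * xy_state))    # True
```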
This left me totally confused too.
But then I realized that there is a property of photons that can help with this confusion: spin. So, the configuration space is not "a photon here and a photon there...", but "a photon with a +1 spin here, a photon with a -1 spin there..." And then this phase thing arises from the values of the amplitude distribution for the configurations with photons of opposite spins. This makes the math quite a bit easier too.
I might be completely mistaken about this, though.
Yep, Maxwell's equations do produce the same results. The fun quantum thing is that this also happens with individual photons.
Two ideas I got after thinking for 5 minutes (by the clock :)).
If the tests are stressful and mentally (and possibly physically) exhausting, then even if it is still possible to prepare just for the test, that preparation will not be far from preparing for the "real thing". So, something like the Initiation Ceremony could be done periodically and not just for initiation.
Give the students "stories" and see if they can make heads or tails of them. (How accurately can they guess the omitted details? Can they predict how it continues? Etc.) But, where can you get real stories? An authored story is very bounded in usefulness for this.
The idea: we have court cases. A lot of them, in all kinds of domains, dating back centuries. And they are very real: even where the record is distorted (fake evidence, false testimony), the distortion was done by someone for some concrete reason, which can be analyzed rationally. This might require learning some law, but even without formal training many non-domain-specific cases can be understood with moderate work. And law is one of the oldest applications of human rationality.
Both ideas are mostly applicable to the second use case, measuring a bunch of students in a school, and not good for comparing schools or designing a standardized "rationality test".