How are you and Robin going to decide whether a post is more appropriate for Less Wrong or Overcoming Bias?
Eliezer does a good job of explaining a mechanism by which two investments with negatively correlated returns can switch to having positively correlated returns. But he doesn't do a good job of convincing me that a stock's price has a tendency to go down when it has just gone up, and vice versa.
I can think of an argument against this position. It seems plausible that stock traders see the past movement of a stock as an indicator of its future movement. If a majority of traders share this belief, it will compel them to buy the stock from those who don't, inflating its value and reinforcing the cycle. This would indicate that markets are inductive, which is the opposite of what the title suggests.
Seconding Psy-Kosh in being confused & unconvinced.
OK, I get it.
Paulo Freire said, "Washing one's hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral."
If the outcome of their conflict is unaffected by your existence, it can be said that you are neutral. If you disagree with me, I would be interested to hear what definition of "neutral" you are using.
A completely unbiased user culture would view anything that was posted (or not posted) as equally valuable. What use is that?
I think your definition of "unbiased" resides on the opposite side of the galaxy from mine.
I also like Daniel Franke's idea.
Once the initial site is up and running, the next items on the agenda include much better support for reading through sequences. And I'll organize more of my old posts (and perhaps some of Robin's) into sequences.
Great! This is an excellent excuse to further put off my sequence-reading!
Barry Schwartz's The Paradox of Choice - which I haven't read, though I've read some of the research behind it
Yay, a book I've read that Eliezer hasn't! That said, I don't actually recommend it; it was kinda tedious and repetitive.
To a degree, it is useful to value truth over happiness in each of the occupations you mention. But humans have the ability to restrict their critical analysis to certain domains. Allow me to rewrite my comment:
As a firm atheist, I am fully open to the possibility that much of the world should stay religious, unless they're doing some sort of important work that's tangentially related to the question of God's existence (like studying humanity's origins), or some sort of work that requires them not to compartmentalize their critical analysis.
@billswift:
The obviously religious ones like Abigail and Richard aren't really worth responding to except with general disdain.
Unless you care about atheism's reputation.
The fact is that religious belief, if fully attained, can be an enormously useful psychological crutch. As a firm atheist, I am fully open to the possibility that much of the world should stay religious, unless they're doing some sort of important work that requires them to value truth over happiness.
And if you decide that someone would be better off as an atheist, you shouldn't try to grab their psychological crutch all at once. They'll just grip tighter.
I still haven't read most of the archive, and I'm reading slower than you're posting... So I'm going to second everyone who says a reduced posting rate is A-OK.
Knowing the name of my representative (Mike Honda btw) is not going to increase my political influence. I'd have to be chummy with him before he gave any serious weight to my opinion. All the congressman's question revealed was whether the people in his audience gave any thought to politics, not whether they could be effective politically if they tried.
As for voting, I see it as a waste of time to study issues which I have only a tiny probability of affecting. The best strategy I can see in this situation is to ask your most intelligent and unbiased friend how they're voting and then vote in advance to avoid standing in line. (To make this strategy work on a large scale, I should also spend time studying issues if all my friends say I'm their smartest and least biased friend. In this case my vote has a significantly higher probability of affecting the election, making my study worthwhile.)
As a rationalist, it might be worthwhile to take the one box just so those Omega know-it-alls will be wrong for once.
I don't even know what this blog is supposed to be about anymore.
I think we should rename it "Robin and Eliezer's Varied Thoughts".
I'm planning to come. Does "casual" mean "jeans and t-shirt" or "collared shirt and slacks"?
Here's a link to the Yahoo Yellow Pages with various types of restaurants near Montgomery Theater.
Does anyone who reads Overcoming Bias live in San Jose? If so, can you recommend a restaurant?
Richard, can you think of a better format than a round-table one? I imagine it might be difficult to find a room for people to mill around in.
BTW, there is an important difference between Eliezer and seed AI: Eliezer can't rewrite his own source code.
I'm volunteering to be a relatively pansy gatekeeper: I'll read everything you write, treat you courteously, offer counterarguments, and let you out if I'm convinced. Email john.maxwelliv at the email service Google hosts.
I can also be an AI.
It's not which measurement has the procedural error that matters. It's what you do when you discover it.
Alternatively, what should the robot do if it gets one measurement that's far different from all the others, and it doesn't know why?
A man with one watch knows what time it is; a man with two watches is never sure.
This is related to something I've been thinking about lately. You may or may not be familiar with the concept of significant figures. In a nutshell, they're a way of communicating the precision of a measurement using the number of digits written. This seems to be a pretty good explanation.
Let's say we are building a robot scientist. The robot scientist has no need for significant figures; those are for puny humans. Instead, it stores the measurement and its uncertainty separately. One way of doing this is to store the uncertainty as ±x. Another, more interesting way is to store it as the standard deviation of its measurements. This opens up a whole host of problems.
For example, let's say that the robot screws up when it's measuring something and doesn't realize this until it measures a second time and gets something completely different. Obviously, there was some procedural error in the first measurement. Does the first completely wrong measurement contribute to the standard deviation? Isn't it possible that the second, third, fourth, and fifth measurements are also completely wrong in some way that the robot has not yet realized? Under what conditions are you allowed to "throw out" a measurement?
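To make this concrete, here's a minimal sketch of one conventional answer, in the spirit of Chauvenet's criterion: flag a measurement for review if it lies more than some cutoff number of standard deviations from the rest. The function name, the leave-one-out scheme, and the cutoff of 3 are my own illustrative assumptions, not anything from the post:

```python
import statistics

def flag_suspect_measurements(measurements, z_cutoff=3.0):
    """Return indices of measurements more than z_cutoff standard
    deviations from the mean of the *other* measurements.

    Flagged values are candidates for investigation, not automatic
    deletion -- the robot still has to find the procedural error.
    """
    flagged = []
    for i, x in enumerate(measurements):
        rest = measurements[:i] + measurements[i + 1:]
        if len(rest) < 2:
            continue  # can't estimate spread from a single point
        mu = statistics.mean(rest)
        sigma = statistics.stdev(rest)
        if sigma > 0 and abs(x - mu) / sigma > z_cutoff:
            flagged.append(i)
    return flagged

# The first, botched measurement stands out against the rest:
print(flag_suspect_measurements([9.71, 2.31, 2.29, 2.33, 2.30]))  # [0]
```

Note that this only tells the robot which measurement is suspicious; it says nothing about the case where all five measurements share the same undetected error.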
This business with nuclear retaliation reminds me of a game we played in microeconomics class. The game goes something like this: Person 1 starts with $10 and offers Person 2 $A of that amount. Person 2 can choose to accept or reject. If the deal is accepted, Person 2 receives $A and Person 1 keeps the remaining $(10 - A). If the deal is rejected, neither party receives anything.
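For what it's worth, this is the classic ultimatum game, and the payoff structure is simple enough to write down. Here's a minimal sketch (the function name and the backward-induction check are my own illustration):

```python
def ultimatum_payoffs(offer, accepted, pot=10):
    """Payoffs (person1, person2) in a one-shot ultimatum game
    where Person 1 offers `offer` dollars out of `pot`."""
    if accepted:
        return pot - offer, offer
    return 0, 0  # rejection leaves both with nothing

# Backward induction: for any positive offer, accepting beats rejecting,
# so a purely "rational" Person 2 never rejects.
for offer in (1, 5, 9):
    accept = ultimatum_payoffs(offer, accepted=True)
    reject = ultimatum_payoffs(offer, accepted=False)
    assert accept[1] > reject[1]
```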
As far as I can tell, it's never rational to launch a retaliatory nuclear strike. And it's never rational to reject money in the aforementioned game. But in both situations, it is advantageous to trick the other person into thinking there are circumstances where you would do the irrational.
On a related note, perhaps some Overcoming Bias readers who can't think of anything interesting to do with their lives could infiltrate the military and try to get their finger on the proverbial nuclear button, just to make sure it never gets pushed.