Posts

Comments

Comment by john_maxwell_old on Posting now enabled on Less Wrong · 2009-03-05T21:14:37.000Z · LW · GW

How are you and Robin going to decide whether a post is more appropriate for Less Wrong or Overcoming Bias?

Comment by john_maxwell_old on Markets are Anti-Inductive · 2009-02-26T05:36:03.000Z · LW · GW

Eliezer does a good job of explaining a mechanism by which two investments with negatively correlated returns can switch to having positively correlated returns. But he doesn't do a good job of convincing me that a stock's price has a tendency to go down when it has just gone up, and vice versa.

I can think of an argument against this position. It seems plausible that stock traders see the past movement of a stock as an indicator of its future movement. If a majority of traders share this belief, it will compel them to buy the stock from those who don't, inflating its value and reinforcing the cycle. This would indicate that markets are inductive, which is the opposite of what the title suggests.

Comment by john_maxwell_old on Markets are Anti-Inductive · 2009-02-26T02:08:47.000Z · LW · GW

Seconding Psy-Kosh in being confused & unconvinced.

Comment by john_maxwell_old on Pretending to be Wise · 2009-02-21T02:13:33.000Z · LW · GW

OK, I get it.

Comment by john_maxwell_old on Pretending to be Wise · 2009-02-21T01:46:33.000Z · LW · GW
Paulo Freire said, "Washing one's hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral."

If the outcome of their conflict is not being affected by your existence, it can be said that you are neutral. If you disagree with me, I would be interested to hear what definition of "neutral" you are using.

Comment by john_maxwell_old on OB Status Update · 2009-01-28T00:25:17.000Z · LW · GW

A completely unbiased user culture would view anything that was posted (or not posted) as equally valuable. What use is that?

I think your definition of "unbiased" resides on the opposite side of the galaxy from mine.

I also like Daniel Franke's idea.

Comment by john_maxwell_old on OB Status Update · 2009-01-28T00:19:33.000Z · LW · GW

Once the initial site is up and running, the next items on the agenda include much better support for reading through sequences. And I'll organize more of my old posts (and perhaps some of Robin's) into sequences.

Great! This is an excellent excuse to further put off my sequence-reading!

Comment by john_maxwell_old on Harmful Options · 2008-12-25T22:06:49.000Z · LW · GW
Barry Schwartz's The Paradox of Choice - which I haven't read, though I've read some of the research behind it

Yay, a book I've read that Eliezer hasn't! That said, I don't actually recommend it; it was kinda tedious and repetitive.

Comment by john_maxwell_old on Thanksgiving Prayer · 2008-11-30T05:06:47.000Z · LW · GW

To a degree, it is useful to value truth over happiness in each of the occupations you mention. But humans have the ability to restrict their critical analysis to certain domains. Allow me to rewrite my comment:

As a firm atheist, I am fully open to the possibility that much of the world should stay religious, unless they're doing some sort of important work that's tangentially related to the question of God's existence (like studying humanity's origins), or they are doing some sort of work that requires them to make sure they don't compartmentalize their critical analysis.

Comment by john_maxwell_old on Thanksgiving Prayer · 2008-11-30T02:58:59.000Z · LW · GW

@billswift:

The obviously religious ones like Abigail and Richard aren't really worth responding to except with general disdain.

Unless you care about atheism's reputation.

The fact is that religious belief, if fully attained, can be an enormously useful psychological crutch. As a firm atheist, I am fully open to the possibility that much of the world should stay religious, unless they're doing some sort of important work that requires them to value truth over happiness.

And if you decide that someone would be better off as an atheist, you shouldn't try to grab their psychological crutch all at once. They'll just grip tighter.

Comment by john_maxwell_old on Whither OB? · 2008-11-18T02:58:47.000Z · LW · GW

I still haven't read most of the archive, and I'm reading slower than you're posting... So I'm going to second everyone who says a reduced posting rate is A-OK.

Comment by john_maxwell_old on Today's Inspirational Tale · 2008-11-06T01:36:50.000Z · LW · GW

Knowing the name of my representative (Mike Honda btw) is not going to increase my political influence. I'd have to be chummy with him before he gave any serious weight to my opinion. All the congressman's question revealed was whether the people in his audience gave any thought to politics, not whether they could be effective politically if they tried.

As for voting, I see it as a waste of time to study issues which I have only a tiny probability of affecting. The best strategy I can see in this situation is to ask your most intelligent and unbiased friend how they're voting and then vote in advance to avoid standing in line. (To make this strategy work on a large scale, I should also spend time studying issues if all my friends say I'm their smartest and least biased friend. In this case my vote has a significantly higher probability of affecting the election, making my study worthwhile.)

Comment by john_maxwell_old on Newcomb's Problem and Regret of Rationality · 2008-11-06T01:28:00.000Z · LW · GW

As a rationalist, it might be worthwhile to take the one box just so those Omega know-it-alls will be wrong for once.

Comment by john_maxwell_old on Hanging Out My Speaker's Shingle · 2008-11-06T00:47:10.000Z · LW · GW
I don't even know what this blog is supposed to be about anymore.

I think we should rename it "Robin and Eliezer's Varied Thoughts".

Comment by john_maxwell_old on Bay Area Meetup for Singularity Summit · 2008-10-22T04:48:14.000Z · LW · GW

I'm planning to come. Does "casual" mean "jeans and t-shirt" or "collared shirt and slacks"?

Comment by john_maxwell_old on Bay Area Meetup for Singularity Summit · 2008-10-18T01:21:10.000Z · LW · GW

Here's a link to the Yahoo Yellow Pages with various types of restaurants near Montgomery Theater.

Does anyone who reads Overcoming Bias live in San Jose? If so, can you recommend a restaurant?

Comment by john_maxwell_old on Bay Area Meetup for Singularity Summit · 2008-10-12T19:44:29.000Z · LW · GW

Richard, can you think of a better format than a round-table one? I imagine it might be difficult to find a room for people to mill around in.

Comment by john_maxwell_old on AIs and Gatekeepers Unite! · 2008-10-11T01:48:00.000Z · LW · GW

BTW, there is an important difference between Eliezer and seed AI: Eliezer can't rewrite his own source code.

Comment by john_maxwell_old on AIs and Gatekeepers Unite! · 2008-10-11T01:44:00.000Z · LW · GW

I'm volunteering to be a relatively pansy gatekeeper: I'll read everything you write, treat you courteously, offer counterarguments, and let you out if I'm convinced. Email john.maxwelliv at the email service Google hosts.

I can also be an AI.

Comment by john_maxwell_old on Rationality Quotes 18 · 2008-10-04T17:54:05.000Z · LW · GW

It's not which measurement has the procedural error that matters. It's what you do when you discover it.

Alternatively, what should the robot do if it gets one measurement that's far different from all the others, and it doesn't know why?

Comment by john_maxwell_old on Rationality Quotes 18 · 2008-10-04T04:39:39.000Z · LW · GW
A man with one watch knows what time it is; a man with two watches is never sure.

This is related to something I've been thinking about lately. You may or may not be familiar with the concept of significant figures. In a nutshell, they're a way of communicating the precision of a measurement using the number of digits written. This seems to be a pretty good explanation.

Let's say we are building a robot scientist. The robot scientist has no need for significant figures; those are for puny humans. Instead, it stores the measurement and its uncertainty separately. One way of doing this is to store the uncertainty as ±x. Another, more interesting, way is to store it as the standard deviation of your measurements. This opens up an entire host of problems.

For example, let's say that the robot screws up when it's measuring something and doesn't realize this until it measures a second time and gets something completely different. Obviously, there was some procedural error in the first measurement. Does the first completely wrong measurement contribute to the standard deviation? Isn't it possible that the second, third, fourth, and fifth measurements are also completely wrong in some way that the robot has not yet realized? Under what conditions are you allowed to "throw out" a measurement?
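One conventional (if somewhat ad hoc) answer to the "when may you throw out a measurement?" question is Chauvenet's criterion: reject a reading if, given the sample's own mean and standard deviation, you'd expect fewer than half an occurrence that extreme in a sample of this size. A minimal sketch of how the robot might apply it (function name and the 0.5 cutoff follow the textbook convention; this assumes Gaussian scatter, which is exactly the assumption a botched measurement violates):

```python
import statistics
from math import erf, sqrt

def chauvenet_outliers(measurements):
    """Return the readings Chauvenet's criterion would reject.

    A reading is flagged when the expected count of values at least
    this far from the mean, in a sample of this size, is below 0.5.
    """
    n = len(measurements)
    mean = statistics.mean(measurements)
    stdev = statistics.stdev(measurements)
    flagged = []
    for x in measurements:
        z = abs(x - mean) / stdev
        # Two-sided probability of a deviation at least this large,
        # assuming the scatter is Gaussian.
        p = 1 - erf(z / sqrt(2))
        if n * p < 0.5:
            flagged.append(x)
    return flagged

# Four consistent readings and one wildly different one:
print(chauvenet_outliers([9.8, 9.9, 10.0, 10.1, 25.0]))  # → [25.0]
```

Note the circularity the comment above is pointing at: the bad reading inflates the very mean and standard deviation used to judge it, which is why the criterion is a heuristic rather than a principled stopping rule.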

Comment by john_maxwell_old on 9/26 is Petrov Day · 2008-09-27T00:43:41.000Z · LW · GW

This business with nuclear retaliation reminds me of a game we played in microeconomics class. The game goes something like this: Person 1 starts with $10 and offers Person 2 $A of that amount. Person 2 can choose to accept or reject. If the deal is accepted, Person 2 receives $A and Person 1 receives $(10 - A). If the deal is rejected, neither party receives anything.

As far as I can tell, it's never rational to release a nuclear bomb. And it's never rational to reject money in aforementioned game. But in both situations, it is advantageous to trick the other person into thinking there are circumstances where you would do the irrational.

On a related note, perhaps some Overcoming Bias readers who can't think of anything interesting to do with their lives could infiltrate the military and try to get their finger on the proverbial nuclear button, just to make sure it never gets pushed.