Comments

Comment by jslocum on Bayes Academy: Development report 1 · 2014-11-20T15:41:15.991Z · LW · GW

I really like the idea overall.

Serious ideas:

  • games that help explain ideas like 'screening off' variables, rules for propagating information up and down different branches of the network, etc. (see the sketch after this list)

  • more advanced topics like estimating the normalization constant for a very large hypothesis space?

  • more advanced gameplay mode where you have a scenario and a list of hidden and observable variables, and have to figure out what shape the network should take - you then play out the scenario with the network you made - success requires having constructed the network well!
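To make the screening-off idea concrete, here is a minimal sketch of the kind of check such a game could teach. The three-node chain A -> B -> C, the variable names, and all the probabilities below are made up purely for illustration: once B is observed, additionally learning A does not move the posterior on C.

```python
# Screening off in a made-up chain A -> B -> C: once B is known,
# learning A as well does not change the posterior on C.
from itertools import product

p_a = {True: 0.3, False: 0.7}           # P(A)
p_b_given_a = {True: 0.9, False: 0.2}   # P(B=True | A)
p_c_given_b = {True: 0.8, False: 0.1}   # P(C=True | B)

def joint(a, b, c):
    """P(A=a, B=b, C=c) under the chain factorization P(A)*P(B|A)*P(C|B)."""
    pa = p_a[a]
    pb = p_b_given_a[a] if b else 1 - p_b_given_a[a]
    pc = p_c_given_b[b] if c else 1 - p_c_given_b[b]
    return pa * pb * pc

def p_c_true(**evidence):
    """P(C=True | evidence), by brute-force enumeration of all worlds."""
    worlds = [dict(a=a, b=b, c=c) for a, b, c in product([True, False], repeat=3)]
    kept = [w for w in worlds if all(w[k] == v for k, v in evidence.items())]
    total = sum(joint(w['a'], w['b'], w['c']) for w in kept)
    c_true = sum(joint(w['a'], w['b'], w['c']) for w in kept if w['c'])
    return c_true / total

print(p_c_true(b=True))          # ≈ 0.8
print(p_c_true(a=True, b=True))  # also ≈ 0.8: B screens off A from C
```

The same enumeration machinery touches the other bullets too: message propagation is just a way of doing this calculation efficiently, and the normalization constant is the 'total' term, which is exactly what becomes expensive for very large hypothesis spaces.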

Bad jokes:

  • A character named Gibbs who runs an energy drink stand and gives out free samples.

  • The Count of Monte Carlo should make an appearance.

  • A face-off against agents of the evil Frequentist, hell-bent on destroying all that is (probably) good and (likely to be) held dear.

Comment by jslocum on The Fabric of Real Things · 2013-03-06T21:09:01.889Z · LW · GW

Mathematics is a mental construct created to reliably manipulate abstract concepts. You can describe mathematical statements as elements of the mental models of intelligent beings. A mathematical statement can be considered "true" if, when intelligent beings use the statement in their reasoning, their predictive power increases. Thus, " '4+4=8' is true" implies statements like "jslocum's model of arithmetic predicts that '4+4=8', which causes him to correctly predict that if he adds four carrots to his basket of four potatoes, he'll have eight vegetables in his basket."

I'm not sure that "use the statement in their reasoning" and "their predictive power increases" are well-formed concepts, though, so this might need some refining.

Comment by jslocum on Causal Diagrams and Causal Models · 2013-03-06T17:34:24.953Z · LW · GW

Anecdotes are poisonous data, and it is best to exclude them from your reasoning when possible. They are subject to a massive selection bias. At best they are useful for inferring the existence of something, e.g. "I once saw a plesiosaur in Loch Ness." Even then the inference is tenuous, because all you know is that there is at least one individual who says they saw a plesiosaur. Inferring the existence of a plesiosaur requires additional supporting evidence assigning a high probability that they are telling the truth, that their memory has not changed significantly since the original event, and that the original experience was genuine.

Comment by jslocum on Causal Diagrams and Causal Models · 2013-03-06T17:19:07.549Z · LW · GW

Here is a spreadsheet with all the numbers for the Exercise example crunched, and the graph reasoning explained in a slightly different manner:

https://docs.google.com/spreadsheet/ccc?key=0ArkrB_7bUPTNdGhXbFd3SkxWUV9ONWdmVk9DcVRFMGc&usp=sharing

Comment by jslocum on Skill: The Map is Not the Territory · 2013-02-27T15:39:50.229Z · LW · GW

I find myself to be particularly susceptible to the pitfalls avoided by skill 4. I'll have to remember to explicitly invoke the Tarski method the next time I catch myself attempting to fool myself.

One scenario not listed here in which I find it particularly useful to explicitly think about my own map is in cases where the map is blurry (e.g. low-precision knowledge: "the sun will set some time between 5pm and 7pm") or splotchy (e.g. explicit gaps in my knowledge: "I know where the red and blue cups are, but not the green cup"). Bringing my map's flaws explicitly into my awareness allows me to make plans that account for the uncertainty of my knowledge, and to come up with countermeasures.

Comment by jslocum on Rationality Quotes August 2012 · 2012-08-17T15:11:43.130Z · LW · GW

"(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagrams and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)"

In the library of books of every possible string, close to "Harry Potter and the Methods of Rationality" and "Harry Potter and the Methods of Rationalitz" is "Harry Potter and the Methods of Rationality: Logically Consistent Edition." Why is the reality of that book's contents affected by your reluctance to manifest that book in our universe?

Comment by jslocum on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-04-23T00:36:13.515Z · LW · GW

I received an email on the 19th asking for additional information about myself. So I'm guessing that as of the 19th they were still not done selecting.

Comment by jslocum on Newcomb's Problem standard positions · 2012-02-21T03:03:31.439Z · LW · GW

I've devised some additional scenarios that I have found to be helpful in contemplating this problem.

Scenario 1: Omega proposes Newcomb's problem to you. However, there is a twist: before he scans you, you may choose one of two robots to perform the box opening for you. Robot A will only open the $1M box; robot B will open both.

Scenario 2: You wake up and suddenly find yourself in a locked room with two boxes, and a note from Omega: "I've scanned a hapless citizen (not you), predicted their course of action, and placed the appropriate amount of money in the two boxes present. Choose one or two, and then you may go."

In scenario 1, both evidential and causal decision theories agree that you should one-box. In scenario 2, they both agree that you should two-box. Now, if we replace the robots with your future self and the hapless citizen with your past self, scenario 1 becomes "what should you do prior to being scanned by Omega?" and scenario 2 reverts to the original problem. So now, dismissing the possibility of fooling Omega as negligible, it can be seen that maximizing the payout from Newcomb's problem is really about finding a way to cause your future self to one-box.
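As a rough numerical illustration of that framing, here is a sketch of the expected payoff from each disposition your pre-scan self could cause your future self to have. The predictor accuracy p = 0.99 and the $1,000 second box are assumed, standard-Newcomb values rather than numbers from the scenarios above.

```python
# Expected payoff of each disposition, assuming Omega predicts your eventual
# choice with accuracy p (p = 0.99 and the $1,000 small box are assumptions).
def expected_payoff(one_boxer, p=0.99):
    if one_boxer:
        # With probability p, Omega predicted one-boxing and filled the $1M box.
        return p * 1_000_000
    # With probability p, Omega predicted two-boxing and left the $1M box empty.
    return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

print(expected_payoff(one_boxer=True))   # ≈ $990,000
print(expected_payoff(one_boxer=False))  # ≈ $11,000
```

For any accuracy much above 0.5, whatever reliably causes your future self to one-box wins by a wide margin.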

What options are available, to either rational agents or humans, for exerting causal power on their future selves? A human might make a promise to themselves (interesting question: is a promise a precommitment or a self-modification?), ask another person (or other agent) to provide disincentives for two-boxing (e.g. "Hey, Bob, I bet you I'll one-box. If I win, I get $1; if you win, you get $1M."), or find some way of modifying the environment to prevent their future self from two-boxing (e.g. drop the second box down a well). A general rational agent has similar options: modify itself into something that will one-box, and/or modify the environment so that one-boxing is the best course of action for its future self.

So now we have two solutions, but can we do better? If rational agent 'Alpha' doesn't want to rely on external mechanisms to coerce its future self's behavior, and also does not want to introduce a hack into its source code, what general solution can it adopt that solves this general class of problem? I have not yet read the Timeless Decision Theory paper; I think I'll ponder this question before doing so, and see if I encounter any interesting thoughts.

Comment by jslocum on How to Not Lose an Argument · 2011-04-16T02:34:06.120Z · LW · GW

It would be better to flip a coin at the beginning of a document to determine which pronoun to use when the gender is unspecified. That way there is no potential for the reader to be confused by two different pronouns referring to the same abstract entity.

Comment by jslocum on Conservation of Expected Evidence · 2011-03-20T21:09:25.935Z · LW · GW

"It's worth noting, though, that you can rationally expect your credence in a certain belief 'to increase', in the following sense: If I roll a die, and I'm about to show you the result, your credence that it didn't land 6 is now 5/6, and you're 5/6 sure that this credence is about to increase to 1."

No, you can't, because you also expect with 1/6 probability that your credence will go down to zero: 5/6 + (5/6 × 1/6) + (1/6 × -5/6) = 5/6.

In order to fully understand this concept, it helped me to think about it this way: any evidence that shifts your expectation of how your confidence will change must already cause a corresponding shift in your actual confidence. Suppose you hold some belief B with confidence C. Now some new experiment is being performed that will produce more data about B. If you had some prior evidence that the new data was expected to shift your confidence to C', that same evidence would already have shifted C to C', thus maintaining the conservation of expected evidence.

Consider the following example: initially, if someone were to ask you to bet on the veracity of B, you would choose odds C:(1-C). Suppose an oracle reveals to you that there is a 1/3 chance of the new data shifting your confidence to C+ and a 2/3 chance of it shifting to C-, giving C' = (1/3)·C+ + (2/3)·C-. What would you then consider to be fair odds on B's correctness?
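For concreteness, here is a quick numeric check of both examples; the particular values C = 0.6 and C+ = 0.9 are made up, and the only point is that the probability-weighted average of the possible posteriors equals the prior.

```python
# Conservation of expected evidence: the expected posterior equals the prior.

# Die example from above: prior credence that the die didn't land 6 is 5/6.
prior = 5 / 6
expected_posterior = (5 / 6) * 1 + (1 / 6) * 0   # to 1 with prob 5/6, to 0 with prob 1/6
assert expected_posterior == prior

# Oracle example with made-up values: C = 0.6, C+ = 0.9. The constraint
# C = (1/3)*C_plus + (2/3)*C_minus pins down where C- would have to land.
C, C_plus = 0.6, 0.9
C_minus = (C - C_plus / 3) * 3 / 2
print(C_minus)                          # ≈ 0.45
print(C_plus / 3 + 2 * C_minus / 3)     # ≈ 0.6, recovering the prior C
```

If the oracle's announced split implied an expected posterior different from C, that announcement would itself be evidence, and C would already have moved.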

Comment by jslocum on Rationalization · 2011-03-20T17:38:11.590Z · LW · GW

You've missed a key point, which is that rationalization refers to a process in which one of many possible hypotheses is arbitrarily selected, and the rationalizer then attempts to support it with a fabricated argument. In your query, you are asking that a piece of data be explained. In the first case, one filters the evidence, rejecting any data that too strongly opposes a pre-selected hypothesis. In the second case, one generates a space of hypotheses that all fit the data, and selects the most likely one as a guess. The difference is between choosing data to fit a hypothesis, and finding a hypothesis that best fits the data. Rationalization is pointing to a blank spot on your map and saying, "There must be a lake somewhere around there, because there aren't any other lakes nearby," while ignoring the fact that it's hot and there's sand everywhere.

Comment by jslocum on Welcome to Less Wrong! · 2011-03-03T17:10:00.205Z · LW · GW

Hello, people.

I first found Less Wrong when I was reading sci-fi stories on the internet and stumbled across Three Worlds Collide. As someone who places a high value on the ability to make rational decisions, I decided that this site is definitely relevant to my interests. I started reading through the sequences a few months ago, and I recently decided to make an account so that I could occasionally post my thoughts in the comments. I generally only post things when I think I have something particularly insightful to say, so my posts tend to be infrequent. Since I am still reading through the sequences, you probably won't be seeing me commenting on any of the more recent posts for a while.

I'm 21 years old, and I live in Cambridge, Mass. I'm currently working on getting a master's degree in computer science. My classes for the spring term are in machine vision and computational cognitive science; I have a decent background in AI-related topics. Hopefully I'll be graduating in August, and I'm not quite sure what I'll be doing after that yet.