Yes, thank you for writing this- I've been meaning to write something like it for a while and now I don't need to! I initially brushed Newcomb's Paradox off as an edge case and it took me much longer than I would have liked to realize how universal it was. A discussion of this type should be included with every introduction to the problem to prevent people from treating it as just some pointless philosophical thought experiment.
You may find this tool useful for making nicer drawings of graphs:
As far as I can tell from the evidence given in the talk, contagious spreading of obesity is a plausible but not directly proven idea. Its plausibility comes from the more direct tests that he gives later in the talk, namely the observed spread of cooperation or defection in iterated games.
However, I agree that it's probably important not to be too quick to talk about contagious obesity, because (a) they haven't done the more direct interventional studies that would show whether it's true, and (b) speculating in public about contentious social issues before you have a solid understanding of what's going on leads to bad outcomes. He could have made more explicit the point that we don't know which causal structure produces the correlations we see- I caught it, but I suspect people paying less attention would come away thinking that the causal model had been proved.
The Moire Eel - move your cursor around and see all the beautiful, beautiful moire patterns.
Social Networks and Evolution: a great Oxford neuroscience talk. I will also shamelessly push this blog post that I wrote about the connection between the work in the lecture and Jared Diamond's thesis that agriculture was the worst mistake in human history.
This is exactly what I was thinking the whole time. Is there any example of supposed "ambiguity aversion" that isn't explained by this effect?
Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it.
This is precisely why people should be encouraged to do it more. I've found that the more you admit to a lack of ability where you don't have the ability, the more people are willing to listen to you where you do.
I also see interesting parallels to the relationship between skeptics and pseudoscience, where we replace skeptics -> rationalists, pseudoscience -> religion. Namely, "things that look like politics are the mindkiller" works as "things that look like pseudoscience are obviously dumb". It provides an opportunity to view yourself as smarter than other people without thinking too hard about the issue.
1) This is fantastic- I keep meaning to read more on how to actually apply Highly Advanced Epistemology to real data, and now I'm learning about it. Thanks!
2) This should be on Main.
3) Does there exist an alternative in the literature to the notation Pr(A = a)? I hadn't realized until now how little sense the equal sign makes there. In standard usage, the equal sign refers either to literal equivalence (or isomorphism), as in functional programming, or to variable assignment, as in imperative programming. This operation is obviously not literal equivalence (the random variable A is not equal to the value a), and it's only sort of like variable assignment: we don't erase our previous information about A, since we want it around when we talk about observing other events involving A.
In analogy with Pearl's "do" notation, I propose an "observe" notation, where Pr(A = a) would be written Pr(obs_A(a)) and read as "the probability that the value a is observed for the variable A," so that we don't overload our precious equal sign. (The overloading of equivalence vs. variable assignment is already stressful enough for the poor piece of notation.)
I'm not proposing that you change your notation for this sequence, but I feel like this notation might serve for clearer pedagogy in general.
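To make the proposal concrete, here is a toy sketch in Python of treating "observation" as an explicit event rather than overloading `=`. The helper names `obs` and `pr` (and the toy numbers) are hypothetical, invented here for illustration:

```python
# Joint distribution over two variables, A (index 0) and B (index 1).
# Probabilities are arbitrary toy values that sum to 1.
joint = {("a1", "b1"): 0.2, ("a1", "b2"): 0.3,
         ("a2", "b1"): 0.4, ("a2", "b2"): 0.1}

def obs(var_index, value):
    """The event 'value is observed for the variable at var_index'."""
    return lambda outcome: outcome[var_index] == value

def pr(event):
    """Pr(event) under the joint distribution."""
    return sum(p for outcome, p in joint.items() if event(outcome))

# Pr(obs_A(a1)) in place of Pr(A = a1) -- no equal sign in sight:
print(pr(obs(0, "a1")))  # 0.5
```

Here the equal sign inside the probability expression disappears entirely; observation is just an event (a predicate on outcomes) passed to the probability function.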
That is the general approach I've been taking on the issue so far- basically I'm interested in learning about consciousness, and I've been going about it by reading papers on the subject.
However, part of the issue that I have is that I don't know what I don't know. I can look up terms that I don't know that show up in papers, but in the literature there are presumably unspoken inferences being made based on "obvious" information.
Furthermore, since I have a bias toward novelty and flashiness, I may fail to notice when a claim blatantly contradicts results that any well-trained neuroscientist or cognitive scientist would know, and end up believing something that couldn't be true.
Do you have recommendations for places where non-experts can ask more knowledgeable people about neuro/cog sci? There exists a cognitive sciences stack exchange, but it appears to be poorly trafficked- there's an average of about one posting per week.
(How many different DAGs are possible if you have 600 nodes? Apparently, >2^600.)
Naively, I would expect it to be much larger than 2^600: even letting each node point at just one other node gives 600^600 possibilities, and the full count of directed graphs on 600 labeled nodes is 2^(600·599), since each ordered pair of distinct nodes either has an edge or doesn't.
And in fact, the exact count of DAGs is some complicated thing, but it grows superexponentially- roughly like 2^(n^2/2), vastly faster than 2^n: http://en.wikipedia.org/wiki/Directed_acyclic_graph#Combinatorial_enumeration
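For what it's worth, the exact labeled-DAG count can be computed from Robinson's recurrence (the one given on that Wikipedia page); a minimal sketch in Python, with the function name `num_dags` being my own:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n: int) -> int:
    """Number of DAGs on n labeled nodes, via Robinson's recurrence:
    a(n) = sum_{k=1..n} (-1)^(k-1) * C(n,k) * 2^(k*(n-k)) * a(n-k),
    where k counts the nodes with no incoming edge."""
    if n == 0:
        return 1
    return sum((-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

print(num_dags(2))  # 3
print(num_dags(3))  # 25
print(num_dags(5))  # 29281
```

The small values (1, 1, 3, 25, 543, 29281, ...) match OEIS A003024, and the growth is indeed far beyond 2^n.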
I wrote a blog post describing the article, talking about criticisms of Crick and Koch's theory, and describing related research involving salvia:
http://the-lagrangian.blogspot.com/2014/07/epilepsy-consciousness-and-salvia.html
Enjoy.
The 100 Questions link is really nice- I particularly liked this question: "How random are synaptic events? And why (both from a functional as well as from a biophysical point of view)?" I am not sure why this question hadn't already occurred to me, but I'm glad I have it now.
The link to the original paper is broken.
Even better than that is this series of blog posts, which talks about color identification across languages, the way that color-space is in a sense "optimally" divided by basic color words, and how children develop a sense for naming colors:
http://www.wired.com/wiredscience/2012/06/the-crayola-fication-of-the-world-how-we-gave-colors-names-and-it-messed-with-our-brains-part-i/
http://www.wired.com/wiredscience/2012/06/the-crayola-fication-of-the-world-how-we-gave-colors-names-and-it-messed-with-our-brains-part-ii/
Also, this from his summary of Nietzsche's "Thus Spoke Zarathustra":
Humanity isn't an end, it's a fork in the road, and you have two options: "Animal" and "Superman". For some reason, people keep going left, the easy way, the way back to where we came from. Fuck 'em. Other people just stand there, staring at the signposts, as if they're going to come alive and tell them what to do or something. Dude, the sign says fucking "SUPERMAN". How much more of a clue do these assholes want?
I am deeply confused by your statement that the complete class theorem only implies that Bayesian techniques are locally optimal. If for EVERY non-Bayesian method there's a better Bayesian method, then the globally optimal technique must be a Bayesian method.
In section 8.1, your example of the gambler's ruin postulates that both agents have the same starting resources, but this is exactly the case in which the gambler's ruin doesn't apply. That might be worth changing.
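To illustrate the point: in a fair game, the classic gambler's-ruin asymmetry only appears with unequal bankrolls; with equal starting resources the game is symmetric and neither side is favored. A quick Monte Carlo sketch (function name `ruin_prob` and the toy parameters are mine):

```python
import random

def ruin_prob(start_a, start_b, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that player A goes broke
    in a fair coin-flip game with a stake of 1 unit per round."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        a = start_a
        total = start_a + start_b
        while 0 < a < total:  # play until one side holds everything
            a += 1 if rng.random() < 0.5 else -1
        if a == 0:
            ruined += 1
    return ruined / trials

# Equal bankrolls: ruin probability ~ 0.5, matching n_b/(n_a+n_b) = 0.5.
print(ruin_prob(5, 5))
# Unequal bankrolls: the smaller stack is usually ruined (~0.8 here).
print(ruin_prob(2, 8))
```

The estimates agree with the standard fair-game formula P(A ruined) = n_b/(n_a + n_b), which is exactly 1/2 when the starting resources are equal.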
According to Wikipedia, there are at least 4 groups currently working on LFTRs, one of which is China: http://en.wikipedia.org/wiki/LFTR#Recent_developments
Even a few years of delay can make a big difference if you are in the middle of a major war. If Galston hadn't published his results and they hadn't been rediscovered until a decade or two later, the US probably wouldn't have used Agent Orange in Vietnam. Similarly with chlorine gas in WWI, atomic bombs in WWII, etc. Granted, delaying an invention doesn't necessarily make the overall outcome better. If the atomic bomb hadn't been invented until the 1950s and we didn't have the examples of Hiroshima and Nagasaki, then the US or USSR would probably have been more likely to use them against each other.
Even if the BB and the psychic are in causally disconnected parts of your model, the fact that they have the same probability of being correlated with the card doesn't imply that the Causal Markov Condition is broken. To show that, you would need to specify all of the parent nodes of the BB in your model, calculate the probability of its being correlated with the card, and then see whether knowledge of the psychic would change your probability for the BB. Since all known physics is local, I can't think of anything that would make this happen if the psychic is outside the past light cone of the BB. Boundary conditions on the universe as a whole that may or may not make them correlate have no effect on whether the CMC holds.
Can you provide an example? I would claim that for any model in which you have a mathematical truth as a node in a causal graph, you can replace that node by whatever series of physical events caused you to believe that mathematical truth.
The CMC is not strictly violated in physics as far as we know. If you specify the state of the universe for the entire past light cone of some event, then you uniquely specify the event. The example that you gave of the rock shooting out of the pond indeed does not violate the laws of physics- you simply shoved the causality under the rug by claiming that the edge of the pond fluctuated "spontaneously". This is not true. The edge of the pond fluctuating was completely specified by the past light cone of that event. This is the sense in which the CMC runs deeper than the 2nd law of thermodynamics- because the 2nd "law" is probabilistic, you can find counterexamples to it in an infinite universe. If you actually found a counterexample to the CMC, it would make physics essentially impossible.
An omniscient agent could still describe a causal structure over the universe- it would simply be deterministic (which is a special case of a probabilistic causal structure). For instance, consider a being that knew all the worldlines of all particles in the universe. It could deduce a causal structure by re-describing these worldlines as a particular solution to a local differential equation. The key difference between causal vs. acausal descriptions is whether or not they are local.
I think it makes more sense to say that this test rules out ideas that can't actually be tested as hypotheses. An idea can only be tested by observation once it is formulated as a causal network. Once it's formulated as a testable hypothesis, you can simply discard this epiphenomenal example by Solomonoff induction.