Opposing Bohr's interpretation.
As does Chesterton, less explicitly:
Mere light sophistry is the thing that I happen to despise most of all things, and it is perhaps a wholesome fact that this is the thing of which I am generally accused. I know nothing so contemptible as a mere paradox; a mere ingenious defence of the indefensible.
and at length.
I get the impression that he (thankfully!) eased off on that particular template as time went on.
I suspect most self-identified communists would baulk at the description of their ideology as "complete state control of many facets of life".
Here's how I think about the distinction on a meta-level:
"It is best to act for the greater good (and acting for the greater good often requires being awesome)."
vs.
"It is best to be an awesome person (and awesome people will consider the greater good)."
where "acting for the greater good" means "having one's own utility function in sync with the aggregate utility function of all relevant agents" and "awesome" means "having one's own terminal goals in sync with 'deep' terminal goals (possibly inherent in being whatever one is)" (e.g. Sam Harris/Aristotle-style 'flourishing').
Cool; I take that back. Sorry for not reading closely enough.
Ah, good point. It's like the prior, considered as a regularizer, is too "soft" to encode the constraint we want.
A Bayesian could respond that we rarely actually want sparse solutions -- in what situation is a physical parameter identically zero? -- but rather solutions which have many near-zeroes with high probability. The posterior would satisfy this, I think. In this sense a Bayesian could justify the Laplace prior as approximating a so-called "spike-and-slab" prior (which I believe leads to combinatorial intractability similar to that of the full L0 problem).
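For concreteness (my notation, not the original comment's), a spike-and-slab prior mixes a point mass at zero with a diffuse "slab" for each coefficient:

p(\beta_j) = \pi \, \delta_0(\beta_j) + (1 - \pi) \, \mathcal{N}(\beta_j \mid 0, \tau^2)

Exact posterior computation then has to sum over all 2^p configurations of which coefficients sit in the spike, which is the combinatorial blow-up mentioned above.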
Also, without L0 the frequentist doesn't get fully sparse solutions either. The shrinkage is gradual; sometimes there are many tiny coefficients along the regularization path.
[FWIW I like the logical view of probability, but don't hold a strong Bayesian position. What seems most important to me is getting the semantics of both Bayesian (= conditional on the data) and frequentist (= unconditional, and dealing with the unknowns in some potentially nonprobabilistic way) statements right. Maybe there'd be less confusion -- and more use of Bayes in science -- if "inference" were reserved for the former and "estimation" for the latter.]
Many L1 constraint-based algorithms (for example the LASSO) can be interpreted as producing maximum a posteriori Bayesian point estimates with Laplace (= double exponential) priors on the coefficients.
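To make the correspondence concrete, here is a minimal numerical sketch (my own toy example, not from the original comment; it assumes sklearn's parameterization, which minimizes (1/(2n))||y - Xb||^2 + alpha*||b||_1). With a Gaussian likelihood of known noise scale sigma and independent Laplace(0, b) priors on the coefficients, the negative log-posterior is the Lasso objective up to a positive rescaling, so the MAP estimate should match the Lasso fit once b = sigma^2 / (n * alpha):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
sigma = 1.0  # noise scale, assumed known
y = X @ beta_true + sigma * rng.normal(size=n)

alpha = 0.1  # sklearn Lasso: (1/(2n))||y - Xb||^2 + alpha*||b||_1
lasso = Lasso(alpha=alpha, fit_intercept=False).fit(X, y)

# Laplace prior scale that matches the two objectives term by term.
b_scale = sigma**2 / (n * alpha)

def neg_log_posterior(beta):
    # Gaussian log-likelihood + Laplace log-prior, up to constants.
    sq_err = np.sum((y - X @ beta) ** 2) / (2 * sigma**2)
    l1 = np.sum(np.abs(beta)) / b_scale
    return sq_err + l1

map_fit = minimize(neg_log_posterior, np.zeros(p), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10, "maxiter": 20000})

print(np.round(lasso.coef_, 3))  # the two estimates should agree...
print(np.round(map_fit.x, 3))    # ...up to optimizer tolerance
```

The agreement is only up to optimizer tolerance -- Nelder-Mead on a non-smooth objective will hover near, rather than exactly at, the Lasso's exact zeroes.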
This is just the (intended) critique of utilitarianism itself, which says that the utility functions of others are (in aggregate) exactly what you should care about.
What does "intrinsically teleological" mean?
What about mentioning the St. Petersburg paradox? This is a pretty striking issue for EUM, IMHO.
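For readers who haven't seen it (my summary, not part of the original comment): a fair coin is tossed until the first head, with payoff 2^n if that happens on toss n, so the expected payoff is

\mathbb{E}[X] = \sum_{n=1}^{\infty} \left(\tfrac{1}{2}\right)^{n} 2^{n} = \sum_{n=1}^{\infty} 1 = \infty

yet almost no one would pay more than a modest sum to play -- which is exactly the strain it puts on naive expected-utility maximization.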
I have another possible explanation, which I think deserves a far greater "probability mass": images make scientific articles seem more plausible for (some of) the same reasons they make advertising or magazine articles seem more plausible -- i.e., pre-cognitive reasons which may have little to do with the articles' content being scientific. McCabe and Castel don't control for this, but it is somewhat supported by their comparison of their study with Weisberg's:
The simple addition of cognitive neuroscience explanations may affect people’s conscious deliberation about the quality of scientific explanations, whereas the brain images may influence a less consciously controlled aspect of ratings in the current experiments.
"-Scientific content, -scientific images" includes most advertising, which is pretty obviously made more convincing through images. For an example of "+scientific content, -scientific images", think of the many articles in (say) New Scientist that are made more pleasant (and quite possibly more convincing) by more-or-less purely aesthetic graphics.
I can also think of some "less consciously controlled" reasons that are science-specific. Images of brain scans lend a kind of "hard science" sheen to the articles' claims -- in much the same way that CGI molecules spinning around hair follicles add to shampoo advertising's claims of sheen ("-scientific content, +scientific images"). McCabe & Castel again:
This sort of visual evidence of physical systems at work, which is typical of "harder" sciences like physics and chemistry, is not typically apparent in studies of cognition, where the evidence for cognitive processes is indirect, by nature. Indeed, it is important to note that while brain images give the appearance of direct measurement of the physical substrate of cognitive processes, techniques like fMRI measure changes in relative oxygenation of blood in regions of the brain, which is also indirect. Of course, it is unlikely that this subtlety is appreciated by lay readers.
In other words, images of brain scans create the impression that underlying physical mechanisms are better understood than they actually are. This is also an issue in pop science reporting:
[...] many cognitive neuroscientists have expressed frustration at what they see as the oversimplification of their data, and have suggested that efforts be made to influence media coverage of brain imaging research to include discussion of the limitations of fMRI, in order to reduce the misrepresentation of these data.
So how does this study pertain to physicalism? As I see it, this study underscores the ease with which intelligent people -- including physicalists -- can be fooled into thinking that scientific studies explain more than they do by the use of overly concrete, hard-science-flavored imagery (and language). It shows how easy it is to jump from an image of a presumed physical substrate for some phenomenon to the belief that we understand that phenomenon better than we do. In other words, it shows how the impression of reductionism can function as a curiosity-stopper.
As I understand it, that is a common criticism of reductionism in practice.
Also, this is why I'm uncomfortable with the overuse of overly precise terms from maths and science -- like referring to one's own "probability mass" on Less Wrong, or the Churchlands bemoaning their "serotonin levels" rather than saying they feel horrible (see here, p. 69). Sometimes an unwarranted science-y aesthetic can mislead.
Luke -- your typology of ends reminds me of something I was reading recently by Jonathan Edwards. I know this is not an atheology post, and the Edwards work isn't particularly empirical, but I thought it might be an interesting antecedent nonetheless.