This is a field in which the discoverer of the theorem that rational agents cannot disagree was given the highest possible honours...
I can't say I disagree.
Of course experimental design is very important in general. But VAuroch and I agree that when two designs give rise to the same likelihood function, the information that comes in from the data is equivalent. We disagree about the weight to give to the information that comes in from what the choice of experimental design tells us about the experimenter's prior state of knowledge.
you're ignoring critical information
No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.
Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still considered a "star free agent". Well, perhaps so -- but I think this little episode illustrates very well that a doctor's unsupported opinion about the efficacy of his or her novel experimental treatment isn't worth the shit s/he wants to place inside your skull.
Thanks for the sci-hub link. So awesome!
You're going to have a hard time convincing me that... vectors are a necessary precursor for regression analysis...
So you're fitting a straight line. Parameter estimates don't require linear algebra (that is, vectors and matrices). Super. But the immediate next step in any worthwhile analysis of data is calculating a confidence set (or credible set, if you're a Bayesian) for the parameter estimates; good luck teaching that if your students don't know basic linear algebra. In fact, all of regression analysis, from the most basic least squares estimator through multilevel/hierarchical regression models up to the most advanced sparse "p >> n" method, is built on top of linear algebra.
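To make that concrete, here's a minimal sketch (on made-up synthetic data) of that very first post-estimation step for a straight-line fit -- even the 95% confidence interval for the slope runs through the matrix (X^T X)^{-1}:

```python
# Minimal sketch, assuming synthetic data: the standard errors behind an OLS
# confidence interval come from the diagonal of sigma^2 * (X^T X)^{-1}, so even
# a straight-line fit's "immediate next step" is a linear-algebra computation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

X = np.column_stack([np.ones_like(x), x])            # design matrix
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                         # least-squares estimates
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (len(y) - X.shape[1])   # residual variance
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))          # standard errors

t_crit = stats.t.ppf(0.975, df=len(y) - X.shape[1])
for name, b, s in zip(["intercept", "slope"], beta_hat, se):
    print(f"{name}: {b:.3f} +/- {t_crit * s:.3f}  (95% CI)")
```

Everything fancier in regression is more of the same story with bigger matrices.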
(Why do I have such strong opinions on the subject? I'm a Bayesian statistician by trade; this is how I make my living.)
Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]
The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.
Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".
Before this, I also hadn't heard anybody speak about taking those kinds of vows to oneself.
It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.
I don't disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.
(Apropos of nothing, the word "bookkeeping" has the unusual property of containing three consecutive sets of doubled letters: oo, kk, ee.)
I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott's variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.
Scott's technique for shredding papers' conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That's not really a math thing, and it plays right to his strengths.
Maybe for the bit about signalling in the last paragraph...? Just guessing here; perhaps Kawoomba will fill us in.
I like it when I can just point folks to something I've already written.
The upshot is that there are two things going on here that interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data and any model, there is a model "close" to the assumed one that predicts absolute disaster (or any result) just for that specific data set, and is otherwise well-behaved.
As the authors themselves put it:
The mechanism causing this “brittleness” has its origin in the fact that, in classical Bayesian Sensitivity Analysis, optimal bounds on posterior values are computed after the observation of the specific value of the data, and that the probability of observing the data under some feasible prior may be arbitrarily small... This data dependence of worst priors is inherent to this classical framework and the resulting brittleness under finite-information can be seen as an extreme occurrence of the dilation phenomenon (the fact that optimal bounds on prior values may become less precise after conditioning) observed in classical robust Bayesian inference.
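If a toy illustration helps, here's my own cartoon of that mechanism (not the paper's formal construction): contaminate the assumed sampling model with a tiny-weight component that is a narrow spike centred exactly on the data set you happened to observe. The mixture stays within total variation epsilon of the base model no matter how narrow the spike, but once the spike is narrow enough, the posterior puts essentially all of its weight on the contaminating component -- for this specific data set and no other.

```python
# A toy cartoon of the brittleness mechanism (my own construction): a model
# within total-variation distance eps of the base model, built *after* seeing
# the data, can grab essentially all of the posterior weight for that data set.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)        # the specific data set we observed

eps = 1e-6                                # mixture weight: "distance" from the base model
base_loglik = stats.norm(0, 1).logpdf(y).sum()

for width in [1.0, 1e-1, 1e-3, 1e-6]:
    # Contaminating component: a narrow spike centred on the observed data.
    spike_loglik = stats.norm(y, width).logpdf(y).sum()
    log_num = np.log(eps) + spike_loglik
    log_den = np.logaddexp(log_num, np.log(1 - eps) + base_loglik)
    print(f"spike width {width:g}: posterior weight on the 'disaster' component "
          f"= {np.exp(log_num - log_den):.6f}")
```

Whatever disastrous prediction you attach to the contaminating component, the posterior now takes it seriously -- but only for this data set.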
It's a rather confusing way of referring to a "biased point of view". Saying that "Person A has privilege" wrt. some issue is a claim that A's overall observations and experiences are unrepresentative, and so she should rely on others' experiences as much as on her own.
That's not quite correct; I think it's best to start with the concept of systematic oppression. Suppose for the sake of argument that some group of people is systematically oppressed, that is, on account of their group identity, the system in which they find themselves denies them access to markets, or subjects them to market power or physical violence, or vilifies them in the public sphere -- you can provide your own examples. The privileged group is just the set complement of the oppressed group. An analogy: systematic oppression is the subject and privilege (in the SJ jargon sense) is the negative space.
The "biased point-of-view" thing follows as a near-corollary because it's human nature to notice one's oppression and to take one's absence-of-oppression for granted as a kind of natural status quo, a background assumption.
Next question: in what way did Aaronson's so-called wealthy white male privilege actually benefit him? To answer this, all we need to do is imagine, say, a similarly terrified poor black trans nerd learning to come out of their shell. Because I've chosen an extreme contrast, it's pretty clear who would have the easier time of it and why. Once you can see it in high contrast, it's pretty easy to relax the contrast and keep track of the relative benefits that privilege conveys.
Someone who really cared about time management wouldn't be reading this site in the first place.
I'm a SSC fan and highly sympathetic to SJ goals and ideals. One of the core LW meetup members in my city can't stand to read SSC on account of what he perceives to be constant bashing of SJ. (I've already checked and verified that his perception of the proportion of SJ bashing in SSC posts is a massive overestimate, probably caused by selection bias.) As a specific example of verbiage that he considers typical of SSC he cited:
And the people who talk about “Nice Guys” – and the people who enable them, praise them, and link to them – are blurring the already rather thin line between “feminism” and “literally Voldemort”.
When I read that line, I didn't take it literally -- in spite of the use of the word "literally". I just kind of skipped over it. But after it was pointed out to me that I ought to take it literally, well... "frothing" is a pretty good description.
I remain a SSC fan, but I'm less likely to just blank out the meaning of these kinds of things now.
Embarrassingly, I didn't have the "who feeds Paris" realization until last year -- well after I thought I had achieved a correct understanding of and appreciation for basic microeconomic thought.
Nice choice of username. :-)
Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.
You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.
FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.
Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.
In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
I was working in protein structure prediction.
I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.
Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.
The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.
Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.
Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.
But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.
Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:
Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.
The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the "interpreter", physics, that realizes that abstract computation.
Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.
I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.
Mental processes inside someone's mind actually happen in physical reality.
Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.
Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.
That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.
I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.
I wasn't talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It's not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.
I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm engaging conscious effort in trying to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.
Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.
That's a pretty good example of the Fallacy of Gray right there.
Hmm... let me think...
The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura's first projection to build in the static part of the computation -- physics -- thereby making the resulting program substrate-independent. The entire process of creating a viable emulation strategy happens when we humans run a biological computation that (i) tells us what is necessary to create a substrate-independent brain-state spec and (ii) solves a lot of practical physics simulation problems, so that to generate an em, the brain-state spec is all we need. This is somewhat analogous to Futamura's second projection: we take the ordered pair (biological computation, physics), run a particular biological computation on it, and get a brain-state-to-em compiler.
So intelligence is acting on itself indirectly through the fact that an "interpreter", physics, is how reality manifests intelligence. We aim to specialize physics out of the process of running the biological computations that implement intelligence, and by necessity, we're using a biological computation that implements intelligence to accomplish that goal.
It won't have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.
My intuition -- and it's a Good one -- is that the discontinuity is produced by intelligence acting to increase itself. It's built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)
Okay, here's a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act as a compiler, by running it on an interpreter and a program; (ii) build a compiler, by running it on itself and an interpreter; (iii) build a generic interpreter-to-compiler converter, by running it on itself and itself. So one piece of technology "telescopes" by acting on itself. These are the Three Projections of Doctor Futamura.
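For the curious, here's a Python cartoon of projection (i) only -- a genuinely self-applicable partial evaluator, which projections (ii) and (iii) require, won't fit in a comment box. The toy "language" and the mix function below are my own inventions for illustration:

```python
# A cartoon of the first Futamura projection, under toy assumptions: a trivial
# "language" of ("add", k) / ("mul", k) instructions, an interpreter for it,
# and a crude partial evaluator that specializes the interpreter to a fixed
# program by doing all the instruction dispatch ahead of time.

def interpret(program, x):
    """The interpreter: re-dispatches on every instruction, every run."""
    for op, k in program:
        if op == "add":
            x = x + k
        elif op == "mul":
            x = x * k
        else:
            raise ValueError(op)
    return x

def mix(program):
    """Specialize `interpret` to a fixed program: emit straight-line Python
    source once, with the dispatch already resolved (first projection)."""
    body = ["def compiled(x):"]
    for op, k in program:
        body.append(f"    x = x {'+' if op == 'add' else '*'} {k}")
    body.append("    return x")
    namespace = {}
    exec("\n".join(body), namespace)
    return namespace["compiled"]

prog = [("add", 3), ("mul", 2)]
assert interpret(prog, 10) == mix(prog)(10) == 26   # same answer, no interpreter left
```

Projections (ii) and (iii) are what you get when mix is written in the language it specializes, so it can be run on itself; that's the part the cartoon leaves out.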
Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?
As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but it seems like the deliberate practice of introspecting on how one's goals decompose into subgoals and on how they might be traded off against one another to achieve a more satisfactory state of things is an idea that is novel, distinct, and conceptually intricate enough to deserve its own label.
Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.
There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.
He had doubts, he extinguished them, and that's what makes him guilty.
This is not the whole story. In the quote
He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.
you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.
In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.
And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
tl;dr: No, the subject of the site is wider than that.
Long version: IIRC, EY originally conceived of rationality as comprising two relatively distinct domains: epistemic rationality, the art and science of ensuring the map reflects the territory, and instrumental rationality, the art and science of making decisions and taking actions that constrain the future state of the universe according to one's goals. Around the time of the fork of CFAR off of SIAI-that-was, EY had expanded his conception of rationality to include a third domain: human rationality, the art and science of coping with ape-brain.
In my view, these three domains have core subject matter and interfacial subject matter: the core of epistemic rationality is Bayesian epistemology; the core of instrumental rationality is expected utility optimization; the core of human rationality is Thinking, Fast and Slow and construal level theory. At the interface of epistemic and instrumental rationality sit topics like explore/exploit trade-offs and value-of-information calculations; at the interface of epistemic rationality and human rationality sit topics like belief vs. alief, heuristics and biases, and practical techniques for updating on and responding to new information in ways large and small; at the interface of instrumental rationality and human rationality sit topics like goal factoring/funging and habit formation; and right at the intersection of all three, I would locate techniques like implementing tight feedback loops.
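To make one of the interfacial topics concrete, here's a toy value-of-information calculation (my own example, numbers invented): two actions, one binary unknown, and the question of how much a perfect test of that unknown would be worth before deciding.

```python
# Toy expected-value-of-perfect-information calculation (invented numbers):
# decide between a risky project and a safe one, given uncertainty about
# whether the risky one would succeed.
p_good = 0.3                                        # prior P(risky project succeeds)
payoff = {"risky": {True: 100.0, False: -40.0},     # utility by action and state
          "safe":  {True: 10.0,  False: 10.0}}

def expected_utility(action, p):
    return p * payoff[action][True] + (1 - p) * payoff[action][False]

# Decide now, with no further information:
eu_now = max(expected_utility(a, p_good) for a in payoff)

# Decide after learning the true state (perfect information):
eu_informed = (p_good * max(payoff[a][True] for a in payoff)
               + (1 - p_good) * max(payoff[a][False] for a in payoff))

evpi = eu_informed - eu_now                         # what the test is worth, at most
print(f"EU(act now) = {eu_now:.1f}, EU(after perfect test) = {eu_informed:.1f}, EVPI = {evpi:.1f}")
```

Any real test is noisier than that, so its value of information is bounded above by the EVPI figure.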
clearly advertising propaganda
It's not clear to me -- I'm not even sure what you think it's advertising!
( ETA: I wrote a bunch of irrelevant stuff, but then I scrolled up and saw (again, but it somehow slipped my mind even though I friggin' quoted it in the grandparent, I'm going senile at the tender age of 36) that you specifically think it's advertising for CFAR, so I've deleted the irrelevant stuff. )
Advertising for CFAR seems like a stretch, because -- although very nice things are said about Anna Salamon -- the actual product CFAR sells isn't mentioned at all.
My conclusion: there might be an interesting and useful post to be written about how epistemic rationality and techniques for coping with ape-brain intersect, and ShannonFriedman might be capable of writing it. Not there yet, though.
...a long advertisement for CFAR...
...containing an immediately useful (or at least, immediately practicable) suggestion, as, er, advertised.
No, we totally do... in principle.
Awesome, thanks!
Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)
I invite you to spell out the prediction that you drew about the evolution of human intelligence from your theory of humor and how the recently published neurology research verified it.
What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...
In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
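For concreteness, here's a minimal sketch of the deterministic weighted majority algorithm on a toy binary prediction task -- the "experts" here are just canned prediction sequences I invented. Experts that guess wrong get their weights multiplied by beta each round, and the combined predictor's mistake count is within a constant factor of (best expert's mistakes + log N).

```python
# Minimal sketch of the deterministic weighted majority algorithm, with
# invented "experts" (fixed 0/1 prediction sequences).
import numpy as np

def weighted_majority(expert_preds, outcomes, beta=0.5):
    """expert_preds: (T, N) array of 0/1 expert predictions;
    outcomes: length-T array of true 0/1 labels."""
    T, N = expert_preds.shape
    w = np.ones(N)
    mistakes = 0
    for t in range(T):
        preds = expert_preds[t]
        guess = 1 if w[preds == 1].sum() >= w[preds == 0].sum() else 0
        mistakes += int(guess != outcomes[t])
        w[preds != outcomes[t]] *= beta     # penalize every expert that was wrong
    return mistakes, w

rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=200)
good = (outcomes + (rng.random(200) < 0.1)) % 2      # right about 90% of the time
noise = rng.integers(0, 2, size=(200, 4))            # four coin-flipping experts
experts = np.column_stack([good, noise])
print(weighted_majority(experts, outcomes)[0], "mistakes by the combined predictor")
```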