Posts

Rationality Quotes December 2013 2013-12-17T20:43:49.506Z
[LINK] Bets do not (necessarily) reveal beliefs 2013-05-27T20:13:15.166Z
[LINK] Raw Story: US seizes operator accounts of a subsidiary of Mt. Gox 2013-05-24T18:14:34.898Z
Meetup : LessWrong Ottawa 2013-05-09T18:18:34.466Z
[Link] Machiavelli in historical context 2012-07-31T19:41:57.276Z
[Video] Presentation on metacognition contains good intro to basic LW ideas 2012-06-12T13:12:13.283Z
Meetup : Less Wrong Ottawa 2011-07-05T01:19:28.276Z
Meetup : Less Wrong Ottawa 2011-06-28T02:59:18.305Z
Ottawa LW meetup, June 23, 7pm; two Bayesian Conspiracy sessions 2011-06-18T03:58:00.633Z
Ottawa LW meetup, June 16, 7pm; two Bayesian Conspiracy sessions 2011-06-11T18:53:29.586Z
Ottawa LW meetup, June 9, 7pm; two Bayesian Conspiracy sessions 2011-06-05T19:40:02.077Z
Ottawa LW meetup, June 2, 7pm; two Bayesian Conspiracy sessions 2011-05-28T03:22:53.260Z
Ottawa LW meetup, May 26, 7pm; Bayes study group, May 26, 9am 2011-05-19T19:43:58.873Z
Group of Latter Day Roleplayers 2011-05-15T21:31:46.969Z
Ottawa LW meetup, May 9, 7pm; Bayes study group, May 5, 9pm 2011-05-02T01:04:07.300Z
Ottawa LW Meetup: Thursday April 28, 7:00pm (ADDED: Bayes study group satellite meeting) 2011-04-21T21:35:31.080Z
Ottawa LW Meetup Saturday April 16th 2011-04-06T16:20:21.315Z
Trip from Ottawa, Canada to NYC on weekend of April 2 2011-03-20T21:21:35.021Z
[LINK] Discovery shuttle launch viewed from an airliner 2011-03-04T16:04:00.331Z
Alt text of today's xkcd addresses akrasia 2011-02-19T02:03:20.482Z
Easy Intelligence Augmentation or Internet Wackaloonery? 2011-01-23T02:17:14.029Z
Intelligence explosion in plain, vanilla, mixed berry, and coffee flavors 2010-10-09T20:07:13.435Z
Rationality quotes: August 2010 2010-08-03T00:16:45.738Z
TED Talks: Daniel Kahneman 2010-03-06T01:45:39.377Z
Case study: abuse of frequentist statistics 2010-02-21T06:35:24.216Z
Calibration for continuous quantities 2009-11-21T04:53:32.443Z
My God! It's full of Nash equilibria! 2009-08-16T19:59:33.959Z
Are calibration and rational decisions mutually exclusive? (Part two) 2009-07-24T00:49:14.505Z
Are calibration and rational decisions mutually exclusive? (Part one) 2009-07-23T05:15:45.853Z
Open Thread: June 2009 2009-06-01T18:46:09.791Z
TED Talks for Less Wrong 2009-05-02T03:32:09.269Z
Eternal Sunshine of the Rational Mind 2009-04-07T15:10:27.725Z

Comments

Comment by Cyan on Rationality Quotes Thread September 2015 · 2015-09-27T19:36:56.172Z · LW · GW

This is a field in which the discoverer of the theorem that rational agents cannot disagree was given the highest possible honours...

Comment by Cyan on Beautiful Probability · 2015-09-02T21:28:16.177Z · LW · GW

I can't say I disagree.

Comment by Cyan on Beautiful Probability · 2015-08-31T17:28:39.940Z · LW · GW

Of course experimental design is very important in general. But VAuroch and I agree that when two designs give rise to the same likelihood function, the information that comes in from the data is equivalent. We disagree about how much weight to give the information that the choice of experimental design conveys about the experimenter's prior state of knowledge.

Comment by Cyan on Beautiful Probability · 2015-08-31T02:22:41.084Z · LW · GW

you're ignoring critical information

No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.

Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still considered a "star free agent". Well, perhaps so -- but I think this little episode illustrates very well that a doctor's unsupported opinion about the efficacy of his or her novel experimental treatment isn't worth the shit s/he wants to place inside your skull.

Comment by Cyan on LessWrong Help Desk - free paper downloads and more (2014) · 2015-08-20T21:56:19.893Z · LW · GW

Thanks for the sci-hub link. So awesome!

Comment by Cyan on Thinking like a Scientist · 2015-07-26T17:38:44.683Z · LW · GW

You're going to have a hard time convincing me that... vectors are a necessary precursor for regression analysis...

So you're fitting a straight line. Parameter estimates don't require linear algebra (that is, vectors and matrices). Super. But the immediate next step in any worthwhile analysis of data is calculating a confidence set (or credible set, if you're a Bayesian) for the parameter estimates; good luck teaching that if your students don't know basic linear algebra. In fact, all of regression analysis, from the most basic least squares estimator through multilevel/hierarchical regression models up to the most advanced sparse "p >> n" method, is built on top of linear algebra.

(Why do I have such strong opinions on the subject? I'm a Bayesian statistician by trade; this is how I make my living.)
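
To see the dependence concretely, here is a minimal illustrative sketch in Python (the data and the function name are made up): fitting a straight line and computing normal-approximation confidence intervals, where both the estimates and the standard errors come straight out of the matrix (X^T X)^{-1}.

```python
import numpy as np

def ols_with_intervals(X, y, z=1.96):
    """X: (n, p) design matrix (include a column of ones for the intercept);
    y: (n,) response vector. Returns estimates and approximate 95% intervals."""
    XtX_inv = np.linalg.inv(X.T @ X)       # the linear algebra you can't avoid
    beta_hat = XtX_inv @ X.T @ y           # least squares estimates
    n, p = X.shape
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)   # residual variance estimate
    se = np.sqrt(sigma2_hat * np.diag(XtX_inv))
    return beta_hat, beta_hat - z * se, beta_hat + z * se

# Fitting a straight line y = a + b*x and getting intervals for (a, b):
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=50)
X = np.column_stack([np.ones_like(x), x])
print(ols_with_intervals(X, y))
```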

Comment by Cyan on Ephemeral correspondence · 2015-04-28T20:11:04.254Z · LW · GW

Consciousness is the most recent module, and that does mean [that drawing causal arrows from consciousness to other modules of human mind design is ruled out, evolutionarily speaking.]

The causes of the fixation of a genotype in a population are distinct from the causal structures of the resulting phenotype instantiated in actual organisms.

Comment by Cyan on The path of the rationalist · 2015-04-14T14:48:16.451Z · LW · GW

Sure, I agree with all of that. I was just trying to get at the root of why "nobody asked [you] to take either vow".

Comment by Cyan on The path of the rationalist · 2015-04-14T13:46:54.619Z · LW · GW

Before I also haven't heard anybody speak about taking those kinds of vows to oneself.

It's not literal. It's an attempt at poetic language, like The Twelve Virtues of Rationality.

Comment by Cyan on The Truth About Mathematical Ability · 2015-02-19T01:06:24.863Z · LW · GW

I don't disagree with this. A lot of the kind of math Scott lacks is just rather complicated bookkeeping.

(Apropos of nothing, the word "bookkeeping" has the unusual property of containing three consecutive sets of doubled letters: oo, kk, ee.)

Comment by Cyan on The Truth About Mathematical Ability · 2015-02-13T16:25:56.228Z · LW · GW

I have the sort of math skills that Scott claims to lack. I lack his skill at writing, and I stand in awe (and envy) at how far Scott's variety of intelligence takes him down the path of rationality. I currently believe that the sort of reasoning he does (which does require careful thinking) does not cluster with mathy things in intelligence-space.

Comment by Cyan on The Truth About Mathematical Ability · 2015-02-12T20:29:20.462Z · LW · GW

Scott's technique for shredding papers' conclusions seems to me to consist mostly of finding alternative stories that account for the data and that the authors have overlooked or downplayed. That's not really a math thing, and it plays right to his strengths.

Comment by Cyan on Rationality Quotes January 2015 · 2015-01-23T19:47:51.288Z · LW · GW

Maybe for the bit about signalling in the last paragraph...? Just guessing here; perhaps Kawoomba will fill us in.

Comment by Cyan on Comments on "When Bayesian Inference Shatters"? · 2015-01-08T18:05:09.999Z · LW · GW

I like it when I can just point folks to something I've already written.

The upshot is that there are two things going on here that interact to produce the shattering phenomenon. First, the notion of closeness permits some very pathological models to be considered close to sensible models. Second, the optimization to find the worst-case model close to the assumed model is done in a post-data way, not in prior expectation. So what you get is this: for any possible observed data set and any assumed model, there is a model "close" to the assumed one that predicts absolute disaster (or any result you like) just for that specific data set and is otherwise well-behaved.

As the authors themselves put it:

The mechanism causing this “brittleness” has its origin in the fact that, in classical Bayesian Sensitivity Analysis, optimal bounds on posterior values are computed after the observation of the specific value of the data, and that the probability of observing the data under some feasible prior may be arbitrarily small... This data dependence of worst priors is inherent to this classical framework and the resulting brittleness under finite-information can be seen as an extreme occurrence of the dilation phenomenon (the fact that optimal bounds on prior values may become less precise after conditioning) observed in classical robust Bayesian inference.
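
Here is a toy numerical sketch of that mechanism (my own construction, not the paper's setup; all the numbers are made up): a perturbed model that differs from the assumed one with only tiny prior probability, but whose pathological component is concentrated exactly at the data actually observed, hijacks the posterior.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

y_obs = 1.3
eps = 1e-6   # the perturbed model differs from the assumed one with prior probability eps

# Assumed model: theta ~ N(0,1), y|theta ~ N(theta,1); the posterior mean of theta is y/2.
post_mean_assumed = y_obs / 2

# Perturbation chosen *after* seeing y_obs: with probability eps, theta = 1e6 and
# y ~ N(y_obs, 1e-6) -- a spike placed right at the observed data.
marginal_assumed = normal_pdf(y_obs, 0.0, math.sqrt(2.0))   # prior predictive density at y_obs
marginal_spike = normal_pdf(y_obs, y_obs, 1e-6)             # enormous density at y_obs
w_spike = eps * marginal_spike / (eps * marginal_spike + (1 - eps) * marginal_assumed)

post_mean_perturbed = w_spike * 1e6 + (1 - w_spike) * post_mean_assumed
print(post_mean_assumed, post_mean_perturbed)   # ~0.65 versus several hundred thousand
```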

Comment by Cyan on Open thread, Dec. 29, 2014 - Jan 04, 2015 · 2014-12-31T21:44:00.497Z · LW · GW

It's a rather confusing way of referring to a "biased point of view". Saying that "Person A has privilege" wrt. some issue is a claim that A's overall observations and experiences are unrepresentative, and so she should rely on others' experiences as much as on her own.

That's not quite correct; I think it's best to start with the concept of systematic oppression. Suppose for the sake of argument that some group of people is systematically oppressed, that is, on account of their group identity, the system in which they find themselves denies them access to markets, or subjects them to market power or physical violence, or vilifies them in the public sphere -- you can provide your own examples. The privileged group is just the set complement of the oppressed group. An analogy: systematic oppression is the subject and privilege (in the SJ jargon sense) is the negative space.

The "biased point-of-view" thing follows as a near-corollary because it's human nature to notice one's oppression and to take one's absence-of-oppression for granted as a kind of natural status quo, a background assumption.

Next question: in what way did Aaronson's so-called wealthy white male privilege actually benefit him? To answer this, all we need to do is imagine, say, a similarly terrified poor black trans nerd learning to come out of their shell. Because I've chosen an extreme contrast, it's pretty clear who would have the easier time of it and why. Once you can see it in high contrast, it's pretty easy to relax the contrast and keep track of the relative benefits that privilege conveys.

Comment by Cyan on Has LessWrong Ever Backfired On You? · 2014-12-18T21:08:46.178Z · LW · GW

Someone who really cared about time management wouldn't be reading this site in the first place.

Comment by Cyan on Neo-reactionaries, why are you neo-reactionary? · 2014-11-25T01:24:06.818Z · LW · GW

I'm an SSC fan and highly sympathetic to SJ goals and ideals. One of the core LW meetup members in my city can't stand to read SSC on account of what he perceives to be constant bashing of SJ. (I've already checked and verified that his perception of the proportion of SJ bashing in SSC posts is a massive overestimate, probably caused by selection bias.) As a specific example of verbiage that he considers typical of SSC, he cited:

And the people who talk about “Nice Guys” – and the people who enable them, praise them, and link to them – are blurring the already rather thin line between “feminism” and “literally Voldemort”.

When I read that line, I didn't take it literally -- in spite of the use of the word "literally". I just kind of skipped over it. But after it was pointed out to me that I ought to take it literally, well... "frothing" is a pretty good description.

I remain an SSC fan, but I'm less likely to just blank out the meaning of these kinds of things now.

Comment by Cyan on Open Thread: What are your important insights or aha! moments? · 2014-11-10T18:55:34.463Z · LW · GW

Embarrassingly, I didn't have the "who feeds Paris" realization until last year -- well after I thought I had achieved a correct understanding of and appreciation for basic microeconomic thought.

Comment by Cyan on 2014 Less Wrong Census/Survey · 2014-11-02T02:36:43.075Z · LW · GW

Nice choice of username. :-)

Comment by Cyan on A discussion of heroic responsibility · 2014-10-30T12:36:35.959Z · LW · GW

Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly.

You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.

Comment by Cyan on A discussion of heroic responsibility · 2014-10-30T01:57:25.245Z · LW · GW

FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.

Comment by Cyan on Stupid Questions (10/27/2014) · 2014-10-29T06:25:12.371Z · LW · GW

Hikikomori no more? If so (as seems likely what with the girlfriend and all), it gladdens me to hear it.

Comment by Cyan on Anthropic signature: strange anti-correlations · 2014-10-22T03:50:15.549Z · LW · GW

In the biz we call this selection bias. The most fun example of this is the tale of Abraham Wald and the Surviving Bombers.
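
For anyone who wants to see the effect in numbers rather than anecdote, a small simulation (the numbers here are invented for illustration) shows how two independent hazards become anti-correlated once you condition on the "surviving" sample:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=100_000)      # hazard A
b = rng.normal(size=100_000)      # hazard B, independent of A
survived = (a + b) < 1.0          # we only get to observe the non-catastrophic worlds

print(np.corrcoef(a, b)[0, 1])                       # ~0 in the full population
print(np.corrcoef(a[survived], b[survived])[0, 1])   # clearly negative among the "survivors"
```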

Comment by Cyan on Applications of logical uncertainty · 2014-10-19T01:39:26.336Z · LW · GW

I was working in protein structure prediction.

I confess to being a bit envious of this. My academic path after undergrad biochemistry took me elsewhere, alas.

Comment by Cyan on The Level Above Mine · 2014-10-06T04:33:02.915Z · LW · GW

Try it -- the first three chapters are available online here. The first one is discursive and easy; the math of the second chapter is among the most difficult in the book and can be safely skimmed; if you can follow the third chapter (which is the first one to present extensive probability calculations per se) and you understand probability densities for continuous random variables, then you'll be able to understand the rest of the book without formal training.

Comment by Cyan on The Future of Humanity Institute could make use of your money · 2014-10-03T05:53:31.918Z · LW · GW

The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.

Kinda... more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses but who never finds out about FAI research at all.

Comment by Cyan on Rationality Quotes September 2014 · 2014-10-02T15:52:48.406Z · LW · GW

Lumifer wrote, "Pretty much everyone does that almost all the time." I just figured that given what we know of heuristics and biases, there exists a charitable interpretation of the assertion that makes it true. Since the meat of the matter was about deliberate subversion of a clear-eyed assessment of the evidence, I didn't want to get into the weeds of exactly what Lumifer meant.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-02T13:51:36.633Z · LW · GW

But we do run biological computations (assuming that the exercise of human intelligence reduces to computation) to make em technology possible.

Since we're just bouncing short comments off each other at this point, I'm going to wrap up now with a summary of my current position as clarified through this discussion. The original comment posed a puzzle:

Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. ...If this is an unusual situation however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons.

The commonality is that both routes attack a critical aspect of the manifestation of intelligence. One goes straight for an understanding of the abstract computation that implements domain-general intelligence; the other goes at the "interpreter", physics, that realizes that abstract computation.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T23:18:27.356Z · LW · GW

Making intelligence-implementing computations substrate-independent in practice (rather than just in principle) already expands our capabilities -- being able to run those computations in places pink goo can't go and at speeds pink goo can't manage is already a huge leap.

Comment by Cyan on Rationality Quotes September 2014 · 2014-10-01T20:00:14.337Z · LW · GW

I'm just struck by how the issue of guilt here turns on mental processes inside someone's mind and not at all on what actually happened in physical reality.

Mental processes inside someone's mind actually happen in physical reality.

Just kidding; I know that's not what you mean. My actual reply is that it seems manifestly obvious that a person in some set of circumstances that demand action can make decisions that careful and deliberate consideration would judge to be the best, or close to the best, possible in prior expectation under those circumstances, and yet the final outcome could be terrible. Conversely, that person might make decisions that the same careful and deliberate consideration would judge to be terrible and foolish in prior expectation, and yet through uncontrollable happenstance the final outcome could be tolerable.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T19:46:21.531Z · LW · GW

Because the solution has an immediate impact on the exercise of intelligence, I guess? I'm a little unclear on what other problems you have in mind.

Comment by Cyan on Rationality Quotes September 2014 · 2014-10-01T19:41:26.972Z · LW · GW

That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.

I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T19:26:42.976Z · LW · GW

I wasn't talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It's not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.

Comment by Cyan on Rationality Quotes September 2014 · 2014-10-01T19:08:17.043Z · LW · GW

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm engaged in a conscious effort to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.

Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.

That's a pretty good example of the Fallacy of Gray right there.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T17:29:57.841Z · LW · GW

Hmm... let me think...

The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura's first projection to build in the static part of the computation -- physics -- thereby making the resulting program substrate-independent. The entire process of creating a viable emulation strategy happens when we humans run a biological computation that (i) tells us what is necessary to create a substrate-independent brain-state spec and (ii) solves a lot of practical physics simulation problems, so that to generate an em, the brain-state spec is all we need. This is somewhat analogous to Futamura's second projection: we take the ordered pair (biological computation, physics), run a particular biological computation on it, and get a brain-state-to-em compiler.

So intelligence is acting on itself indirectly through the fact that an "interpreter", physics, is how reality manifests intelligence. We aim to specialize physics out of the process of running the biological computations that implement intelligence, and by necessity, we're using a biological computation that implements intelligence to accomplish that goal.

Comment by Cyan on Newcomblike problems are the norm · 2014-10-01T12:21:02.330Z · LW · GW

It won't have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.

Comment by Cyan on Superintelligence Reading Group 3: AI and Uploads · 2014-10-01T03:52:56.720Z · LW · GW

My intuition -- and it's a Good one -- is that the discontinuity is produced by intelligence acting to increase itself. It's built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)

Okay, here's a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act as a compiler, by running it on an interpreter and a program; (ii) build a compiler, by running it on itself and an interpreter; (iii) build a generic interpreter-to-compiler converter, by running it on itself and itself. So one piece of technology "telescopes" by acting on itself. These are the Three Projections of Doctor Futamura.
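
A hand-rolled sketch of the first projection (illustrative only; a real partial evaluator does the specialization automatically, which is what makes projections (ii) and (iii) possible):

```python
def interpret(program, x):
    """A toy interpreter: `program` is a list of (op, constant) instructions."""
    for op, c in program:
        if op == "add":
            x = x + c
        elif op == "mul":
            x = x * c
    return x

def specialize(program):
    """Hand-rolled partial evaluation: fix `program`, leave the input `x` free.
    The returned closure plays the role of the "compiled" program; a genuine
    partial evaluator would unfold the interpreter's loop at this point."""
    def compiled(x):
        return interpret(program, x)
    return compiled

double_then_inc = specialize([("mul", 2), ("add", 1)])
print(double_then_inc(10))   # 21
```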

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-30T23:33:34.203Z · LW · GW

Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-30T23:04:17.323Z · LW · GW

As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but it seems like the deliberate practice of introspecting on how one's goals decompose into subgoals and on how they might be traded off against one another to achieve a more satisfactory state of things is an idea that is novel, distinct, and conceptually intricate enough to deserve its own label.

Comment by Cyan on Rationality Quotes September 2014 · 2014-09-30T21:36:32.354Z · LW · GW

Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.

There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)

Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.

Comment by Cyan on Rationality Quotes September 2014 · 2014-09-30T21:09:43.503Z · LW · GW

He had doubts, he extinguished them, and that's what makes him guilty.

This is not the whole story. In the quote

He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.

you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.

In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.

And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-29T13:12:42.164Z · LW · GW

tl;dr: No, the subject of the site is wider than that.

Long version: IIRC, EY originally conceived of rationality as comprising two relatively distinct domains: epistemic rationality, the art and science of ensuring the map reflects the territory, and instrumental rationality, the art and science of making decisions and taking actions that constrain the future state of the universe according to one's goals. Around the time of the fork of CFAR off of SIAI-that-was, EY had expanded his conception of rationality to include a third domain: human rationality, the art and science of coping with ape-brain.

In my view, these three domains have core subject matter and interfacial subject matter: the core of epistemic rationality is Bayesian epistemology; the core of instrumental rationality is expected utility optimization; the core of human rationality is Thinking, Fast and Slow and construal level theory. At the interface of epistemic and instrumental rationality sit topics like explore/exploit trade-offs and value-of-information calculations; at the interface of epistemic rationality and human rationality sit topics like belief vs. alief, heuristics and biases, and practical techniques for updating on and responding to new information in ways large and small; at the interface of instrumental rationality and human rationality sit topics like goal factoring/funging and habit formation; and right at the intersection of all three, I would locate techniques like implementing tight feedback loops.

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-29T05:36:31.841Z · LW · GW

clearly advertising propaganda

It's not clear to me -- I'm not even sure what you think it's advertising!

( ETA: I wrote a bunch of irrelevant stuff, but then I scrolled up and saw (again, but it somehow slipped my mind even though I friggin' quoted it in the grandparent, I'm going senile at the tender age of 36) that you specifically think it's advertising for CFAR, so I've deleted the irrelevant stuff. )

Advertising for CFAR seems like a stretch, because -- although very nice things are said about Anna Salamon -- the actual product CFAR sells isn't mentioned at all.

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-29T00:12:42.037Z · LW · GW

My conclusion: there might be an interesting and useful post to be written about how epistemic rationality and techniques for coping with ape-brain intersect, and ShannonFriedman might be capable of writing it. Not there yet, though.

Comment by Cyan on The Puzzle of Faith and Belief · 2014-09-29T00:03:33.375Z · LW · GW

...a long advertisement for CFAR...

...containing an immediately useful (or at least, immediately practicable) suggestion, as, er, advertised.

Comment by Cyan on The metaphor/myth of general intelligence · 2014-08-20T14:33:30.277Z · LW · GW

No, we totally do... in principle.

Comment by Cyan on A "Holy Grail" Humor Theory in One Page. · 2014-08-19T20:44:41.504Z · LW · GW

Awesome, thanks!

Comment by Cyan on The metaphor/myth of general intelligence · 2014-08-19T20:35:33.146Z · LW · GW

Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)

Comment by Cyan on A "Holy Grail" Humor Theory in One Page. · 2014-08-18T19:16:51.006Z · LW · GW

I invite you to spell out the prediction that you drew about the evolution of human intelligence from your theory of humor and how the recently published neurology research verified it.

Comment by Cyan on The metaphor/myth of general intelligence · 2014-08-18T18:43:58.174Z · LW · GW

What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...

In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
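
For concreteness, here's a bare-bones sketch of the deterministic weighted majority algorithm (the experts and data below are invented for illustration): each round, predict with the weighted vote of the experts, then multiplicatively shrink the weight of every expert that was wrong.

```python
def weighted_majority(expert_predictions, outcomes, beta=0.5):
    """expert_predictions[t][i] is expert i's 0/1 prediction at round t;
    outcomes[t] is the true 0/1 label at round t.
    Returns the algorithm's predictions and the final expert weights."""
    weights = [1.0] * len(expert_predictions[0])
    predictions = []
    for preds, y in zip(expert_predictions, outcomes):
        vote_for_1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_for_0 = sum(w for w, p in zip(weights, preds) if p == 0)
        predictions.append(1 if vote_for_1 >= vote_for_0 else 0)
        # Penalize every expert that got this round wrong.
        weights = [w * (beta if p != y else 1.0) for w, p in zip(weights, preds)]
    return predictions, weights

# Three "experts": always predict 1, always predict 0, and one right in 4 rounds out of 5.
experts = [[1, 0, 1], [1, 0, 0], [1, 0, 0], [1, 0, 1], [1, 0, 0]]
truth = [1, 0, 1, 1, 0]
print(weighted_majority(experts, truth))
```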