Recent MIRI workshop results?

post by johnswentworth · 2013-07-16T01:25:02.704Z · LW · GW · Legacy · 32 comments

So I hear MIRI had another math workshop this past week. Given the recent results, I'm on the edge of my seat to hear how it went. Has anything been written up? Would anyone in the know like to comment on how it went?

32 comments

Comments sorted by top scores.

comment by lukeprog · 2013-07-16T03:56:16.644Z · LW(p) · GW(p)

Here's an anecdote from the workshop I shared on Facebook:

Me, to the MIRI research workshop participants: "So, you guys ready to go out to dinner?"

Researcher: "What would we talk about at a restaurant, where there are no whiteboards? Our lives, or something?"

We got delivery instead.

Replies from: Will_Sawin, johnswentworth
comment by Will_Sawin · 2013-07-16T07:02:06.397Z · LW(p) · GW(p)

This sample may be unrepresentative. At least one researcher would have been perfectly happy talking about the researchers' lives.

comment by johnswentworth · 2013-07-16T16:30:48.301Z · LW(p) · GW(p)

Nice. One of my explicit plans for after redesigning the world has been to start a restaurant with whiteboard tables.

comment by Qiaochu_Yuan · 2013-07-16T02:59:53.337Z · LW(p) · GW(p)

It went well. Some cool stuff happened. Eliezer wants us to be more cautious than we've been about making workshop work public, so I don't want to say more than that for now.

Replies from: TrE, lukeprog
comment by TrE · 2013-07-16T19:33:30.868Z · LW(p) · GW(p)

What's the lesson we learn from this thread? I'd say that if you have something that you're not sure you want public, you shouldn't even talk about that fact.

Replies from: gothgirl420666
comment by lukeprog · 2013-07-16T03:55:23.431Z · LW(p) · GW(p)

Not sure "more cautious" is true. More like: "one result from the July workshop is something we should debate publishing, just like MIRI debated publishing the earlier probabilistic logic result."

Replies from: drethelin, JoshuaZ
comment by drethelin · 2013-07-16T04:06:44.227Z · LW(p) · GW(p)

debate publishing because you're unsure about the results being correct or unsure about the safety of them being widespread?

comment by JoshuaZ · 2013-07-16T04:04:41.760Z · LW(p) · GW(p)

More like: "one result from the July workshop is something we should debate publishing, just like MIRI debated publishing the earlier probabilistic logic result."

Both Qiaochu's comment and yours leave me with a mix of confusion and concern. Was the debate prompted by the thought that these ideas are too close to being actually useful for building an AGI? If so, this makes me worried that MIRI is overestimating the importance of its current results.

Replies from: CarlShulman, Qiaochu_Yuan, Eliezer_Yudkowsky, GuySrinivasan, None
comment by CarlShulman · 2013-07-16T14:03:45.776Z · LW(p) · GW(p)

If so, this makes me worried that MIRI is overestimating the importance of its current results.

Plausible, but the sign of an impact is a different question from its magnitude. Is a modest result a small net positive or a small net negative for the world, taking into account the long term as well as the short? If one rounds down to zero any impact that is not individually world-shaking, one should do the same for both positives and negatives, and then one would never do anything small, even though the aggregate impact of such acts is large.

comment by Qiaochu_Yuan · 2013-07-16T05:11:46.026Z · LW(p) · GW(p)

Was the debate prompted by the thought that these ideas are too close to being actually useful for building an AGI? If so, this makes me worried that MIRI is overestimating the importance of its current results.

Not quite. I shared a version of this concern initially, but making something public is an irreversible decision. Given a reasonable amount of uncertainty about whether publishing a given thing is a good idea, there's very little downside to holding off until there's been some kind of further discussion. The risk of overestimating the importance of our current results is pretty minor. (The risk of appearing to overestimate the importance of our current results is maybe worse, so possibly I shouldn't have brought this up at all.)

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-16T21:34:43.073Z · LW(p) · GW(p)

That seems quite reasonable. Having a specific period of time to wait before announcing anything might make sense in that sort of context. I agree that some of the issues here do seem to be a matter of potential bad signaling. But this is a context where MIRI really should be careful about the signaling issues that are easier to deal with.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-16T18:09:13.834Z · LW(p) · GW(p)

Was the debate prompted by the thought that these ideas are too close to being actually useful for building an AGI?

No.

comment by SarahNibs (GuySrinivasan) · 2013-07-16T04:19:32.361Z · LW(p) · GW(p)

Is the probabilistic logic result so obviously something they should have published that you would recommend they not take time to recover and then consider before announcing similar results? I'm fine with them drawing the line at, e.g., "if something raises a red flag to anyone, we take a step back and consider first, even if it might be kind of embarrassing to have done so".

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-16T04:38:47.654Z · LW(p) · GW(p)

Sure, if their reasoning is something like that, it might make sense, especially since, if the relevance of the results really does change in a few years, the discrepancy will be less immediately noticeable. Hopefully Luke can clarify what sort of thought process is going on here.

comment by [deleted] · 2013-07-16T04:18:29.734Z · LW(p) · GW(p)

Being careful about discussing solved-but-not-officially-published results is completely standard in academia. Judging MIRI negatively for it is absurd.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-16T04:37:27.195Z · LW(p) · GW(p)

Being careful about discussing solved-but-not-officially-published results is completely standard in academia.

The important context here is the motivation. Generally, when one doesn't publish a result in academia, it's because of some combination of (a) the result isn't notable enough, (b) the result is one that one hopes to improve soon, or (c) one has other, higher-priority things to publish. There's a definite concern that if MIRI's motivation here stems from other considerations (in particular, concerns about the relevance to FAI and AGI in general), this would indicate an overestimate of the importance of the work, which doesn't speak well for their general calibration. This is related to the issue that if MIRI is going to succeed at getting mainstream academics interested in its work, willing to join in and answer problems MIRI raises, or willing to take MIRI's concerns about AI seriously, then it is going to help a lot to have well-done, substantial published results. So yes, if MIRI cares about its goals, being overly cautious for bad reasons is a potential negative.

GuySrinivasan's reply is a more interesting and, frankly, far stronger argument, if one did have some degree of reassurance. All of this, and what the actual thinking was, is something Luke is more than capable of clarifying.

Replies from: lukeprog
comment by lukeprog · 2013-07-16T06:00:53.162Z · LW(p) · GW(p)

Luke is more than capable of clarifying.

Well, not quite. I was a few doors down from the workshop, so I don't actually know anything (yet) about the result that is being considered carefully before being published. In our past safety meetings, though, the concern raised has not been something like "we think this thing here is maybe 3% of the AGI puzzle." The strategic issues at play are more subtle than that.

Re: calibration. Remember that, conditioning on no general halt to science, my median estimate for AGI is about 2065, and MIRI's recent work hasn't budged my estimate at all. In fact, no published AI progress has changed my timelines, since things are moving along about as I'd expect. The only thing that has budged my timelines (since I've had timelines) was learning more about the full extent of the 2008 financial crisis and the near-total failure of elites to address the sources of the problem. That pushed my estimate back a bit, since I now expect less economic growth than I originally did.

Replies from: None, None
comment by [deleted] · 2013-07-25T05:24:31.386Z · LW(p) · GW(p)

I find this startling and disturbing, as my own prediction would be closer to 2025. To be sure, I have no expectation that our predictions would agree exactly, but 10 years vs. 50 is a huge discrepancy. Even Ray Kurzweil's curve extrapolations show direct simulation of the human brain being possible in the 2040's, and Eliezer has argued that if anything these are too conservative.

Given that I am currently considering donating during the summer fundraising drive, perhaps you can explain your prediction of 2065. It may seem like making a mountain out of a molehill, but the issue is priorities. If the executive director of MIRI believes that AGI is five decades out, I'm not confident that MIRI will focus on the things necessary to develop FAI theory before it is needed in the next decade, including pragmatic compromises - studying FAI as it applies to existing AGI architectures, not just abstract utility theory or MIRI's preferred approach.

Replies from: lukeprog
comment by lukeprog · 2013-07-25T05:49:59.024Z · LW(p) · GW(p)

I gave some preliminaries on AI timelines here. Re: WBE in particular, I expect "de novo" AGI sooner. Note that Eliezer's AI timelines are sooner than mine, with a median around 2035 IIRC.

Given your model of the world, what is the plausibly superior alternative to funding MIRI? The sooner your AI timelines, the more critical it is to fund whoever is doing actual FAI work, and right now that means MIRI. Other causes only make sense if we've got more time than you're predicting.

That said, you could still make gains (by your model) by trying to persuade me and others at MIRI that we should put a greater portion of our probability mass on AI-soon, since such things do have policy implications that we're acting on in some ways. Like, maybe you think you've identified a particular line of research that will lead to AGI by 2025 with medium-high confidence?

Replies from: None, None
comment by [deleted] · 2013-07-25T18:21:23.657Z · LW(p) · GW(p)

Let me address the risks identified in your article:

An end to Moore's law. The only part I agree with is that parallel software is fundamentally more difficult, although I think we would disagree about how much more difficult, given the right tools. We have plenty of experience in scaling parallel software stacks in data centers, giving rise to solutions like NoSQL, message queues, and software transactional memory. These tools are still crude, but if you look at what's being done in academia with automated parallelization and simple concurrency frameworks, the prospects look bright indeed. The concurrency problem will be solved (and indeed, is pretty much solved already by organizations like Google, although their tools are not public). There is no reason to suppose that Moore's law, formulated as computations per dollar, will not continue until physical limits are reached, which is still a very long way off indeed, and there are tools on the horizon that will slay the concurrency dragon.

(To be clear, concurrency adds a suppressing term to Moore's law expressed as observed performance per dollar, due to performance no longer scaling linearly without limit. But it's still recognizably an exponential.)
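
To make that parenthetical concrete, here's a toy calculation. Every number in it is an illustrative assumption of mine (a two-year doubling time, core counts tracking the same curve, a 5% serial fraction), and the efficiency model is a Gustafson-style one for workloads that grow with the hardware: the suppressing term lowers the level of the curve, but the doubling time is essentially untouched.

```python
# Toy model of the claim above. All constants are made-up illustrative
# assumptions, not measurements from any paper.

def raw_comp_per_dollar(year, base_year=2013, doubling_years=2.0):
    """Idealized Moore's law: computations per dollar, doubling every `doubling_years`."""
    return 2 ** ((year - base_year) / doubling_years)

def scaled_parallel_efficiency(cores, serial_fraction=0.05):
    """Gustafson-style per-core efficiency for workloads that grow with the
    hardware: throughput ~ serial_fraction + (1 - serial_fraction) * cores,
    so efficiency approaches (1 - serial_fraction) instead of collapsing."""
    return (serial_fraction + (1 - serial_fraction) * cores) / cores

prev = None
for year in range(2013, 2034, 2):
    cores = 2 ** ((year - 2013) / 2)   # assume core count per dollar tracks Moore's law
    observed = raw_comp_per_dollar(year) * scaled_parallel_efficiency(cores)
    note = "" if prev is None else f"  (x{observed / prev:.2f} vs. two years earlier)"
    print(f"{year}: observed perf per dollar = {observed:8.2f}{note}")
    prev = observed
# The growth factor settles at ~2.0 per two years: the efficiency term
# suppresses the level of the curve, not its exponential growth rate.
```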

Depletion of low-hanging fruit. What fruit is low-hanging depends on what capabilities you have. To stretch the analogy: as hardware capacity increases, you grow taller. In the 80's, chess playing was a very difficult research problem. Today it would definitely be low-hanging fruit if it weren't already claimed by Deep Blue. It was new hardware and software capabilities which shifted the playing field, and there is no reason to expect this won't continue to happen in the future so long as Moore's law holds.

Societal collapse. Maybe, but this wouldn't affect the “basement hacker” hard-takeoff scenario, and may even facilitate it, in the same way that the economic collapse of 2008 gave a boost to Bitcoin, which was released shortly thereafter. I think you'd have to posit total global societal collapse - bringing an end to Moore's law - before you start affecting the probability of the basement-hacker scenario. You could also come up with other scenarios whose timelines would be unaffected, such as AI for national defense / surveillance, which may receive more funding in the case of societal collapse.

Disinclination. I'd love to see analogous examples of this from other industries, where there is a revolutionary change around the corner that everyone knows about but no one pursues because what they have is “good enough.” I suspect you won't find it; it goes against the grain of entrepreneurial capitalism and human nature.

In my own judgement, none of these negative scenarios carries much probability mass. My own forecasting is limited more by unknown unknowns, which practical AGI research will uncover. Your positive factors (cognitive neuroscience breakthroughs, human enhancement, funding growth, etc.) are reasonable, yet overall your judgement is more pessimistic than mine.

I think that your own analysis is somewhat contradictory. You can either trust expert opinion or not, and your 2065 number is the midpoint of acceptable expert opinion. In the case of AGI, I am very skeptical of expert opinion, as there are very few experts in computer science or AI who are familiar with the very specific challenges of AGI, yet it is a topic everyone has an opinion on if you ask them. I recognize that this excludes my own opinion from consideration as well, but nevertheless here it is: we are more likely than not to experience a hard-takeoff scenario in the 2020's.

Why am I confident of this? Because we understand the problem. Read the proceedings of the sometimes-annual AGI conference. There are a number of AGI architectures which no longer make ridiculous predictions of “emergent” features, which were once a clear sign of things we didn't understand. This was not the case just 10 years ago. You can open any developmental psychology textbook and match up descriptions of human thinking with operating modes of, say, the OpenCog Prime architecture. We simply need to do the work to implement that architecture - which only exists on paper at this point - and solve the unknown unknowns which pop up along the way.

But why the 2020's, specifically? Well, I think we could implement one of these AGI architectures in about 10 years with current resources, and by that time computational resources will be developed enough to run it on commercial datacenter or high-end consumer hardware. This is simply my informed judgement as a systems engineer, together with my understanding of the complexity of the as-yet unimplemented portions of the OpenCog Prime architecture (you could substitute your own favorite architecture, however). Just look at the OpenCog Prime documents, look at the current state of OpenCog implementation, look at the current level of funding and available developers, and do normal, everyday project planning analysis. I predict you will arrive at a similar number.

Add 50% to that timeline, a common software engineering rule of thumb, and you get 15 years. However, I predict 2025 (10 years) because that judgement is based on current efforts, and we are way under capacity in terms of AGI research: researcher time and money are the limiting factors. By comparison, with infinite budget we could do it in a couple of years. I think there's a good chance that an “AI Sputnik” moment in the next couple of years could change the funding outlook, accelerating that schedule significantly, and I allocate a fair chunk of probability mass to that happening.

EDIT: So my question to you, Luke and MIRI, is: if you take at face value my prediction of a hard takeoff in the 2020's, how does this change MIRI's priorities? Would you be doing things differently if you expected a hard takeoff in ~2025? How would it be different?

Replies from: lukeprog
comment by lukeprog · 2013-07-25T20:09:58.850Z · LW(p) · GW(p)

Thanks for your detailed thoughts!

The only part I agree with is that parallel software is fundamentally more difficult

What about the linked paper on dark silicon?

we are more likely than not to experience a hard-takeoff scenario in the 2020's

Wow. I really need to figure out a practical way to make apocalypse bets. Maybe I can get longbets.org to add that feature.

Just look at the OpenCog Prime documents, look at the current state of OpenCog implementation, look at the current level of funding and available developers, and do normal, everyday project planning analysis.

OpenCog looks pretty unpromising to me. What's the #1 document I should read that has the best chance of changing my mind, even if it's only a tiny chance?

Would you be doing things differently if you expected a hard takeoff in ~2025? How would it be different?

For example if we had believed this 1.5 years ago, I suppose we would have (1) scuttled longer-term investments like CFAR and "strategic" research (e.g. AI forecasting, intelligence explosion microeconomics), (2) tried our best to build up a persuasive case that AI was very near, so we could present it to all the wealthy individuals we know, (3) used that case and whatever money we could quickly raise to hire all the best mathematicians willing to do FAI work, and (4) done almost nothing else but in-house FAI research. (Maybe Eliezer and others have different ideas about what we'd do if we thought AI was coming in ~2025; I don't know.)

Replies from: None
comment by [deleted] · 2013-07-25T22:06:51.748Z · LW(p) · GW(p)

What about the linked paper on dark silicon?

As far as I can tell, it completely ignores 3D chip design, or multi-chip solutions in the interim. If there are power limits on the number of transistors in a single chip, then expect to have more and more, smaller and smaller chips, or a completely different chip design which invalidates the assumptions underlying the dark silicon paper (3D chips, for example).

Generalizing, this is a very common category of paper. It identifies some hard wall that prevents the continuation of Moore's law. The abstract and conclusion contain much doom and gloom. In practice, the paper merely spurs creative thinking, resulting in a modification of the manufacturing or design process which invalidates the assumptions that led to the scaling limit. (In this case, the assumption that to get higher application performance you need more transistors or smaller feature sizes, or that chips consist of a 2D grid of silicon transistors.)

OpenCog looks pretty unpromising to me. What's the #1 document I should read that has the best chance of changing my mind, even if it's only a tiny chance?

I read a preprint of Ben Goertzel's upcoming “Building Better Minds” book, most of which is also available on the OpenCog wiki. When I said the “OpenCog Prime documents,” this is what I was referring to. But it's not structured as an argument for the approach, and as a 1,000-page document it's hard to recommend as an introduction if you're uncertain about its value. My own confidence in OpenCog Prime comes from doing my own analysis of its capabilities and weaknesses as I read this document (having been unconvinced beforehand), and from my own back-of-the-envelope calculations of where it could go with the proper hardware and software optimizations. There is a high-level overview of CogPrime on the OpenCog wiki. It's a little outdated and incomplete, but the overall structure is still the same.

Also, you might be interested in Ben's write-up of the rise and fall of WebMind (which became Novamente, which became OpenCog), as it provides good context for why OpenCog is structured the way that it is: what problems they tried to solve, what difficulties they encountered, why they ended up where they are in the design space of AGI minds, and why they are confident in that design. It's an interesting people-story anyhow: “Waking up from the economy of dreams”

OpenCog gets a lot of flak for being an “everything and the kitchen sink” approach. This is unfair and undue criticism, I believe. Rather, I would say that the CogPrime architecture recognizes that human cognition is complex, and that while it is reducible, an accurate model would nevertheless still contain a lot of complexity. For example, there are many different kinds of memory (perceptual, declarative, episodic, procedural, etc.), and therefore it makes sense to have different mechanisms for handling each of these memory systems. Maybe in the future some of them can be theoretically unified, but that doesn't mean it wouldn't still be beneficial to implement them separately -- the model is not the territory.

However, what CogPrime does do, which doesn't get emphasized enough, is provide a base representation format (hypergraphs) capable of encoding the entire architecture in the same homoiconic medium. A good description of why this is important is the following blog post: “Why hypergraphs?”
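
To give a flavor of what that buys you, here is a bare-bones sketch I put together for this comment. It is not the real OpenCog AtomSpace API, just an illustration of the two properties that matter: links can connect any number of atoms, and links are themselves atoms, so statements about statements (and, in principle, the system's own rules) live in the same store.

```python
# Minimal hypergraph-of-atoms sketch (illustrative only; not OpenCog's actual API).
# Nodes and links are both "atoms", and a link's targets may themselves be links,
# so knowledge about knowledge is representable in the same homoiconic store.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Atom:
    kind: str                         # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                    # nodes carry a name; links usually don't
    targets: Tuple["Atom", ...] = ()  # links carry targets, of any arity

class AtomSpace:
    """A single store holding every atom, whether node or link."""
    def __init__(self):
        self.atoms = set()

    def add(self, kind, name="", targets=()):
        atom = Atom(kind, name, tuple(targets))
        self.atoms.add(atom)
        return atom

    def incoming(self, atom):
        """All links pointing at `atom` -- works the same for links (homoiconicity)."""
        return [a for a in self.atoms if atom in a.targets]

space = AtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
inheritance = space.add("InheritanceLink", targets=[cat, animal])       # a link over nodes
meta = space.add("EvaluationLink", "asserted", targets=[inheritance])   # a link over a link
print(space.incoming(inheritance))  # the meta-level link shows up like any other atom
```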

For example if we had believed this 1.5 years ago, I suppose we would have (1) scuttled longer-term investments like CFAR and "strategic" research (e.g. AI forecasting, intelligence explosion microeconomics),

I hope y'all wouldn't have scrapped the most excellent HPMoR ;)

(2) tried our best to build up a persuasive case that AI was very near, so we could present it to all the wealthy individuals we know, (3) used that case and whatever money we could quickly raise to hire all the best mathematicians willing to do FAI work, and (4) done almost nothing else but in-house FAI research. (Maybe Eliezer and others have different ideas about what we'd do if we thought AI was coming in ~2025; I don't know.)

I hope that you (again, addressing both Luke and MIRI) take a look at existing, active AGI projects and evaluate, for each of them, (1) whether it could possibly lead to a hard-takeoff scenario, (2) what the timeframe for a hard takeoff would be^1, and (3) what the FAI risk factors and possible mitigation strategies are. And of course, publish the results of this study.

^1: Both your own analysis and the implementor's estimate - but be sure to ask them when the necessary features X, Y, and Z will be implemented, not about the hard takeoff specifically, so as not to bias the data.

Replies from: lukeprog
comment by lukeprog · 2013-07-26T00:18:25.470Z · LW(p) · GW(p)

Thanks for the reading links.

comment by [deleted] · 2013-07-25T07:04:47.342Z · LW(p) · GW(p)

Luke, thank you for your honest response, and for pointing me towards your article. I will read it and formulate a response re: timelines and viable lines of research that will lead to AGI by 2025 with medium-high confidence.

I disagree that MIRI is the only one working on Friendly AI, although certainly you may be the only ones working on your vision of FAI. Ben Goertzel has taken a very pragmatic approach to both AGI and Friendliness, so his OpenCog Foundation would be a clear alternative for my dollars. I know that he has taken potshots at MIRI for promoting what he calls the Scary Idea, but he nevertheless does appear to be genuinely concerned with Friendliness; he is just taking a different approach. As an AI developer myself, I also have various other self-directed options that I could invest my time and money in.

To be clear, I would rather have a fully fleshed-out and proven Friendly AGI design before anyone writes a single piece of self-modifying code. I would also like a pony and a perpetual motion machine. I have absolutely no faith in anyone who would claim to be able to slow down or stop AGI research - the component pieces of AGI are simply too profitable and too far-reaching in their consequences^1. Because of the short timescales, I believe it does the most good to study Friendliness in the context of actual AGI designs, and to devise safety procedures for running AGI projects that bias us towards Friendliness as much as possible, even if Friendliness cannot be absolutely guaranteed at this point. Work on provably safe AGI should proceed in parallel in case it pays off, but abstract work on a provably safe design has zero utility until it is finished, and I am not confident it would be finished before a hard takeoff occurs.

^1 Not just AGI itself, but various pieces of machine learning, hierarchical planning, goal formation, etc. - it'd be like trying to outlaw fertilizer because it could be used to make a fertilizer bomb. How then do you grow your food?

comment by [deleted] · 2013-07-16T13:21:27.129Z · LW(p) · GW(p)

Unfortunately, there's now a group of people who read this thread between 0437 and 0600 who were led to think that MIRI is miscalibrated, doesn't care about its goals, but still somehow takes its work too seriously.

That's what happens when one speculates without evidence.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-16T13:47:44.212Z · LW(p) · GW(p)

Unfortunately, there's now a group of people who read this thread between 0437 and 0600 who were led to think that MIRI is miscalibrated, doesn't care about its goals, but still somehow takes its work too seriously.

That's what happens when one speculates without evidence.

There wasn't speculation without evidence. A specific set of comments was made. It raised concerns. Those concerns were addressed (in a fairly satisfactory fashion). What you are saying seems very close to saying that we shouldn't raise concerns because if those concerns are responded to then there's a chance someone will happen to only see the initial issue and not the response. It should be clear why that's not a great approach if one wants open discussion and doesn't want any sort of evaporative cooling of beliefs or similar problems.

Replies from: None
comment by [deleted] · 2013-07-16T14:24:36.533Z · LW(p) · GW(p)

There wasn't speculation without evidence.

Of course there was! Why would you have asked Luke for clarification otherwise? The next three sentences don't support this claim, either -- being "a specific set of comments" does not contradict those comments being vapid speculation.

What you are saying seems very close to saying that we shouldn't raise concerns because if those concerns are responded to then there's a chance someone will happen to only see the initial issue and not the response.

Why should there be a response? Does Luke have a moral responsibility to traverse the internet, answering every random NPC who has "concerns"? And in a timely fashion?

There was a whole world of sensible reasons why MIRI would wait and discuss their work more before publishing it, but you went, needle-like, for an uncharitable one and demanded a response in order to be proven wrong. That's not acceptable to me.

It should be clear why that's not a great approach if one wants open discussion and doesn't want any sort of evaporative cooling of beliefs or similar problems.

You're simply wrong. LW does want some evaporative cooling -- of trolls and the like.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-07-16T14:36:37.084Z · LW(p) · GW(p)

There wasn't speculation without evidence.

Of course there was! Why would you have asked Luke for clarification otherwise?

There may be a language distinction here, but I'd distinguish between speculation in the sense of "this is probably happening" and "this sounds like it might be happening; could you please clarify that it isn't?" It may also help to note, based on the comment thread, that I apparently wasn't the only person with this concern. Drethelin made a very similar point. I'm curious what you think we should have done with our concerns. In general, having something discussed in the open is far more useful: you can be pretty sure, in any internet conversation, that if multiple people interpreted something as potentially implying something, the set of people who shared that interpretation but didn't post is substantially larger.

Why should there be a response? Does Luke have a moral responsibility to traverse the internet, answering every random NPC who has "concerns"? And in a timely fashion?

There's no question of moral responsibility here, or of "traversing the internet" talking to "NPCs". Luke was active in this thread, made a comment, and the question was directed at the specific details of that comment. I'm also not sure why you are bringing up timeliness: if someone (either Luke or Qiaochu or someone else who was present at the workshop) responded, say, a day or two later, what damage are you imagining? What is your imagined scenario that leaves you so concerned? And moreover, how would you prefer that concerns be addressed?

Incidentally, it may also help to keep in mind that, as a matter of politeness and as a matter of actually convincing people effectively, comparing them to NPCs is probably not a great tactic. Not everything Harry James Potter Evans-Verres does is optimal.

It should be clear why that's not a great approach if one wants open discussion and doesn't want any sort of evaporative cooling of beliefs or similar problems.

You're simply wrong. LW does want some evaporative cooling -- of trolls and the like.

This may come down to a language issue, but I don't think "evaporative cooling", as the term is usually used, refers to everyone who leaves a community.

Replies from: None
comment by [deleted] · 2013-07-16T14:51:21.288Z · LW(p) · GW(p)

Honestly, if you can't see the distinction between you and drethelin, and think my criticisms of your tone are "language issues," then I think we're more or less done. I get that you've dropped into formal diction for the karma grab, but I'm allergic to walls of text.

Replies from: komponisto
comment by komponisto · 2013-07-17T04:39:26.474Z · LW(p) · GW(p)

Replies from: None
comment by [deleted] · 2013-07-17T06:44:53.925Z · LW(p) · GW(p)

I don't particularly care about your own model of me, for various and sundry reasons.

However, I would like to note that you're gravely underestimating the community. In those moments when I become caustic and bitter (typically when people wax ridiculous; see above), I tend to be reasonably downvoted. And so I should be -- I just value speaking freely over caring how the populace will vote.

On the other hand, I doubt I am "consistently" so. Independent assessments have suggested it is an infrequent vicious cycle exacerbated by the phases of the moon (p > 0.015).

On the gripping hand, I finally have an explanation for why you never e-mailed me about MoreRight! Though that ultimately failed to matter.