Posts

Typical Sneer Fallacy 2015-09-01T03:13:53.781Z · score: 10 (17 votes)
Scott Aaronson's cautious optimism for the MWI 2012-08-19T02:35:52.503Z · score: 5 (10 votes)
Waterfall Ethics 2012-01-30T21:14:28.774Z · score: 9 (13 votes)

Comments

Comment by calef on Unusual medical event led to concluding I was most likely an AI in a simulated world · 2017-09-18T17:32:25.158Z · score: 6 (6 votes) · LW · GW

If you haven't already, you might consider speaking with a doctor. Sudden, intense changes to one's internal sense of logic are often explainable by an underlying condition (as you yourself have noted). I'd rather not play the "diagnose a person over the internet" game, nor encourage anyone else here to do so. You should especially see a doctor if you actually think you've had a stroke. It is possible to recover from many different sorts of brain trauma, and the earlier you act, the better odds you have of identifying the problem (if it exists!).

Comment by calef on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-27T02:35:07.735Z · score: 7 (1 votes) · LW · GW

What can a "level 5 framework" do, operationally, that is different than what can be done with a Bayes net?

I admit that I don't understand what you're actually trying to argue, Christian.

Comment by calef on John Nash's Ideal Money: The Motivations of Savings and Thrift · 2017-01-18T05:53:26.145Z · score: 4 (4 votes) · LW · GW

Hi Flinter (and welcome to LessWrong)

You've resorted to a certain argumentative style in some of your responses, and I wanted to point it out to you. Essentially, someone criticizes one of your posts, and your response is something like:

"Don't you understand how smart John Nash is? How could you possibly think your criticism is something that John Nash hadn't thought of already?"

The thing about ideas, notwithstanding the brilliance of those ideas or where they might have come from, is that communicating those ideas effectively is just as important as the ideas themselves. Even if Nash's Ideal Money scheme is the most important thing in the universe, if you can't communicate the idea effectively, and if you can't convincingly respond to criticism without hostility, no one will ever understand that idea but you.

A great modern example of this is Mochizuki's interuniversal Teichmuller theory, which he singlehandedly developed over the course of a decade in near complete isolation. It's an extremely technically dense new way of doing number theory that he claims resolves several outstanding conjectures (including the ABC Conjecture, among a couple of others). And it's taken over four years for some very high-profile mathematicians to start verifying that it's probably correct. This required workshops and hundreds of communications between Mochizuki and other mathematicians.

Point being: Progress is sociological as much as it is empirical. If you aren't able to effectively communicate the importance of an idea, it might be because the community at large is hostile to new ideas, even when represented in the best way possible. But if a community--a community which is, nominally, dedicated to rationally evaluating ideas--is unable to understand your representation, or see the importance of it, it might just be because you're bad at explaining it, the idea isn't all that great, or both.

Comment by calef on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-16T02:33:38.829Z · score: 1 (1 votes) · LW · GW

I've found that I only ever get something sort of like sleep paralysis when I sleep flat on my back, so +1 for sleeping orientation mattering for some reason.

Comment by calef on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-16T02:31:26.653Z · score: 2 (2 votes) · LW · GW

This is essentially what username2 was getting at, but I'll try a different direction.

It's entirely possible that "what caused the big bang" is a nonsensical question. 'Causes' and 'Effects' only exist insofar as there are things which exist to cause causes and effect effects. The "cause and effect" apparatus could be entirely contained within the universe, in the same way that it's not really sensible to talk about "before" the universe.

Alternatively, it could be that there's no "before" because the universe has always existed. Or that our universe nucleated from another universe, and that one could follow the causal chain of universes nucleating within universe backwards forever. Or that time is circular.

I suspect that the reason I'm not religious is that I'm not at all bothered by the question "Why is there a universe, rather than not a universe?" not having a meaningful answer. Or rather, it feels overwhelmingly anthropocentric to expect that the answer to that question, if there even was one, would be comprehensible to me. Worse, if the answer really was "God did it," I think I would just be disappointed.

Comment by calef on Typical Sneer Fallacy · 2015-09-04T01:12:47.331Z · score: 0 (0 votes) · LW · GW

If you aren't interested in engaging with me, then why did you respond to my thread? Especially when the content of your post seems to be "No, you're wrong, and I don't want to explain why I think so"?

Comment by calef on Typical Sneer Fallacy · 2015-09-03T23:28:28.626Z · score: 1 (1 votes) · LW · GW

What precisely is Eliezer basically correct about on the physics?

It is true that non-unitary gates allow you to break physics in interesting ways. It is absolutely not true that violating conservation of energy will lead to a non-unitary gate. Eliezer even eventually admits (or at least admits that he 'may have misunderstood') an error in the physics here (see this subthread).

This isn't really a minor physics mistake. Unitarity really has nothing at all to do with energy conservation.
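A minimal way to see the distinction (standard quantum mechanics, offered here as my own illustration rather than anything from the original thread): unitarity is a statement about probability being conserved, not energy, and the two can come apart.

    \[
    U^\dagger U = \mathbf{1}
    \quad\Longrightarrow\quad
    \langle\psi(t)\,|\,\psi(t)\rangle = \langle\psi(0)\,|\,\psi(0)\rangle .
    \]
    % Energy conservation instead follows from time-translation symmetry.
    % A driven system with a time-dependent Hamiltonian H(t) still evolves unitarily,
    \[
    U(t) = \mathcal{T}\exp\!\Big(-\tfrac{i}{\hbar}\int_0^t H(t')\,dt'\Big),
    \]
    % yet <H> is generally not conserved: unitary evolution without energy conservation.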

Comment by calef on Typical Sneer Fallacy · 2015-09-01T20:35:41.803Z · score: 1 (1 votes) · LW · GW

Haha fair enough!

Comment by calef on Typical Sneer Fallacy · 2015-09-01T20:34:26.703Z · score: 1 (1 votes) · LW · GW

I never claimed that whether he was right or not was unimportant. I just didn't focus on that aspect of the argument because it's been discussed at length elsewhere (the reddit thread, for example). And I've repeatedly offered to talk about the object level point if people were interested.

I'm not sure why someone's sense of fairness would be rankled when I directly link to essentially all of the evidence on the matter. It would be different if I was just baldly claiming "Eliezer done screwed up" without supplying any evidence.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:52:07.501Z · score: 0 (2 votes) · LW · GW

I never said that determining the sincerity of criticism would be easy. I can step through the argument with links, if you'd like!

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:39:45.401Z · score: 4 (4 votes) · LW · GW

Yes, I wrote this article because Eliezer very publicly committed the typical sneer fallacy. But I'm not trying to character-assassinate Eliezer. I'm trying to identify a poisonous sort of reasoning, and indicate that everyone does it, even people who spend years of their lives writing about how to be more rational.

I think Eliezer is pretty cool. I also don't think he's immune from criticism, nor do I think he's an inappropriate target of this sort of post.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:27:44.403Z · score: 1 (1 votes) · LW · GW

Which makes for a handy immunizing strategy against criticisms of your post, n'est-ce pas?

It's my understanding that your criticism of my post was that the anecdote would be distracting. One of the explicit purposes of my post was to examine a polarizing example of [the fallacy of not taking criticism seriously] in action--an example which you proceed to not take seriously in your very first post in this thread simply because of a quote you have of Eliezer blowing the criticism off.

The ultimate goal here is to determine how to evaluate criticism. Learning how to do that when the criticism comes from across party lines is central.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T14:37:26.450Z · score: 2 (4 votes) · LW · GW

I mean, if you'd like to talk about the object level point of "was the criticism of Eliezer actually true", we can do that. The discussion elsewhere is kind of extensive, which is why I tried to focus on the meta-level point of the Typical Sneer Fallacy.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T04:14:30.846Z · score: 2 (4 votes) · LW · GW

I suspect that how readers respond to my anecdote about Eliezer will fall along party lines, so to speak.

Which is kind of the point of the whole post. How one responds to the criticism shouldn't be a function of one's loyalty to Eliezer. Especially when su3su2u1 explicitly isn't just "making up most of" his criticism. Yes, his series of review-posts are snarky, but he does point out legitimate science errors. That he chooses to enjoy HPMOR via (c) rather than (a) shouldn't have any bearing on the true-or-false-ness of his criticism.

I've read su3su2u1's reviews. I agree with them. I also really enjoyed HPMOR. This doesn't actually require cognitive dissonance.

(I do agree, though, that snarkiness isn't really useful in trying to get people to listen to criticism, and often just backfires)

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-23T23:30:05.406Z · score: 2 (4 votes) · LW · GW

I mean, sure, but this observation (i.e., "We have tools that allow us to study the AI") is only helpful if your reasoning techniques allow you to keep the AI in the box.

Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).

I think that you think MIRI's claim is "This cannot be done safely." And I think your claim is "This obviously can be done safely" or perhaps "The onus is on MIRI to prove that this cannot be done safely."

But, again, MIRI's whole mission is to figure out the extent to which this can be done safely.

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-21T23:06:11.394Z · score: 1 (3 votes) · LW · GW

As far as I can tell, you're responding to the claim, "A group of humans can't figure out complicated ideas given enough time." But this isn't my claim at all. My claim is, "One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality." This is trivially true once the number of machines which are "smarter" than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the "smarter" machines is a matter of contention. The precise number of "smarter" machines and how much "smarter" they need be before we should be "worried" is also a matter of contention. (How "worried" we should be is a matter of contention!)

But all of these points of contention are exactly the sorts of things that people at MIRI like to think about.

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-21T20:51:29.782Z · score: 9 (9 votes) · LW · GW

"This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all."

This may be retreating to the motte, so to speak, but I don't think anyone seriously thinks that a superintelligence would be literally impossible to understand. The worry is that there will be such a huge gulf between how superintelligences reason and how we reason that it would take prohibitively long to understand them.

I think a laptop is a good example. There probably isn't any single human on earth who knows how to build a modern laptop from scratch. There are computer scientists who know how the operating system is put together--how the operating system is programmed, how memory is written and retrieved from the various buses; there are other computer scientists and electrical engineers who designed the chips themselves, who arrayed circuits efficiently to dissipate heat and optimize signal latency. Even further, there are material scientists and physicists who designed the transistors and chip fabrication processes, and so on.

So, as an individual human, I don't know what it's like to know everything about a laptop all at once in my head, at a glance. I can zoom in on an individual piece and learn about it, but I don't know all the nuances for each piece--just a sort of executive summary. The fundamental objects with which I can reason have a sort of characteristic size in mindspace--I can imagine 5, maybe 6 balls moving around with distinct trajectories (even then, I tend to group them into smaller subgroups). But I can't individually imagine a hundred (I could sit down and trace out the paths of a hundred balls individually, of course, but not all at once).

This is the sense in which a superintelligence could be "dangerously" unpredictable. If the fundamental structures it uses for reasoning greatly exceed a human's characteristic size of mindspace, it would be difficult to tease out its chain of logic. And this only gets worse the more intelligent it gets.

Now, I'll grant you that the lesswrong community likes to sweep under the rug the great competition of timescales and size-scales that is going on here. It might be prohibitively difficult, for fundamental reasons, to move from working-mind-RAM of size 5 to size 10. It may be that artificial intelligence research progresses so slowly that we never even see an intelligence explosion--just a gently sloped intelligence rise over the next few millennia. But I do think it's maybe not a mistake, but certainly naive, to just proclaim, "Of course we'll be able to understand them, we are generalized reasoners!".

Edit: I should add that this is already a problem for, ironically, computer-assisted theorem proving. If a computer produces a 10,000,000 page "proof" of a mathematical theorem (i.e., something far longer than any human could check by hand), you're putting a huge amount of trust in the correctness of the theorem-proving-software itself.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-25T21:54:36.535Z · score: -1 (1 votes) · LW · GW

Perhaps because this might all be happening within the mirror, thus realizing both Harry!Riddle's and Voldy!Riddle's CEVs simultaneously.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110 · 2015-02-24T20:16:46.279Z · score: 5 (5 votes) · LW · GW

It seems like Mirror-Dumbledore acted in accordance with exactly what Voldemort wanted to see. In fact, Mirror-Dumbledore didn't even reveal any information that Voldemort didn't already know or suspect.

Odds of Dumbledore actually being dead?

Comment by calef on How to debate when authority is questioned, but really not needed? · 2015-02-23T03:34:16.015Z · score: 17 (17 votes) · LW · GW

Honestly, the only "winning" strategy here is to not argue with people on the comments sections of political articles.

If you must, try to cast the argument in a way that avoids the standard red tribe / blue tribe framing. Doing this can be hard because people generally aren't in the business of having political debates with the end goal of dissolving an issue--they just want to signal their tribe--which is why arguing on the internet is often a waste of time.

As to the question of authority: how would you expect the conversation to go if you were an economist?

Me: I think money printing by the Fed will cause inflation if they continue like this.

Random commenter: Are you an economist?

Me: Yes actually, I have a PhD in The Economy from Ivy League University.

Random commenter (possible response 1): I don't believe you, and continue to believe what I believe.

Random commenter (possible response 2): Oh well that's one of the (Conservative / Liberal) (pick one) schools, they're obviously wrong and don't know what they're talking about.

Random commenter (possible response 3): Economists obviously don't know what they're talking about.

Again, it's a mix of Dunning-Kruger and tribal signalling. There's not actually any direction an appeal-to-authority debate can go that's productive because the challenger has already made up their mind about the facts being discussed.

For a handful of relevant lesswrong posts:

http://lesswrong.com/lw/axn/6_tips_for_productive_arguments/
http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
http://lesswrong.com/lw/3k/how_to_not_lose_an_argument/

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T05:22:02.844Z · score: 3 (3 votes) · LW · GW

Yeah, it's already been changed:

A blank-eyed Professor Sprout had now risen from the ground and was pointing her own wand at Harry.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:32:50.335Z · score: 7 (7 votes) · LW · GW

So when Dumbledore asked the Marauder's Map to find Tom Riddle, did it point to Harry?

Comment by calef on Comments on "When Bayesian Inference Shatters"? · 2015-01-07T23:52:44.546Z · score: 2 (2 votes) · LW · GW

Here's a discussion of the paper by the authors. For a sort of critical discussion of the result, see the comments in this blog post.

Comment by calef on Entropy and Temperature · 2014-12-18T20:31:04.267Z · score: 1 (1 votes) · LW · GW

This is a good point. The negative side gives good intuition for the "negative temperatures are hotter than any positive temperature" argument.

Comment by calef on Entropy and Temperature · 2014-12-18T20:28:16.915Z · score: 1 (1 votes) · LW · GW

The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay).

If a layperson asks me what temperature is, I'll say something like, "It has to do with how energetic something is" or even "something's tendency to burn you". But I would never say "It's the average kinetic energy of the translational degrees of freedom of the system" because they don't know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate problem sets as a convenient fiction for approximating the real temperature of monatomic ideal gases--which, again, is usually a stepping stone to the thermodynamic definition of temperature as a partial derivative of entropy.

Alternatively, we could just have temperature(lay person) and temperature(precise). I will always insist on temperature(precise) being the entropic definition. And I have no problem with people choosing whatever definition they want for temperature(lay person) if it helps someone's intuition along.

Comment by calef on Entropy and Temperature · 2014-12-18T08:17:19.702Z · score: 0 (2 votes) · LW · GW

Because one is true in all circumstances and the other isn't? What are you actually objecting to? That physical theories can be more fundamental than each other?

Comment by calef on Entropy and Temperature · 2014-12-18T05:06:13.935Z · score: 0 (2 votes) · LW · GW

I just mean as definitions of temperature. There's temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.

Comment by calef on Entropy and Temperature · 2014-12-18T03:18:03.471Z · score: 1 (1 votes) · LW · GW

Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the "average kinetic energy" definition follows from the the entropic definition, not the other way around. All I'm saying is that it's worth emphasizing that one definition is strictly more general than the other.

Comment by calef on Entropy and Temperature · 2014-12-17T23:51:38.806Z · score: 4 (4 votes) · LW · GW

I think more precisely, there is such a thing as "the average kinetic energy of the particles", and this agrees with the more general definition of temperature "1 / (derivative of entropy with respect to energy)" in very specific contexts.

That there is a more general definition of temperature which is always true is worth emphasizing.
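For what it's worth, the standard derivation makes the direction of implication explicit (a textbook sketch, not something from this thread):

    \[
    \frac{1}{T} \;=\; \left(\frac{\partial S}{\partial E}\right)_{V,N}
    \]
    % For a monatomic ideal gas, the Sackur-Tetrode entropy scales as S ~ (3/2) N k_B ln E + const, so
    \[
    \frac{1}{T} = \frac{3 N k_B}{2E}
    \quad\Longrightarrow\quad
    E = \tfrac{3}{2} N k_B T ,
    \]
    % i.e. "temperature = average kinetic energy (up to a factor)" is a consequence of the
    % entropic definition for this particular system, not a definition in its own right.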

Comment by calef on Entropy and Temperature · 2014-12-17T23:46:26.734Z · score: 1 (1 votes) · LW · GW

I don't see the issue in saying [you don't know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.

Comment by calef on Stupid Questions December 2014 · 2014-12-08T22:59:42.852Z · score: 14 (14 votes) · LW · GW

According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from -132712440018/(2 × (1 AU)) to -132712440018/(2 × (7 AU)) (where the 12-digit constant is the standard gravitational parameter of the sun, in km^3/s^2). This is something like 5.6 × 10^10 Joules/kilogram, or about 3.4 × 10^34 Joules when we restore the reduced mass of the earth/sun system (which I'm approximating as just the mass of the earth).

Wolframalpha helpfully supplies that this is 28 times the total energy released by the sun in 1 year.

Or, if you like, it's equivalent to the total mass-energy of ~3.7 × 10^18 kilograms of matter (about 1.5% of the mass of the asteroid Vesta).

So we won't be able to do this any time soon--not until we're able to harness and control energy on the order of magnitude of the sun's total output over multiple years.

There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.
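For concreteness, here's a rough sketch of the arithmetic above in code. This is my own illustration using standard textbook constants, so the exact figures it produces will depend on the values and rounding used.

    # Back-of-the-envelope: energy to raise Earth's circular orbit from 1 AU to 7 AU.
    MU_SUN  = 1.32712440018e20   # solar standard gravitational parameter, m^3/s^2
    AU      = 1.495978707e11     # astronomical unit, m
    M_EARTH = 5.972e24           # mass of the Earth, kg (approximating the reduced mass)

    def specific_orbital_energy(a):
        """Specific orbital energy (J/kg) of a circular orbit of radius a around the sun."""
        return -MU_SUN / (2.0 * a)

    delta_eps = specific_orbital_energy(7 * AU) - specific_orbital_energy(1 * AU)  # J/kg
    delta_E   = M_EARTH * delta_eps                                                # J

    print(f"specific energy change: {delta_eps:.2e} J/kg")
    print(f"total energy required:  {delta_E:.2e} J")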

Comment by calef on Why is the A-Theory of Time Attractive? · 2014-11-01T00:15:13.957Z · score: 9 (11 votes) · LW · GW

I feel like this is just a really obnoxious argument about definitions.

I especially feel like this is a really obnoxious argument about definitions when the wiki article quotes things like:

"Take the supposed illusion of change. This must mean that something, X, appears to change when in fact it does not change at all. That may be true about X; but how could the illusion occur unless there were change somewhere? If there is no change in X, there must be a change in the deluded mind that contemplates X. The illusion of change is actually a changing illusion. Thus the illusion of change implies the reality of some change. Change, therefore, is invincible in its stubbornness; for no one can deny the appearance of change."

So, to taboo a bunch of words, and to try and state my take on the actual issue as I understand it (including some snark):

B theory: Let there be this thing called spacetime which encodes all moments of time (past, present, future) and space (i.e., the universe). The phenomenal experience of existence is akin to tracking a very particular slice of spacetime as it moves along at the speed that time inches forward, as observed by me.

A theory: My mind is the fundamental metaphysical object, and moments of "time" can only be oriented with respect to my immediate phenomenal experience of reality. Trying to say something about a grand catalog of time (including the future) robs me of this phenomenal experience because I know what I'm feeling, and I'm feeling the phenomenal experience of existing right now, dammit! Point to that on your fancy spacetime chart!

Read this way, I suppose the most succinct objection of the A-theorist is: "If all of spacetime exists, all reference frames are equivalent, etc. etc., why am I, in this moment, existing right now?" To which, I imagine, a B-theorist would respond by saying, "Because you're right here," and would then point to their location on the spacetime chart.

But this isn't actually an argument about what time is like. It's an argument about whether or not we should privilege the phenomenal experience of existing--of experiencing the now. That is, does me experiencing life right at this moment mean that this moment is special?

I suppose I can see why people that aren't computationalists would be bothered by the B theory, because it does rob you of that special-ness.

Comment by calef on Wikipedia articles from the future · 2014-10-31T05:54:22.057Z · score: 2 (2 votes) · LW · GW

Not that I actually believe most of what I wrote above (just that it hasn't yet been completely excluded), but if QG introduced small nonlinearities to quantum mechanics, fun things could happen, like superluminal signaling as well as the ability to solve NP-complete and #P-complete problems in polynomial time (which is probably better seen as a reason to believe that QG won't introduce a nonlinearity).

Comment by calef on Academic papers · 2014-10-31T00:06:57.837Z · score: 5 (5 votes) · LW · GW

You might be asking the wrong question. For example, the set of papers satisfying your first question:

What are the most important or personally influential academic papers you've ever read? (call this set A)

has almost no overlap with what I would consider the set of papers satisfying:

Which ones are essential (or just good) for an informed person to have read? (call this set B)

And this is for a couple of reasons. Scientific papers are written to communicate, "We have evidence of a result--here is our evidence, here is our result." with fairly minimal grounding of where that result stands within the broader scientific literature. Yes, there's an introduction section usually filled with a bunch of citations, and yes there's a conclusion section, but papers are (at least in my field) usually directed at people that are already experts in what the paper is being written about (unless that paper is a review article).

And this is okay. Scientific papers are essentially rapid communications. They're a condensed, "I did this!". Sometimes they're particularly well written and land in category A above. But I can't think of a single paper in my A column that I'd want a layman to read. None of them would make any sense to an "informed" layman.

My B column would probably have really good popular books written by experts--something like Quantum Computing Since Democritus, or, like others have said, introductory level textbooks.

Comment by calef on Wikipedia articles from the future · 2014-10-30T01:19:18.220Z · score: 14 (14 votes) · LW · GW

This article is marked as controversial and has been locked, see talk page for details.

Quantum computing winter

The Quantum computing winter was the period from 1995 to approximately October 2031 when experimental progress on the creation of fault tolerant quantum computers stalled despite significant effort at constructing the machines. The era ended with the publication of the Kitaev-Kalai-Alicki-Preskill (KKAP) theorem in early 2030 which purported to show that the construction of fault-tolerant quantum computers was in fact impossible due to fundamental constraints. The theorem was not widely accepted until experiments performed by Mikhail Lukin's group in early 2031 verified the bounds provided in the KKAP theorem.

Early history

Quantum computing technology looked promising in the late 20th and early 21st century due to the celebrated Fault Tolerance theorems, as well as the rapid experimental progress towards satisfying the fault tolerance threshold. The Fault Tolerance theorem, which at the time was thought to be based on reasonable assumptions, guaranteed scalable, fault tolerant quantum computation could be performed--provided an architecture could be built that had an error rate smaller than a known bound.

In the early 2010s, superconducting qubit architectures designed by John Martinis' group at Google, and then HYPER Inc., looked poised to satisfy the threshold theorems, and considerable work was done to build scaled architectures with many millions of physical qubits by the mid 2020s.

However, despite what seemed to be guarantees via threshold theorems for their architectures, the Martinis group was never able to report large concurrences for more than 12 (disputed) logical qubits.

The scalability wall

Parallel to the development of the scalable, silicon architectures, many groups continued work on other traditional schemes like neutral atoms, trapped ions, and Nuclear Magnetic Resonance (NMR) based devices. These devices, in turn, ran into the now named Scalability Wall of 12 (disputed) entangled encoded qubits. For a discussion on the difference between encoded and physical qubits, see the discussion in Quantum error correction.

The Martinis group hoped that polishing their hardware, and scaling the size of their error correction schemes would allow them to surpass the limit, but progress stalled for more than a decade.

Correlated noise catastrophe

Alexei Kitaev, building on earlier work by Gil Kalai, Robert Alicki, and John Preskill, published a series of papers in the late 2020s, culminating in the 2030 theorem now known as the KKAP Theorem, or the Noise Catastrophe Theorem. This proof traced how fundamental limits on the noise experienced by quantum mechanical objects irretrievably destroy the controllability of quantum systems beyond only a few qubits. Uncontrollable correlations were shown to arise in any realistic noise model, essentially disproving the possibility of large scale quantum computation.

Aftermath (This section has been marked as controversial, see the talk page for details)

The immediate aftermath of the publication of the proof was disbelief. Almost all indications pointed towards scalable quantum computation being possible, and that only engineering problems stood in the way of truly scalable quantum computation. The Nobel Prize (2061) winning work of Mikhail Lukin's team at Harvard only reinforced the shock felt by the Quantum Information community when the bounds provided in the KKAP Theorem's proof were explicitly saturated via cold atom experiments. Funding in quantum information science rapidly dwindled in the following years, and the field of Quantum Information was nearly abandoned. The field has since been reinvigorated by Kitaev's recent proof of the possibility of Quantum Gravitational computers in 2061.

Comment by calef on Is this paper formally modeling human (ir)rational decision making worth understanding? · 2014-10-24T00:19:10.728Z · score: 8 (8 votes) · LW · GW

Not being in the field, but having experience in making the judgement "Should I read this paper", here are a handful of observations:

For:

  1. The paper has a handful of citations not entirely from the author (http://scholar.google.com/scholar?cites=8141802968877948536&as_sdt=2005&sciodt=0,5&hl=en) but by no means a huge number of citations.

  2. The abstract is remarkably clear (it's clear that this is a slight extension of other author's work), and the jargon-y words are easily figured out based on gentle perusal of the paper.

  3. It looks like this paper is actually also a chapter in a textbook (http://link.springer.com/chapter/10.1007/978-3-642-11876-0_8)

Against:

  1. Nearly half of the paper's (very few) references in its reference section are self-citations.

I'd say it's worth reading if you're interested in it. Even the against-point above is more of a general heuristic and not necessarily a bad thing.

Comment by calef on The Great Filter is early, or AI is hard · 2014-08-30T08:38:41.838Z · score: 2 (4 votes) · LW · GW

Fusion is technologically possible (cf. the sun). It just might not be technologically easy.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-29T02:39:58.131Z · score: -3 (3 votes) · LW · GW

I disagree that "giving answers is an irreversible operation". My setup explicitly doesn't "forget" the calculation (the calculation being simulating someone proving the Riemann hypothesis, and us extracting that proof from the simulation), and my setup is explicitly reversible (because we have the full density matrix of the system at all times, and can in principle perform unitary time evolution backwards from the final state if we wanted to).

Nothing is ever being forgotten. I'm not sure where that came from, because I've never claimed that anything is being forgotten at any step. I'm not sure why you're insisting that things be forgotten to satisfy reversibility, either.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-29T01:50:27.294Z · score: 0 (0 votes) · LW · GW

I'm suggesting that the person running the simulation knows the state of the simulation at all times. If this bothers you, pretend everything is being done digitally, on a classical computer, with exponential slowdown.

Such a calculation can be done reversibly without ever passing information into the system.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T23:34:35.600Z · score: 0 (0 votes) · LW · GW

I'm not sure who you're talking about because I'm the person above referring to someone writing on paper--and the paper was meant to also be within the simulation. The simulator is "reading the paper" by nature of having perfect information about the system.

"Reversible" in this context is only meant to describe the contents of the simulation. Computation can occur completely reversibly.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T22:52:31.809Z · score: 0 (0 votes) · LW · GW

"Reading it" is akin to "having perfect information about the full density matrix of the system". You don't have to perturb the system to get information out of it.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T22:41:10.920Z · score: 0 (0 votes) · LW · GW

I think the confusion here is about what "fully quantum whole brain emulation" actually means.

The idea is that you have a box (probably large), within which is running a closed system calculation which is equivalent to simulating someone sitting in a room trying to prove a theorem (all the way down to the quantum level). You are not interacting with the simulation; you are running the simulation. At every stage of the simulation, you have perfect information about the full density matrix of the system (i.e., the person being simulated, the room, the atoms in the person's brain, the movements of the pencil, etc.).

If you have this level of control, then you are implementing the full unitary time evolution of the system. The time evolution operator is reversible. Thus, you can just run the calculation backwards.
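Stated compactly (standard quantum mechanics, included here as a sketch):

    \[
    \rho(t) = U(t)\,\rho(0)\,U^\dagger(t),
    \qquad
    U(t) = \mathcal{T}\exp\!\Big(-\tfrac{i}{\hbar}\int_0^t H(t')\,dt'\Big),
    \]
    % so whoever implements U(t) and holds the full density matrix rho(t) can recover the
    % initial state exactly:
    \[
    \rho(0) = U^\dagger(t)\,\rho(t)\,U(t).
    \]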

So, to the person in the room writing the proof, as far as they know, the photon flying from the paper hitting their eye and being registered by their brain is an irreversible interaction--they don't have complete control over their environment. But to you, the simulation runner, this action is perfectly reversible.

Now, the contention may be that this simulated person wasn't actually ever conscious during the course of this ultra-high-fidelity experiment. Answering that question either way seems to have strange philosophical implications.

Comment by calef on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-28T17:46:05.100Z · score: 0 (0 votes) · LW · GW

You could set up a fully quantum whole brain emulation of a person sitting in a room with a piece of paper that says "Prove the Riemann Hypothesis". Once they've finished the proof, you record what's written on their paper, and reverse the entire simulation (as it was fully quantum mechanical, thus, in principle, fully unitarily reversible).

Looking at what they wrote on the paper doesn't mean you have to communicate with them.

Comment by calef on Career prospects for physics majors · 2014-04-04T21:11:25.831Z · score: 0 (0 votes) · LW · GW

"I'm not sure. Although I can say with reasonable confidence that I don't want to go into academia."

Then definitely don't go into Physics. You will be much better served by engineering or computer science.

Comment by calef on Career prospects for physics majors · 2014-04-04T18:02:54.635Z · score: 5 (5 votes) · LW · GW

Disclosure: I have a degree in Physics (and Mathematics), and I'm in graduate school at a top 3 institution.

The hardest thing about getting a degree in Physics is that you don't actually learn what it even means to be a Physicist until probably your junior year (or often later).

But this is a fairly broadly applicable statement--most college courses for your first two years are fundamentals, irrespective of degree. And The Real World (tm) is essentially nothing like an introductory college course.

But Physics is particularly bad in that almost the entire degree is comprised of 'fundamentals'. You'll be hella pro at solving differential equations and calculating forces, but not in any immediately marketable way.

Unless you independently seek out research opportunities, you won't actually be exposed to what it's like to be a physicist.

I'm saying all of this because "It's intellectually stimulating" will only be a true statement if you're intellectually stimulated by what Physics actually is. For example: are you comfortable with struggling to even come up with a problem to solve, struggling to actually solve that problem, struggling to make sense of the answer to that problem, and often finding that the answer to your initially posed problem is uninteresting?

Are you comfortable with probably having to complete more than one post-doc appointment before even starting a tenure track position (with extremely limited ability to choose where that position might be)?

Are you comfortable having your funding be at the mercy of the often schizophrenic grant process?

Because the practice of physics is a struggle. And it is manifestly unlike anything taught in a physics course.

Personally, I find this process intellectually stimulating. But that doesn't stop me from intensely eyeing industry jobs as I get older. The nice thing about Physics is that you absolutely can get any number of different non-academic jobs, but you do have to be proactive about this--you have to develop programming skills, pursue internships, diversify your course-load. There's no traditional non-academic path for a Physics major, and it's often the case (as this article points out) that you would have saved yourself a lot of trouble if you'd just majored in Engineering.

Comment by calef on Double-thick transistors and other subjective phenomena · 2014-01-13T07:48:01.161Z · score: 2 (2 votes) · LW · GW

Was this motivated by Nick Bostrom's paper? If not, you might enjoy it--it also discusses the idea of splitting a consciousness into copies by duplicating identical computational processes.

Comment by calef on Open thread for December 17-23, 2013 · 2013-12-19T19:31:11.504Z · score: 43 (43 votes) · LW · GW

A full half (20/40) of the posts currently under discussion are meetup threads.

Can we please segregate these threads to another forum tab (in the vein of the Main/Discussion split)?

Edit: And only 5 or so of them actually have any comments in them.

Comment by calef on an ethical puzzle about brain emulation · 2013-12-18T02:52:28.754Z · score: 0 (0 votes) · LW · GW

Just for reference, this has been discussed on less wrong before in my post Waterfall ethics.

Comment by calef on Open Thread, October 27 - 31, 2013 · 2013-10-29T01:09:52.937Z · score: 0 (0 votes) · LW · GW

This is a little chicken-or-the-egg in terms of "what's more fundamental?", but nonrelativistic QFT really is just the Schrodinger equation with some sparkles.

For example, the language electronic structure theorists use to talk about electronic excitations in insert-your-favorite-solid-state-system-here really is quantum field theoretic--excited electronic states are just quantized excitations about some vacuum (usually, the many-body ground state wavefunction).

Another example: http://en.wikipedia.org/wiki/Kondo_model

You could switch to a purely Schrodinger-Equation-motivated way of writing everything out, but you would quickly find that it's extremely cumbersome, and it's not terribly straightforward how to treat creation and annihilation of particles by hand.
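To make the "sparkles" concrete (a standard textbook correspondence, sketched here rather than taken from the original comment): the nonrelativistic field Hamiltonian restricted to a fixed particle number is just the many-body Schrodinger Hamiltonian, while the field operators make creating and destroying particles a one-symbol operation.

    \[
    H = \int d^3x\; \psi^\dagger(x)\Big(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\Big)\psi(x)
      + \frac{1}{2}\int d^3x\, d^3x'\; \psi^\dagger(x)\,\psi^\dagger(x')\,U(x-x')\,\psi(x')\,\psi(x)
    \]
    % Acting within the N-particle sector this reproduces the N-body Schrodinger equation;
    % operators like \psi^\dagger(x) add a particle at x, which is the part that is awkward
    % to write down "by hand" in first-quantized notation.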

Comment by calef on Open Thread, October 27 - 31, 2013 · 2013-10-28T06:24:00.495Z · score: 4 (6 votes) · LW · GW

Does the Schrodinger equation tell us how to increase the relative probability of interacting with an almost completely orthogonal Everett Branch?

"Almost completely orthogonal" here bears qualifying: In classical thermodynamics, the concept of entropy is sometimes taught by appealing to the probability of all of the gas in a room happening to end up in a configuration where one half of the room is vacuum, and the other half of the room contains gas. After some calculation, we see that the probability of this happening ends up being (effectively) on the order of 10^(-10^23), give or take several orders of magnitude (not like it matters at that point).

Now, that said, how confident are you that different Everettian earths are even at the same point of spacetime we are, given a branching, say, 10 seconds ago? Pick an atom before the split and pick its two copies after. Are they still within a Bohr radius of each other after even a nanosecond? Their phases are already scrambled all to hell, so that's a fun unitary transformation to figure out.

Sure, you can prepare highly quantum mechanical sources and demonstrate interference effects, but "interuniversal travel" for any meaningful sense of the word, is about as hard as simply transforming the universe itself, subatomically, atom for atom, controllably into a different reality.

So in that sense, Schrodinger's equation tells us as much about trans-universe physics as the second law of thermodynamics tells us about building large scale Maxwell's Demons.