Posts

Typical Sneer Fallacy 2015-09-01T03:13:53.781Z
Scott Aaronson's cautious optimism for the MWI 2012-08-19T02:35:52.503Z
Waterfall Ethics 2012-01-30T21:14:28.774Z

Comments

Comment by calef on Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky · 2023-03-31T02:13:23.127Z · LW · GW

I’ve seen pretty uniform praise from rationalist audiences, so I thought it worth mentioning that the prevailing response I’ve seen from within a leading lab working on AGI is that Eliezer came off as an unhinged lunatic.

For lack of a better way of saying it, folks not enmeshed within the rat tradition—i.e., normies—do not typically respond well to calls to drop bombs on things, even if such a call is a perfectly rational deduction from the underlying premises of the argument. Either Eliezer knew that the entire response to the essay would be dominated by people decrying his call for violence, and this was tactical for 15-dimensional-chess reasons, or he severely overestimated people’s ability to identify that the actual point of disagreement is around p(doom), and not around how governments should respond to an incredibly high p(doom).

This strikes me as a pretty clear failure to communicate.

Comment by calef on Carrying the Torch: A Response to Anna Salamon by the Guild of the Rose · 2022-07-10T01:47:38.090Z · LW · GW

Probably one of the core infohazards of postmodernism is that “moral rightness” doesn’t really exist outside of some framework. Asking about the “rightness” of a change is kind of a null pointer, in the same way that self-modifying your own reward centers can’t be straightforwardly phrased in terms of how your reward centers “should” feel about such rewiring.

Comment by calef on Failing to fix a dangerous intersection · 2022-07-01T21:39:40.632Z · LW · GW

For literally “just painting the road”, the cost of materials (paint) would be $50, yes. Doing it “right”, in a way that’s indistinguishable from if the state of California did it, would almost certainly require experimenting with multiple paints, time spent measuring the intersection and planning out a new paint pattern that matches a similar intersection template, and probably even signage changes: removing the wrong signs (which is likely some kind of misdemeanor if not a felony) and replacing them with the correct ones. Even just in opportunity cost, this is looking like tens of hours of work, plus hundreds to thousands of dollars in materials and required tools.

Comment by calef on Failing to fix a dangerous intersection · 2022-07-01T18:35:40.499Z · LW · GW

You could probably implement this change for less than $5,000 and with minimal disruption to the intersection if you (for example) repainted the lines overnight and put authoritative cones around the drying paint.

Who will be the hero we need?

Comment by calef on [Linkpost] Solving Quantitative Reasoning Problems with Language Models · 2022-07-01T16:40:49.852Z · LW · GW

Google doesn’t seem interested in serving large models until it has a rock solid solution to the “if you ask the model to say something horrible, it will oblige” problem.

Comment by calef on autonomy: the missing AGI ingredient? · 2022-05-25T07:03:12.870Z · LW · GW

The relevant sub-field of RL calls this “lifelong learning”, though I actually prefer your framing because it makes pretty crisp what we actually want.

I also think that solving this problem is probably closer to “something like a transformer and not very far away”, considering, e.g., the memorizing transformers work (https://arxiv.org/abs/2203.08913).

Comment by calef on What are the numbers in mind for the super-short AGI timelines so many long-termists are alarmed about? · 2022-04-22T01:01:49.409Z · LW · GW

I think the difficulty with answering this question is that many of the disagreements boil down to differences in estimates for how long it will take to operationalize lab-grade capabilities. Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen? Etc. etc.

If I got anything out of the thousands of words of debate on the site in the last couple of months, it’s the answers to these questions that folks seem to disagree about (though I think I only actually have a good sense of Paul’s answers to these). Also curious to see more specific answers / timelines.

Comment by calef on [Link] Training Compute-Optimal Large Language Models · 2022-03-31T18:20:39.272Z · LW · GW

Something worth reemphasizing for folks not in the field is that these benchmarks are not like the usual benchmarks, where you train the model on the task and then see how well it does on a held-out set. Chinchilla was not explicitly trained on any of these problems. It’s typically given some context like “Q: What is the southernmost continent? A: Antarctica Q: What is the continent north of Africa? A:” and then simply completes the prompt until a stop token is emitted, such as a newline character.

And it’s performing above-average-human on these benchmarks.
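
To make the setup concrete, here is a minimal sketch of this style of few-shot evaluation, assuming a hypothetical text-completion function complete(prompt, stop) (the real evaluation harnesses differ in their details):

```python
# Sketch of few-shot prompted evaluation: the model is never fine-tuned on
# the benchmark; it just continues the prompt until the stop token.

def build_prompt(examples, question):
    """Concatenate a few solved Q/A pairs, then the unanswered question."""
    shots = "".join(f"Q: {q}\nA: {a}\n" for q, a in examples)
    return shots + f"Q: {question}\nA:"

def evaluate(complete, examples, test_set):
    """Fraction of test questions whose completion matches the reference answer."""
    correct = 0
    for question, answer in test_set:
        prediction = complete(build_prompt(examples, question), stop="\n").strip()
        correct += prediction == answer
    return correct / len(test_set)

# e.g. evaluate(complete,
#               examples=[("What is the southernmost continent?", "Antarctica")],
#               test_set=[("What is the continent north of Africa?", "Europe")])
```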

Comment by calef on Ngo and Yudkowsky on scientific reasoning and pivotal acts · 2022-02-21T23:44:02.803Z · LW · GW

That got people to, I dunno, 6 layers instead of 3 layers or something? But it focused attention on the problem of exploding gradients as the reason why deeply layered neural nets never worked, and that kicked off the entire modern field of deep learning, more or less.

This might be a chicken or egg thing.  We couldn't train big neural networks until we could initialize them correctly, but we also couldn't train them until we had hardware that wasn't embarrassing / benchmark datasets that were nontrivial.

While we figured out empirical init strategies fairly early, like Glorot init in 2010, it wasn't until much later that we developed initialization schemes that really Just Worked (He init in 2015, Dynamical Isometry from Xiao et al. 2018).
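
For concreteness, here is a small sketch of the two named schemes (the standard formulas, written in plain NumPy; the function names are my own):

```python
import numpy as np

def glorot_init(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Glorot/Xavier (2010): weight variance scaled by the average fan,
    tuned for roughly linear or tanh-like activations."""
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

def he_init(fan_in, fan_out, rng=np.random.default_rng(0)):
    """He et al. (2015): weight variance scaled by fan-in only, which keeps
    activation magnitudes roughly constant through deep stacks of ReLU layers."""
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_out, fan_in))
```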

If I had to blame something, I'd blame GPUs and custom kernel writing getting to the point that small research labs could begin to tinker with ~few million parameter models on essentially single machines + a few GPUs.  (The AlexNet model from 2012 was only 60 million parameters!)

Comment by calef on Shulman and Yudkowsky on AI progress · 2021-12-04T01:28:46.727Z · LW · GW

For what it's worth, the most relevant difficult-to-fall-prey-to-Goodhartian-tricks measure is probably cross-entropy validation loss, as shown in the validation-loss scaling figure in the GPT-3 paper.

Serious scaling efforts are much more likely to emphasize progress here over Parameter Count Number Bigger clickbait.

Further, while this number will keep going down, we're going to crash into the entropy of human-generated text at some point. Whether that's within three OOMs or ten is anybody's guess, though.
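
For reference, the quantity being tracked is just the average negative log-probability the model assigns to each held-out token. A minimal sketch, assuming a hypothetical model_probs(context) function that returns a dict of next-token probabilities:

```python
import math

def validation_cross_entropy(model_probs, tokens):
    """Average negative log-likelihood (nats per token) on held-out text."""
    total = 0.0
    for i in range(1, len(tokens)):
        context, target = tokens[:i], tokens[i]
        p = model_probs(context).get(target, 1e-12)  # floor to avoid log(0)
        total += -math.log(p)
    return total / (len(tokens) - 1)  # lower is better; exp() of this is the perplexity
```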

Comment by calef on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-02T05:22:01.479Z · LW · GW

By the standards of “we will have a general intelligence”, Moravec is wrong, but by the standards of “computers will be able to do anything humans can do”, Moravec’s timeline seems somewhat uncontroversially prescient? For essentially any task for which we can define a measurable success metric, we more or less* know how to fashion a function approximator that’s as good as or better than a human.

*I’ll freely admit that this is moving the goalposts, but there’s a slow, boring path to “AGI” where we completely automate the pipeline for “generate a function approximator that is good at [task]”. The tasks that we don’t yet know how to do this for are increasingly occupying the narrow space of [requires simulating social dynamics of other humans], which, just on computational complexity grounds, may be significantly harder than [become superhuman at all narrowly defined tasks].

Relatedly, do you consider [function approximators for basically everything becoming better with time] to also fail to be a good predictor of AGI timelines for the same reasons that compute-based estimates fail?

Comment by calef on larger language models may disappoint you [or, an eternally unfinished draft] · 2021-11-27T02:59:02.206Z · LW · GW

In defense of shot-ness as a paradigm:

Shot-ness is a nice task-agnostic interface for revealing capability that doesn’t require any cleverness from the prompt designer. Said another way, if you needed task-specific knowledge to construct the prompt that makes GPT-3 reveal it can do the task, it’s hard to compare “ability to do that task” in a task-agnostic way against other potential capabilities.

For a completely unrealistic example that hyperbolically gestures at what I mean: you could spend a tremendous amount of compute to come up with the magic password prompt that gets GPT-3 to reveal that it can prove P!=NP, but this is worthless if that prompt itself contains a proof that P!=NP, or worse, is harder to generate than the original proof.

This is not what it “feels like” when GPT-3 suddenly demonstrates it is able to do something, of course—it’s more like it just suddenly knows what you meant, and does it, without your hinting really seeming like it provided anything particularly clever-Hans-y. So it’s not a great analogy. But I can’t help but feel that a “sufficiently intelligent” language model shouldn’t need to be cajoled into performing a task you can demonstrate to it, so I personally don’t want to have to rely on cajoling.

Regardless, it’s important to keep track of both “can GPT-n be cajoled into this capability?” as well as “how hard is it to cajole GPT-n into demonstrating this capability?”. But I maintain that shot-prompting is one nice way of probing this while holding “cajoling-ness” relatively fixed.

This is of course moot if all you care about is demonstrating that GPT-n can do the thing. Of course you should prompt tune. Go bananas. But it makes a particular kind of principled comparison hard.

Edit: wanted to add, thank you tremendously for posting this—always appreciate your LLM takes, independent of how fully fleshed out they might be.

Comment by calef on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-26T20:10:20.126Z · LW · GW

Honestly, at this point, I don’t remember if it’s inferred or primary-sourced. Edited the above for clarity.

Comment by calef on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-26T19:41:06.524Z · LW · GW

This is based on:

  1. The Q&A you mention
  2. GPT-3 not being trained on even one pass of its training dataset
  3. “Use way more compute” achieving outsized gains by training longer than by most other architectural modifications for a fixed model size (while you’re correct that bigger model = faster training, you’re trading off against ease of deployment, and models much bigger than GPT-3 become increasingly difficult to serve at prod. Plus, we know it’s about the same size, from the Q&A)
  4. Some experience with undertrained enormous language models underperforming relative to expectation

This is not to say that GPT-4 won’t have architectural changes. Sam mentioned a longer context at the least. But these sorts of architectural changes probably qualify as “small” in the parlance of the above conversation.

Comment by calef on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-26T07:36:25.040Z · LW · GW

I believe Sam Altman implied they’re simply training a GPT-3-variant for significantly longer for “GPT-4”. The GPT-3 model in prod is nowhere near converged on its training data.

Edit: changed to be less certain; I'm pretty sure this follows from public comments by Sam, but he has not said this exactly.

Comment by calef on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-25T02:20:37.480Z · LW · GW

OpenAI is still running evaluations.

Comment by calef on Ngo and Yudkowsky on AI capability gains · 2021-11-19T02:03:28.759Z · LW · GW

This was frustrating to read.

There’s some crux hidden in this conversation regarding how much humanity’s odds depend on the level of technology (read: GDP) increase we’ll be able to achieve with pre-scary-AGI. It seems like Richard thinks we could be essentially post-scarcity, thus radically changing the geopolitical climate (and possibly making collaboration on an X-risk more likely? (this wasn’t spelled out clearly)). I actually couldn’t suss out what Eliezer thinks from this conversation—possibly that humanity’s odds are basically independent of the achieved level of technology, or that the world ends significantly sooner than we’ll be able to deploy transformative tech, so the point is moot. I wish y’all had nailed this down further.

Despite the frustration, this was fantastic content, and I’m excited for future installments.

Comment by calef on Ngo and Yudkowsky on alignment difficulty · 2021-11-17T02:34:59.476Z · LW · GW

Sure, but you have essentially no guarantee that such a model would remain contained to that group, or that the insights gleaned from that group could be applied unilaterally across the world before a “bad”* actor reimplemented the model and started asking it unsafe prompts.

Much of the danger here is that once any single lab on earth can make such a model, state actors probably aren’t more than 5 years behind, and likely aren’t more than 1 year behind given the economic value that an AGI represents.

  • “bad” here doesn’t really mean evil in intent, just an actor that is unconcerned with the safety of their prompts, and thus likely to (in Eliezer’s words) end the world

Comment by calef on Ngo and Yudkowsky on alignment difficulty · 2021-11-16T19:54:35.095Z · LW · GW

I don’t think the issue is the existence of safe prompts, the issue is proving the non-existence of unsafe prompts. And it’s not at all clear that a GPT-6 that can produce chapters from 2067EliezerSafetyTextbook is not already past the danger threshold.

Comment by calef on Unusual medical event led to concluding I was most likely an AI in a simulated world · 2017-09-18T17:32:25.158Z · LW · GW

If you haven't already, you might consider speaking with a doctor. Sudden, intense changes to one's internal sense of logic are often explainable by an underlying condition (as you yourself have noted). I'd rather not play the "diagnose a person over the internet" game, nor encourage anyone else here to do so. You should especially see a doctor if you actually think you've had a stroke. It is possible to recover from many different sorts of brain trauma, and the earlier you act, the better odds you have of identifying the problem (if it exists!).

Comment by calef on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-27T02:35:07.735Z · LW · GW

What can a "level 5 framework" do, operationally, that is different than what can be done with a Bayes net?

I admit that I don't understand what you're actually trying to argue, Christian.

Comment by calef on John Nash's Ideal Money: The Motivations of Savings and Thrift · 2017-01-18T05:53:26.145Z · LW · GW

Hi Flinter (and welcome to LessWrong)

You've resorted to a certain argumentative style in some of your responses, and I wanted to point it out to you. Essentially, someone criticizes one of your posts, and your response is something like:

"Don't you understand how smart John Nash is? How could you possibly think your criticism is something that John Nash hadn't thought of already?"

The thing about ideas, notwithstanding the brilliance of those ideas or where they might have come from, is that communicating those ideas effectively is just as important as the idea itself. Even if Nash's Ideal Money scheme is the most important thing in the universe, if you can't communicate the idea effectively, and if you can't convincingly respond to criticism without hostility, no one will ever understand that idea but you.

A great modern example of this is Mochizuki's inter-universal Teichmüller theory, which he singlehandedly developed over the course of a decade in near-complete isolation. It's an extremely technically dense new way of doing number theory that he claims resolves several outstanding conjectures (including the ABC conjecture, among a couple of others). And it's taken over four years for some very high-profile mathematicians to start verifying that it's probably correct. This required workshops and hundreds of communications between Mochizuki and other mathematicians.

Point being: Progress is sociological as much as it is empirical. If you aren't able to effectively communicate the importance of an idea, it might be because the community at large is hostile to new ideas, even when represented in the best way possible. But if a community--a community which is, nominally, dedicated to rationally evaluating ideas--is unable to understand your representation, or see the importance of it, it might just be because you're bad at explaining it, the idea isn't all that great, or both.

Comment by calef on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-16T02:33:38.829Z · LW · GW

I've found that I only ever get something sort of like sleep paralysis when I sleep flat on my back, so +1 for sleeping orientation mattering for some reason.

Comment by calef on Open thread, Mar. 14 - Mar. 20, 2016 · 2016-03-16T02:31:26.653Z · LW · GW

This is essentially what username2 was getting at, but I'll try a different direction.

It's entirely possible that "what caused the big bang" is a nonsensical question. 'Causes' and 'Effects' only exist insofar as there are things which exist to cause causes and effect effects. The "cause and effect" apparatus could be entirely contained within the universe, in the same way that it's not really sensible to talk about "before" the universe.

Alternatively, it could be that there's no "before" because the universe has always existed. Or that our universe nucleated from another universe, and that one could follow the causal chain of universes nucleating within universe backwards forever. Or that time is circular.

I suspect that the reason I'm not religious is that I'm not at all bothered by the question "Why is there a universe, rather than not a universe?" not having a meaningful answer. Or rather, it feels overwhelmingly anthropocentric to expect that the answer to that question, if there even was one, would be comprehensible to me. Worse, if the answer really was "God did it," I think I would just be disappointed.

Comment by calef on Typical Sneer Fallacy · 2015-09-04T01:12:47.331Z · LW · GW

If you aren't interested in engaging with me, then why did you respond to my thread? Especially when the content of your post seems to be "No, you're wrong, and I don't want to explain why I think so"?

Comment by calef on Typical Sneer Fallacy · 2015-09-03T23:28:28.626Z · LW · GW

What precisely is Eliezer basically correct about on the physics?

It is true that non-unitary gates allow you to break physics in interesting ways. It is absolutely not true that violating conservation of energy will lead to a nonunitary gate. Eliezer even eventually admits (or at least admits that he 'may have misunderstood') an error in the physics here. (see this subthread).

This isn't really a minor physics mistake. Unitarity really has nothing at all to do with energy conservation.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T20:35:41.803Z · LW · GW

Haha fair enough!

Comment by calef on Typical Sneer Fallacy · 2015-09-01T20:34:26.703Z · LW · GW

I never claimed that whether he was right or not wasn't important. I just didn't focus on that aspect of the argument because it's been discussed at length elsewhere (the reddit thread, for example). And I've repeatedly offered to talk about the object level point if people were interested.

I'm not sure why someone's sense of fairness would be rankled when I directly link to essentially all of the evidence on the matter. It would be different if I was just baldly claiming "Eliezer done screwed up" without supplying any evidence.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:52:07.501Z · LW · GW

I never said that determining the sincerity of criticism would be easy. I can step through the argument with links, if you'd like!

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:39:45.401Z · LW · GW

Yes, I wrote this article because Eliezer very publicly committed the typical sneer fallacy. But I'm not trying to character-assassinate Eliezer. I'm trying to identify a poisonous sort of reasoning, and indicate that everyone does it, even people who spend years of their lives writing about how to be more rational.

I think Eliezer is pretty cool. I also don't think he's immune from criticism, nor do I think he's an inappropriate target of this sort of post.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T19:27:44.403Z · LW · GW

Which makes for a handy immunizing strategy against criticisms of your post, n'est-ce pas?

It's my understanding that your criticism of my post was that the anecdote would be distracting. One of the explicit purposes of my post was to examine a polarizing example of [the fallacy of not taking criticism seriously] in action--an example which you proceed to not take seriously in your very first post in this thread simply because of a quote you have of Eliezer blowing the criticism off.

The ultimate goal here is to determine how to evaluate criticism. Learning how to do that when the criticism comes from across party lines is central.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T14:37:26.450Z · LW · GW

I mean, if you'd like to talk about the object level point of "was the criticism of Eliezer actually true", we can do that. The discussion elsewhere is kind of extensive, which is why I tried to focus on the meta-level point of the Typical Sneer Fallacy.

Comment by calef on Typical Sneer Fallacy · 2015-09-01T04:14:30.846Z · LW · GW

I suspect how readers respond to my anecdote about Eliezer will fall along party lines, so to speak.

Which is kind of the point of the whole post. How one responds to the criticism shouldn't be a function of one's loyalty to Eliezer. Especially when su3su2u1 explicitly isn't just "making up most of" his criticism. Yes, his series of review-posts are snarky, but he does point out legitimate science errors. That he chooses to enjoy HPMOR via (c) rather than (a) shouldn't have any bearing on the true-or-false-ness of his criticism.

I've read su3su2u1's reviews. I agree with them. I also really enjoyed HPMOR. This doesn't actually require cognitive dissonance.

(I do agree, though, that snarkiness isn't really useful in trying to get people to listen to criticism, and often just backfires)

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-23T23:30:05.406Z · LW · GW

I mean, sure, but this observation (i.e., "We have tools that allow us to study the AI") is only helpful if your reasoning techniques allow you to keep the AI in the box.

Which is, like, the entire point of contention, here (i.e., whether or not this can be done safely a priori).

I think that you think MIRI's claim is "This cannot be done safely." And I think your claim is "This obviously can be done safely" or perhaps "The onus is on MIRI to prove that this cannot be done safely."

But, again, MIRI's whole mission is to figure out the extent to which this can be done safely.

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-21T23:06:11.394Z · LW · GW

As far as I can tell, you're responding to the claim, "A group of humans can't figure out complicated ideas given enough time." But this isn't my claim at all. My claim is, "One or many superintelligences would be difficult to predict/model/understand because they have a fundamentally more powerful way to reason about reality." This is trivially true once the number of machines which are "smarter" than humans exceeds the total number of humans. The extent to which it is difficult to predict/model the "smarter" machines is a matter of contention. The precise number of "smarter" machines and how much "smarter" they need be before we should be "worried" is also a matter of contention. (How "worried" we should be is a matter of contention!)

But all of these points of contention are exactly the sorts of things that people at MIRI like to think about.

Comment by calef on Leaving LessWrong for a more rational life · 2015-05-21T20:51:29.782Z · LW · GW

This argument is, however, nonsense. The human capacity for abstract reasoning over mathematical models is in principle a fully general intelligent behaviour, as the scientific revolution has shown: there is no aspect of the natural world which has remained beyond the reach of human understanding, once a sufficient amount of evidence is available. The wave-particle duality of quantum physics, or the 11-dimensional space of string theory may defy human intuition, i.e. our built-in intelligence. But we have proven ourselves perfectly capable of understanding the logical implications of models which employ them. We may not be able to build intuition for how a super-intelligence thinks. Maybe—that's not proven either. But even if that is so, we will be able to reason about its intelligent behaviour in advance, just like string theorists are able to reason about 11-dimensional space-time without using their evolutionarily derived intuitions at all.

This may be retreating to the motte, so to speak, but I don't think anyone seriously thinks that a superintelligence would be literally impossible to understand. The worry is that there will be such a huge gulf between how superintelligences reason and how we reason that it would take prohibitively long to understand them.

I think a laptop is a good example. There probably isn't any single human on earth who knows how to build a modern laptop from scratch. There are computer scientists who know how the operating system is put together--how the operating system is programmed, how memory is written to and retrieved from the various buses; there are other computer scientists and electrical engineers who designed the chips themselves, who arrayed circuits efficiently to dissipate heat and optimize signal latency. Even further, there are material scientists and physicists who designed the transistors and chip fabrication processes, and so on.

So, as an individual human, I don't know what it's like to know everything about a laptop all at once in my head, at a glance. I can zoom in on an individual piece and learn about it, but I don't know all the nuances for each piece--just a sort of executive summary. The fundamental objects with which I can reason have a sort of characteristic size in mindspace--I can imagine 5, maybe 6 balls moving around with distinct trajectories (even then, I tend to group them into smaller subgroups). But I can't individually imagine a hundred (I could sit down and trace out the paths of a hundred balls individually, of course, but not all at once).

This is the sense in which a superintelligence could be "dangerously" unpredictable. If the fundamental structures it uses for reasoning greatly exceed a human's characteristic size of mindspace, it would be difficult to tease out its chain of logic. And this only gets worse the more intelligent it gets.

Now, I'll grant you that the lesswrong community likes to sweep under the rug the great competition of timescales and "size" scales going on here. It might be prohibitively difficult, for fundamental reasons, to move from working-mind-RAM of size 5 to size 10. It may be that artificial intelligence research progresses so slowly that we never even see an intelligence explosion--just a gently sloped intelligence rise over the next few millennia. But I do think it's maybe not a mistake, but certainly naive, to just proclaim, "Of course we'll be able to understand them, we are generalized reasoners!"

Edit: I should add that this is already a problem for, ironically, computer-assisted theorem proving. If a computer produces a 10,000,000 page "proof" of a mathematical theorem (i.e., something far longer than any human could check by hand), you're putting a huge amount of trust in the correctness of the theorem-proving software itself.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 112 · 2015-02-25T21:54:36.535Z · LW · GW

Perhaps because this might all be happening within the mirror, thus realizing both Harry!Riddle's and Voldy!Riddle's CEVs simultaneously.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 110 · 2015-02-24T20:16:46.279Z · LW · GW

It seems like Mirror-Dumbledore acted in accordance with exactly what Voldemort wanted to see. In fact, Mirror-Dumbledore didn't even reveal any information that Voldemort didn't already know or suspect.

Odds of Dumbledore actually being dead?

Comment by calef on How to debate when authority is questioned, but really not needed? · 2015-02-23T03:34:16.015Z · LW · GW

Honestly, the only "winning" strategy here is to not argue with people on the comments sections of political articles.

If you must, try to cast the argument in a way that avoids the standard red tribe / blue tribe framing. Doing this can be hard because people generally aren't in the business of debating politics with the end goal of dissolving an issue--they just want to signal their tribe--hence why arguing on the internet is often a waste of time.

As to the question of authority: how would you expect the conversation to go if you were an economist?

Me: I think money printing by the Fed will cause inflation if they continue like this.

Random commenter: Are you an economist?

Me: Yes actually, I have a PhD in The Economy from Ivy League University.

Random commenter (possible response 1): I don't believe you, and continue to believe what I believe.

Random commenter (possible response 2): Oh well that's one of the (Conservative / Liberal) (pick one) schools, they're obviously wrong and don't know what they're talking about.

Random commenter (possible response 3): Economists obviously don't know what they're talking about.

Again, it's a mix of Dunning-Kruger and tribal signalling. There's not actually any direction an appeal-to-authority debate can go that's productive because the challenger has already made up their mind about the facts being discussed.

For a handful of relevant lesswrong posts:

http://lesswrong.com/lw/axn/6_tips_for_productive_arguments/
http://lesswrong.com/lw/gz/policy_debates_should_not_appear_onesided/
http://lesswrong.com/lw/3k/how_to_not_lose_an_argument/

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-17T05:22:02.844Z · LW · GW

Yeah, it's already been changed:

A blank-eyed Professor Sprout had now risen from the ground and was pointing her own wand at Harry.

Comment by calef on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:32:50.335Z · LW · GW

So when Dumbledore asked the Marauder's Map to find Tom Riddle, did it point to Harry?

Comment by calef on Comments on "When Bayesian Inference Shatters"? · 2015-01-07T23:52:44.546Z · LW · GW

Here's a discussion of the paper by the authors. For a sort of critical discussion of the result, see the comments in this blog post.

Comment by calef on Entropy and Temperature · 2014-12-18T20:31:04.267Z · LW · GW

This is a good point. The negative side gives good intuition for the "negative temperatures are hotter than any positive temperature" argument.

Comment by calef on Entropy and Temperature · 2014-12-18T20:28:16.915Z · LW · GW

The distinction here goes deeper than calling a whale a fish (I do agree with the content of the linked essay).

If a layperson asks me what temperature is, I'll say something like, "It has to do with how energetic something is" or even "something's tendency to burn you". But I would never say "It's the average kinetic energy of the translational degrees of freedom of the system" because they don't know what most of those words mean. That latter definition is almost always used in the context of, essentially, undergraduate problem sets as a convenient fiction for approximating the real temperature of monatomic ideal gases--which, again, is usually a stepping stone to the thermodynamic definition of temperature as a partial derivative of entropy.

Alternatively, we could just have temperature(lay person) and temperature(precise). I will always insist on temperature(precise) being the entropic definition. And I have no problem with people choosing whatever definition they want for temperature(lay person) if it helps someone's intuition along.
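
Concretely, the two definitions being contrasted are the standard textbook statements (written here in my own notation):

```latex
% temperature(precise): the entropic definition, valid for any system in
% equilibrium. Temperature is the reciprocal of how fast entropy grows
% with internal energy at fixed volume and particle number.
\frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N}

% temperature(lay/kinetic): the special case this reduces to for a classical
% monatomic ideal gas, where the average translational kinetic energy per
% particle is proportional to T.
\langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\, k_B T
```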

Comment by calef on Entropy and Temperature · 2014-12-18T08:17:19.702Z · LW · GW

Because one is true in all circumstances and the other isn't? What are you actually objecting to? That physical theories can be more fundamental than each other?

Comment by calef on Entropy and Temperature · 2014-12-18T05:06:13.935Z · LW · GW

I just mean as definitions of temperature. There's temperature(from kinetic energy) and temperature(from entropy). Temperature(from entropy) is a fundamental definition of temperature. Temperature(from kinetic energy) only tells you the actual temperature in certain circumstances.

Comment by calef on Entropy and Temperature · 2014-12-18T03:18:03.471Z · LW · GW

Only one of them actually corresponds with temperature for all objects. They are both equal for one subclass of idealized objects, in which case the "average kinetic energy" definition follows from the entropic definition, not the other way around. All I'm saying is that it's worth emphasizing that one definition is strictly more general than the other.

Comment by calef on Entropy and Temperature · 2014-12-17T23:51:38.806Z · LW · GW

I think more precisely, there is such a thing as "the average kinetic energy of the particles", and this agrees with the more general definition of temperature "1 / (derivative of entropy with respect to energy)" in very specific contexts.

That there is a more general definition of temperature which is always true is worth emphasizing.

Comment by calef on Entropy and Temperature · 2014-12-17T23:46:26.734Z · LW · GW

I don't see the issue in saying [you don't know what temperature really is] to someone working with the definition [T = average kinetic energy]. One definition of temperature is always true. The other is only true for idealized objects.

Comment by calef on Stupid Questions December 2014 · 2014-12-08T22:59:42.852Z · LW · GW

According to http://arxiv.org/abs/astro-ph/0503520 we would need to be able to boost our current orbital radius to about 7 AU.

This would correspond to a change in specific orbital energy from 132712440018/(2(1 AU)) to 132712440018/(2(7 AU)) (where the 12-digit constant is the standard gravitational parameter of the sun in km^3/s^2). This is something like 5.6 * 10^10 in Joules / Kilogram, or about 3.4 * 10^34 Joules when we restore the reduced mass of the earth/sun system (which I'm approximating as just the mass of the earth).

Wolframalpha helpfully supplies that this is 28 times the total energy released by the sun in 1 year.

Or, if you like, it's equivalent to the total mass-energy of ~3.7 * 10^18 kilograms of matter (about 1.5% of the mass of the asteroid Vesta).

So until we're able to harness and control energy on the order of magnitude of the sun's total energetic output over multiple years, we won't be able to do this.

There might be an exceedingly clever way to do this by playing with orbits of nearby asteroids to perturb the orbit of the earth over long timescales, but the change in energy we're talking about here is pretty huge.