## Posts

an ethical puzzle about brain emulation 2013-12-13T21:53:10.881Z
What would defuse unfriendly AI? 2011-06-10T07:27:12.623Z

Comment by asr on Open thread, Aug. 03 - Aug. 09, 2015 · 2015-08-04T03:26:10.716Z · LW · GW

It's a tempting thought. But I think it's hard to make the math work that way.

I have a lovely laptop here that I am going to give you. Suppose you assign some utility U to it. Now instead of giving you the laptop, I give you a lottery ticket or the like. With probability P I give you the laptop, and with probability 1 - P you get nothing. (The lottery drawing will happen immediately, so there's no time-preference aspect here.) What utility do you attach to the lottery ticket? The natural answer is P * U, and if you accept some reasonable assumptions about preferences, you are in fact forced to that answer. (This is the basic intuition behind the von Neumann-Morgenstern Expected Utility Theorem.)

Given that probabilities are real numbers, it's hard to avoid utilities being real numbers too.
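To make the lottery arithmetic concrete (the utility numbers here are of course made up):

```python
# Illustrative vNM calculation: the utility of a lottery ticket is the
# probability-weighted utility of its outcomes ("nothing" is worth 0 here).
def lottery_utility(p: float, u_laptop: float, u_nothing: float = 0.0) -> float:
    return p * u_laptop + (1 - p) * u_nothing

U = 100.0  # made-up utility assigned to the laptop
print(lottery_utility(1.0, U))  # the laptop outright: 100.0
print(lottery_utility(0.5, U))  # a fair coin flip for it: 50.0
```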

Comment by asr on Recovering the past · 2015-03-13T03:32:00.872Z · LW · GW

> This is because the current position, direction, and speed of an atom (and all other measurements that can be done physically) are only possible with one and only one specific history of everything else in the universe.

This seems almost certainly false. You can measure those things to only finite precision -- there is a limit to the number of bits you can get out of such a measurement. Suppose you measure position and velocity to one part in a billion in each of three dimensions. That's only around 200 bits -- hardly enough to distinguish all possible universal histories.
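Spelling out that estimate, using the one-part-in-a-billion precision assumed above:

```python
import math

# Back-of-the-envelope: bits of information from measuring three position
# components and three velocity components, each to one part in a billion.
bits_per_component = math.log2(1e9)   # about 29.9 bits per measurement
total_bits = 6 * bits_per_component
print(round(total_bits))              # ~179, i.e. "only around 200 bits"
```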

Comment by asr on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 119 · 2015-03-11T14:11:30.789Z · LW · GW

Good point. A time limit of 3:54 does seem too arbitrary to be hard-coded.

Hrm. Maybe it's exactly one Atlantean time unit? Unsafe to assume that the units we are used to are the same units that the Stone's maker would find natural.

Comment by asr on Harry Potter and the Methods of Rationality discussion thread, March 2015, chapter 116 · 2015-03-04T22:03:13.882Z · LW · GW

> I bet Hermione is just going to love being the center of all the attention and scrutiny this will bring on her.

She came back from the dead. Gonna be a lot of attention and scrutiny regardless.

Comment by asr on Towards a theory of nerds... who suffer. · 2015-03-03T15:42:37.168Z · LW · GW

> I have this impression - parenting hardly ever discussed on LW - that most of the community has no children.

Let me give you an alternate explanation. Being a parent is very time-consuming. It also tends to draw one's interest to different topics than are typically discussed here. In consequence, LW readers aren't a random sample of nerds, or even of people in the general social orbit of the LW crowd. I would not draw any adverse inferences from the fact that a non-parenting-related internet forum tends to be depleted of parents.

Comment by asr on Natural Selection of Government Systems · 2015-02-09T05:55:17.637Z · LW · GW

This graph would be more interesting and persuasive with a better caption.

Comment by asr on Purchasing research effectively open thread · 2015-01-21T21:24:53.705Z · LW · GW

> data scientists / statisticians mostly need access to computing power, which is fairly cheap these days.

This is true for each marginal data scientist. But there's a catch: those folks need data. Collecting and promulgating that data, in the application domains we care about, can sometimes be very costly. You might want to count some of those costs as part of the cost of the data science.

For example, many countries are spending a huge amount of money on electronic health records, in part to allow better data mining. The health records aren't primarily for scientific purposes, but making them researcher-friendly is a big indirect cost. Similarly, the census is a very expensive data-collection process that enables a lot of "cheap" analytics downstream.

While each data scientist might be cheap, there was a big up-front investment, at the national level, to enable them.

Comment by asr on What topics are appropriate for LessWrong? · 2015-01-21T18:27:45.447Z · LW · GW

> Um, yes for most definitions of "rational". That's why [autism] is considered a disability.

Hrm? A disability is something that limits the disabled individual from a socially recognized set of normal actions. The term 'disability' alone doesn't imply anything about reasoning or cognitive skills. It seems at best un-obvious, and more likely false, that "rationality" encompasses all cognitive functions.

Some people have dyslexia; that is certainly a cognitive disability. It would be strange (not to say offensive) to describe dyslexic individuals as per se irrational. I suspect similarly for, say, dyscalculia. Or for that matter, short-term memory problems.

Autism is a big complicated bundle of traits and behaviors. Why are those behaviors "irrational" in a way that dyslexia isn't?

Comment by asr on The Unique Games Conjecture and FAI: A Troubling Obstacle · 2015-01-20T22:22:09.412Z · LW · GW

One of the unfortunate limitations of modern complexity theory is that a set of problems that look isomorphic sometimes have very different complexity properties. Another awkwardness is that worst-case complexity isn't a reliable guide to practical difficulty. "This sorta feels like a coloring problem" isn't enough to show it's intractable on the sort of instances we care about.

Separately, it's not actually clear to me whether complexity is good or bad news. If you think that predicting human desires and motivations is infeasible computationally, you should probably worry less about super intelligent AI, since that complexity barrier will prevent the AI from being radically effective at manipulating us.

It would seem to require an unusually malicious universe for a superhuman AI to be feasible, for that AI to be able to manipulate us efficiently, but for it to be infeasible for us to write a program to specify constraints that we would be happy with in retrospect.

Comment by asr on Some recent evidence against the Big Bang · 2015-01-09T07:10:38.426Z · LW · GW

> I just observe that a lot of cosmology seems to be riding on the theory that the red shift is caused by an expanding universe.

This seems wrong to me. There are at least two independent lines of evidence for the Big Bang theory besides redshifts -- isotope abundances (particularly for light elements) and the cosmic microwave background radiation.

> What if light just loses energy as it travels, so that the frequency shifts lower?

We would have to abandon our belief in energy conservation. And we would then wonder why energy seems to be conserved exactly in every interaction we can see. Also, we would wonder why we see spontaneous redshifts but never spontaneous blueshifts. Every known micro-scale physical process in the universe is reversible [1], and by the CPT theorem, we expect this to be true always. A lot would have to be wrong with our notions of physics for light to "just lose energy."

> That seems like a perfectly natural solution. How do we know it isn't true?

This solution requires light from distant galaxies to behave in ways totally different from every other physical process we know about -- including physical processes in distant galaxies. It seems unnatural to say "the redshift is explained by a totally new physical process, and this process violates a lot of natural laws that hold everywhere else."

[1] I should say, reversible assuming you also flip the charges and parities. That's irrelevant here, though, since photons are uncharged and don't have any special polarization.

Comment by asr on Exams and Overfitting · 2015-01-08T22:16:37.100Z · LW · GW

Speaking as a former algorithms-and-complexity TA --

Proving something is in NP is usually trivial, but probably would be worth a point or two. The people taking complexity at a top-tier school have generally mastered the art of partial credit and know to write down anything plausibly relevant that occurs to them.

Comment by asr on Some recent evidence against the Big Bang · 2015-01-08T07:04:23.728Z · LW · GW

> What if light just loses energy as it travels, so that the frequency shifts lower? That seems like a perfectly natural solution. How do we know it isn't true?

As gjm mentions, the general name for this sort of theory is "tired light." These theories have been studied extensively, and they are broken.

We have a very accurate, very well-tested theory that describes the way photons behave, quantum electrodynamics. It predicts that photons in the vacuum have a constant frequency and don't suddenly vanish. Nor do photons have any sort of internal "clock" for how long they have been propagating. As near as I can tell, any sort of tired light model means giving up QED in fairly fundamental ways, and the evidentiary bar to overturn that theory is very high.

Worse, tired light seems to break local energy conservation. If photons just vanish or spontaneously redshift, where does the energy go?

I can conceive of there being a tired light model that isn't ruled out by experiment, but I would like to see that theory before I junk all of 20th century cosmology and fundamental physics.

Most scientific theories, most of the time, have a whole bunch of quirky observations that they don't explain well. Mostly these anomalies gradually go away as people find bugs in the experiments, or take into account various effects they hadn't considered. The astronomical anomalies you point to don't seem remotely problematic enough to give up on modern physics.

Comment by asr on Open thread, Dec. 22 - Dec. 28, 2014 · 2014-12-24T05:16:49.983Z · LW · GW

"Falling in love" isn't a sudden thing that just happens; it's a process, and one that is assisted if the other person is encouraging and feels likewise. Put another way, when the object of your affection is uninterested, that's often a turnoff, and so one then looks elsewhere.

Comment by asr on Entropy and Temperature · 2014-12-19T06:06:32.832Z · LW · GW

There is a peculiar consequence of this, pointed out by Cosma Shalizi. Suppose we have a deterministic physical system S, and we observe this system carefully over time. We are steadily gaining information about its microstates, and therefore by this definition, its entropy should be decreasing.

You might say, "the system isn't closed, because it is being observed." But consider the system "S plus the observer." Saying that entropy is nondecreasing over time seems to require that the observer is in doubt about its own microstates. What does that mean?

Comment by asr on Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda · 2014-12-04T04:19:02.019Z · LW · GW

Russell is an entirely respectable and mainstream researcher, at one of the top CS departments. It's striking that he's now basically articulating something pretty close to the MIRI view. Can somebody comment on whether Russell has personally interacted with MIRI?

If MIRI's work played a role in convincing people like Russell, that seems like a major accomplishment and a demonstration that they have arrived as part of the academic research community. If Russell came to that conclusion on his own, MIRI should still get a fair bit of praise for getting there first and saying it before it was respectable.

In either case, my congratulations to the folks at MIRI and I will up my credence in them, going forwards. (They've been rising steadily in my estimation for the last several years; this is just one of the more dramatic bumps.)

Comment by asr on 2014 Less Wrong Census/Survey · 2014-10-31T00:19:42.392Z · LW · GW

Did the survey. Mischief managed.

Comment by asr on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-19T18:16:56.979Z · LW · GW

> Did you read about Google's partnership with NASA and UCSD to build a quantum computer of 1000 qubits?
>
> Technologically exciting, but ... imagine a world without encryption. As if all locks and keys on all houses, cars, banks, nuclear vaults, whatever, disappeared, only incomparably more consequential.

My understanding is that quantum computers are known to be able to break RSA and elliptic-curve-based public-key crypto systems. They are not known to be able to break arbitrary symmetric-key ciphers or hash functions. You can do a lot with symmetric-key systems -- Kerberos doesn't require public-key authentication. And you can sign things with Merkle signatures.

There are also a number of candidate public-key cryptosystems that are believed secure against quantum attacks.
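As a concrete illustration of hash-based signing, here's a toy Lamport one-time signature, the kind of building block Merkle signatures are constructed from. This is a sketch for intuition, not production cryptography:

```python
import hashlib
import os

# Toy Lamport one-time signature: security rests only on the hash function,
# which (unlike RSA or elliptic curves) is not known to fall to quantum attacks.
# Real schemes (e.g. Merkle trees, XMSS) add machinery for signing many messages.

def keygen():
    # 256 pairs of random secrets; the public key is their hashes
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    # hash the message and extract its 256 bits
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # reveal one secret from each pair, chosen by the message bits
    return [pair[b] for pair, b in zip(sk, msg_bits(msg))]

def verify(msg: bytes, sig, pk):
    # each revealed secret must hash to the matching half of the public key
    return all(hashlib.sha256(s).digest() == pair[b]
               for s, pair, b in zip(sig, pk, msg_bits(msg)))
```

Note the "one-time" caveat: signing two different messages with the same key reveals too many secrets, which is exactly the problem Merkle-tree constructions solve.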

So I think we shouldn't be too apocalyptic here.

Comment by asr on [LINK] Could a Quantum Computer Have Subjective Experience? · 2014-08-26T20:45:47.852Z · LW · GW

Taking up on the "level above mine" comments -- Scott is a very talented and successful researcher. He also has tenure and can work on what he likes. The fact that he considers this sort of philosophical investigation worth his time and attention makes me upwardly revise my impression of how worthwhile the topic is.

Comment by asr on [meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff · 2014-08-18T02:32:35.713Z · LW · GW

Points 1 and 2 are reasonably clear. Point 3 is unhelpfully vague. If I were moderator, I would have no idea how far that authority extends, and as a commenter I wouldn't have much insight into what to avoid.

I don't mind giving a catch-all authority to a moderator, but if there are specific things you have in mind that are to be avoided, it's probably better to enumerate them.

I would add an explicit "nothing illegal, nothing personally threatening" clause. Those haven't been problems, but it seems better to remind people and to make clear we all agree on that as a standard.

Comment by asr on Saving the World - Progress Report · 2014-08-01T19:46:37.468Z · LW · GW

Interesting. Can you say more about how your work compares to existing VMs, such as the JVM, and what sorts of things you want to prove about executions?

Comment by asr on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-11T05:28:24.271Z · LW · GW

Doing an audit to catch all vulnerabilities is monstrously hard. But finding some vulnerabilities is a perfectly straightforward technical problem.

It happens routinely that people develop new and improved vulnerability detectors that can quickly find vulnerabilities in existing codebases. I would be unsurprised if better optimization engines in general lead to better vulnerability detectors.

Comment by asr on Open thread, 9-15 June 2014 · 2014-06-30T02:12:35.882Z · LW · GW

Having a top-level domain doesn't make an entity a country. Lots of indisputably non-countries have top-level domains. Nobody thinks the Bailiwick of Guernsey is a country, and yet .gg exists.

Comment by asr on Will AGI surprise the world? · 2014-06-25T15:09:26.822Z · LW · GW

> To do that it's going to need a decent sense of probability and expected utility. Problem is, OpenCog (and SOAR, too, when I saw it) is still based in a fundamentally certainty-based way of looking at AI tasks, rather than one focused on probability and optimization.

I don't see why this follows. It might be that mildly smart random search, plus a theorem prover with a fixed timeout, plus a benchmark, delivers a steady stream of useful optimizations. The probabilistic reasoning and utility calculation might be implicit in the design of the "self-improvement-finding submodule", rather than an explicit part of the overall architecture. I don't claim this is particularly likely, but neither does undecidability seem like the fundamental limitation here.

Comment by asr on Will AGI surprise the world? · 2014-06-25T14:04:43.391Z · LW · GW

> But it would have a very hard time strengthening its core logic, as Rice's Theorem would interfere: proving that certain improvements are improvements (or, even, that the optimized program performs the same task as the original source code) would be impossible.

This seems like the wrong conclusion to draw. Rice's theorem (and other undecidability results) imply that there exist optimizations that are safe but cannot be proven safe. It doesn't follow that most optimizations are hard to prove. One imagines that software could do what humans do -- hunt around in the space of optimizations until one looks plausible, try to find a proof, and then if it takes too long, try another. This won't necessarily enumerate the set of provably safe optimizations (much less the set of all safe optimizations), but it will produce some.
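A toy version of that hunt-and-check loop, with bounded exhaustive testing standing in for a real proof search with a timeout (the function and candidates are invented purely for illustration):

```python
# Generate-and-check optimization loop: candidate rewrites of a function are
# accepted only if an (artificially bounded) equivalence check succeeds.
def slow_double(x):
    # the deliberately inefficient program to optimize: computes 2 * x
    return sum(2 for _ in range(x))

candidates = [
    lambda x: x * 2,   # safe rewrite
    lambda x: x + x,   # safe rewrite
    lambda x: x ** 2,  # unsafe: agrees with 2*x only at x = 0 and x = 2
]

def provably_equivalent(f, g, budget=1000):
    # stand-in for a proof search with a timeout: bounded exhaustive checking
    return all(f(x) == g(x) for x in range(budget))

accepted = [c for c in candidates if provably_equivalent(slow_double, c)]
```

Undecidability only guarantees the loop will miss some safe rewrites; it doesn't stop the loop from harvesting the easy ones.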

Comment by asr on Vector psychology · 2014-06-24T04:10:21.633Z · LW · GW

You might look into all the work that's been done with functional MRI analysis of the brain -- your post reminds me of that. The general technique of "watch the brain and see which regions have activity correlated with various mental states" is well known -- well enough known that all sorts of limitations and statistical difficulties have been pointed out (see Wikipedia for citations).

Comment by asr on Utilitarianism and Relativity Realism · 2014-06-24T00:35:08.000Z · LW · GW

> In other words, even if this is completely correct, it doesn't disprove relativity. Rather, it disproves either relativity or most versions of utilitarianism--pick one.

It seems like all it shows is that we ought to keep our utility functions Lorentz-invariant. Or, more generally, when we talk about consequentialist ethics, we should only consider consequences that don't depend on aspects of the observer that we consider irrelevant.

Comment by asr on Open thread, 23-29 June 2014 · 2014-06-23T21:22:10.005Z · LW · GW

> I'm curious if anyone has made substantial effort to reach a 'flow' state in tasks outside of coding, like reading or doing math etc etc., and what they learned. Are there easy tricks? Is it possible? Is flow just a buzzword that doesn't really mean anything?

I find reading is just about the easiest activity to get into that state with. I routinely get so absorbed in a book that I forget to move. And I think that's the experience of most readers. It's a little harder with programming actually, since there are all these pauses while I wait for things to compile or run, and all these times when I have to switch to a web browser to look something up. With reading, you can just keep turning pages.

Comment by asr on Conservation of expected moral evidence, clarified · 2014-06-20T16:40:00.958Z · LW · GW

> The canonical example is that of a child who wants to steal a cookie. That child gets its morality mainly from its parents. The child strongly suspects that if it asks, all parents will indeed confirm that stealing cookies is wrong. So it decides not to ask, and happily steals the cookie.

I find this example confusing. I think what it shows is that children (humans?) aren't very moral. The reason the child steals instead of asking isn't anything to do with the child's subjective moral uncertainty -- it's that the penalty for stealing-before-asking is lower than stealing-after-asking, and the difference in penalty is enough to make "take the cookie and ask forgiveness if caught" better than "ask permission".

I suspect this is related to our strong belief in being risk-averse when handing out penalties. If I think there's a 50% chance my child misbehaved, the penalty won't be 50% of the penalty if they were caught red-handed. Often, if there's substantial uncertainty about guilt, the penalty is basically zero -- perhaps a warning. Here, the misbehavior is "doing a thing you knew was wrong;" even if the child knows the answer in advance, when the child explicitly asks and is refused, the parent gets new evidence about the child's state of mind, and this is the evidence that really matters.
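With made-up numbers, the expected-penalty asymmetry looks like this:

```python
# Illustrative incentive calculation: asking first converts deniable
# misbehavior into unambiguous defiance, which is punished far more heavily.
P_CAUGHT = 0.5
PENALTY_DEFIANT = 10.0   # stole after explicitly being told no
PENALTY_DENIABLE = 1.0   # stole without asking; guilt stays uncertain

expected_if_asked_first = P_CAUGHT * PENALTY_DEFIANT   # 5.0
expected_if_never_asked = P_CAUGHT * PENALTY_DENIABLE  # 0.5

# "take the cookie and ask forgiveness if caught" dominates
assert expected_if_never_asked < expected_if_asked_first
```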

I suspect this applies to the legal system and society more broadly as well -- because we don't hand out partial penalties for possible guilt, we encourage people to misbehave in ways that are deniable.

Comment by asr on Against utility functions · 2014-06-20T16:28:36.515Z · LW · GW

> Without talking about utility functions, we can't talk about expected utility maximization, so we can't define what it means to be ideally rational in the instrumental sense

I like this explanation of why utility-maximization matters for Eliezer's overarching argument. I hadn't noticed that before.

But it seems like utility functions are an unnecessarily strong assumption here. If I understand right, expected utility maximization and the related theorems imply that if you have a complete preference ordering over outcomes, and probabilities that tell you how decisions influence outcomes, then you have implicit preferences over decisions.

But even if you have only partial information about outcomes and partial preferences, you still have some induced ordering of the possible actions. We lose the ability to show that there is always an optimal 'rational' decision, but we can still talk about instances of irrational decision-making.

Comment by asr on Against utility functions · 2014-06-20T15:58:28.532Z · LW · GW

I appreciate you writing this way -- speaking for myself, I'm perfectly happy with a short opening claim and then the subtleties and evidence emerges in the following comments. A dialogue can be a better way to illuminate a topic than a long comprehensive essay.

Comment by asr on Open thread, 16-22 June 2014 · 2014-06-20T13:58:11.411Z · LW · GW

Comment by asr on [tangential] Bitcoin: GHash just hit 51% · 2014-06-14T18:26:15.501Z · LW · GW

The attack that people are worrying about involves control of a majority of mining power, not control of a majority of the coins. So the seized bitcoins are irrelevant. The way the attack works is that the attacker generates a forged chain of bitcoin blocks, containing nonsense transactions or silently dropping transactions that already happened. Because the attacker controls a majority of mining power, this forged chain will be the longest chain, and therefore a correct bitcoin implementation would try to follow it, with bad effects. This in turn would break the existing bitcoin network.
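The chain-selection rule at the heart of this can be sketched in a few lines (a toy model; real clients compare cumulative proof-of-work, not raw length):

```python
# Toy longest-chain rule: a correct client adopts whichever valid chain is
# longest, regardless of whose transactions it carries.
def choose_chain(chains):
    return max(chains, key=len)

honest_chain   = ["genesis", "a1", "a2", "a3"]
attacker_chain = ["genesis", "b1", "b2", "b3", "b4"]  # majority hashpower grows faster

winner = choose_chain([honest_chain, attacker_chain])
assert winner is attacker_chain  # the forged chain wins by construction
```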

The government almost certainly has enough compute power to mount this attack if they want.

Comment by asr on Curiosity: Why did you mega-downvote "AI is Software" ? · 2014-06-05T19:06:53.943Z · LW · GW

I didn't down-vote, but was tempted to. The original post seemed content-free. It felt like an attempt to start a dispute about definitions and not a very interesting one.

It had an additional flaw, which is that it presented its idea in isolation, without any context on what the author was thinking, or what sort of response the author wanted. It didn't feel like it raised a question or answered a question, and so it doesn't really contribute to any discussion.

Comment by asr on Open thread, 3-8 June 2014 · 2014-06-04T02:44:42.163Z · LW · GW

The only reasons I can think of are your #1 and #2. But I think both are perfectly good reasons to vote...

Comment by asr on Open Thread, May 19 - 25, 2014 · 2014-05-23T15:55:39.453Z · LW · GW

> Think about the continuum between what we have now and the free market (where you can control exactly where your money goes), and it becomes fairly clear that the only points which have a good reason to be used are the two extreme ends. If you advocate a point in the middle, you'll have a hard time justifying the choice of that particular point, as opposed to one further up or down.

I don't follow your argument here. We have some function that maps from "levels of individual control" to happiness outcomes. We want to find the maximum of this function. It might be that the endpoints are the max, or it might be that the max is in the middle.

Yes, it might be that there is no good justification for any particular precise value. But that seems both unsurprising and irrelevant. If you think that our utility function here is smooth, then sufficiently near the max, small changes in the level of social control would result in negligible changes in outcome. Once we're near enough the maximum, it's hard to tune precisely. What follows from this?

Comment by asr on Can noise have power? · 2014-05-23T14:01:12.853Z · LW · GW

> Eliezer thinks the phrase 'worst case analysis' should refer to the 'omega' case.

"Worst case analysis" is a standard term of art in computer science, that shows up as early as second-semester programming, and Eliezer will be better understood if he uses the standard term in the standard way.

A computer scientist would not describe the "omega" case as random -- if the input is correlated with the random number source in a way that is detectable by the algorithm, it is by definition not random.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-10T04:06:32.387Z · LW · GW

Yes. Perhaps we might say, this is what middle school or high school science should be.

Likewise, direct demonstrations are the sort of thing I wish science museums focused on more. Often they have 75% of it, but the story of "this experiment shows X" gets lost in the "whoa, cool." I'm in favor of neat stuff, but I wish they explained better what insight the viewer should take away.

Comment by asr on Sortition - Hacking Government To Avoid Cognitive Biases And Corruption · 2014-05-08T05:49:18.500Z · LW · GW

Juries have a lot of "professional supervision." In the Common Law system, the judge restricts who can serve on the jury, determines the relevant law, tells the jury what specific question of fact they are deciding, controls the evidence shown to the jury, does the sentencing, and more. My impression is that the non-Common Law systems that use juries give them even less discretion. So when we have citizen-volunteers, we get good results only by very carefully hemming them in with professionals.

You can't supervise the executive in the same way. By definition, the executive is the part of the government in control of the coercive apparatus. If the nominal executives aren't able to give orders to the military without the approval of some other body, then the nominal executives aren't really in charge; they're just constitutional decoration, like the modern British monarchy, or the Presidium of the USSR.

Comment by asr on Sortition - Hacking Government To Avoid Cognitive Biases And Corruption · 2014-05-08T05:36:18.387Z · LW · GW

I found this post hard to follow. It would be more intelligible if you gave a clearer explanation of what problem you are trying to solve. Why exactly is it bad to have the same people look for problems and fix them? Why is it bad to have a legislature that can revise and amend statutes during the voting process?

I also don't really understand what sort of comment or feedback you are expecting here. Do you want us to discuss whether this lottery-and-many-committees structure is in general a good idea? Do you want us to critique through the details of your scheme?

The scheme seems to have certain advantages and certain disadvantages. I am personally quite skeptical; I would need to see lottery-based administration work at a small scale before I tried it on anything larger than a village.

What sort of evidence would convince you that this was, on balance, a bad idea?

Comment by asr on Arguments and relevance claims · 2014-05-07T19:52:37.192Z · LW · GW

I basically agree, but I think the point is stronger if framed differently:

Some defects in an argument are decisive, and others are minor. In casual arguments, people who nitpick are often unclear both to themselves and to others whether their objections are to minor correctable details, or seriously undermine the claim in question.

My impression is that mathematicians, philosophers, and scientists are conscious of this distinction and routinely say things like "the paper is a little sloppy in stating the conclusions that were proved, but this can be fixed easily" or "there's a gap in the argument here and I think it's a really serious problem." Outside professional communities, I don't see people make distinctions between fatal and superficial flaws in somebody else's argument.

In summary: I think your post is a good one but with minor correctable flaws.

Comment by asr on Open thread, 21-27 April 2014 · 2014-05-06T13:48:41.585Z · LW · GW

> The idea that you can reasonably protect your anonymity by using a nickname is naive.

I think not so naive as all that. The effectiveness of a security measure depends on the threat. If your worry is "employers searching for my name or email address" then a pseudonym works fine. If your worry is "law enforcement checking whether a particular forum post was written by a particular suspect," then it's not so good. And if your worry is "they are wiretapping me or will search my computer", then the pseudonym is totally unhelpful.

I think in most LW contexts -- including drug discussions -- the former model is a better match. My impression is that security clearance investigations in the United States involve a lot of interviews with friends and family, but, at the present time, don't involve highly sophisticated computer analysis.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-02T15:36:15.695Z · LW · GW

This is incredibly cool and it makes me sad that I've never seen this experiment done in a science museum, physics instructional lab, or anywhere else.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-02T15:33:50.564Z · LW · GW

This is actually a really good example of what I wanted.

I think I have a lot of reason to believe v = f·λ -- it follows pretty much from the definition of "wave" and "wavelength". And I think I can check the frequency of my microwave without any direct assumptions about the speed of light, using an oscilloscope or somesuch.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-02T15:31:31.000Z · LW · GW

> But yes, you are correct, as long as your main criterion is something like "compelling at an emotional level", you should expect that different people understand it very differently.

This actually brings out something I had never thought about before. When I am reading or reviewing papers professionally, mostly the dispute between reviewers is about how interesting the topic is, not about whether the evidence is convincing. Likewise my impression about the history of physics is that mostly the professionals were in agreement about what would constitute evidence.

So it's striking that when I put aside my "working computer scientist" hat and put on my "amateur natural scientist" hat, suddenly that consensus goes away and everybody disagrees about what's convincing.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-02T14:44:20.975Z · LW · GW

> Well, you could use your smartphone's accelerometer to verify the equations for centrifugal force, or its GPS to verify parts of special and general relativity, or the fact that its chip functions to verify parts of quantum mechanics.

These don't feel like they are quite comparable to each other. I do really trust the accelerometer to measure acceleration. If I take my phone on the merry-go-round and it says "1.2 G", I believe it. I trust my GPS to measure position. But I only take on faith that the GPS has to account for time dilation to work right -- I don't really know anything about the internals of the GPS, and so "trust us, it works via relativity" isn't really compelling at an emotional level. For somebody who worked with GPS and really knew the internals of the receiver, this might be a more compelling example.

> But I'm not sure how you can legitimately claim to be verifying anything; if you don't trust those laws how can you trust the phone? It would be like using a laser rangefinder to verify the speed of light. For this sort of thing the fact that your equipment functions is better evidence that the people who made it know the laws of physics, than any test you could do with it.

Yes of course. In real life I'm perfectly happy to take on faith that everything in my undergraduate physics textbooks was true. But I want to experience it, not just read about it. And I think "my laser rangefinder works correctly" doesn't feel like experiencing the speed of light. In contrast, building my own rangefinder with a laser and a timing circuit would count as experiencing the speed of light.

I am starting to worry that my criteria for "experience" are idiosyncratic and that different people would find very different science demonstrations compelling.

Comment by asr on Rebutting radical scientific skepticism · 2014-05-02T06:43:43.594Z · LW · GW

Another advantage of replicating the original discovery is that you don't accidentally use unverified equipment or discoveries (ie equipment dependent on laws that were unknown at the time).

I don't consider this an advantage. My goal is to find vivid and direct demonstrations of scientific truths, and so I am happy to use things that are commonplace today, like telephones, computers, cameras, or what-have-you.

That said, I certainly would be interested in hearing about cases where there's something easy to see today that used to be hard -- is there something you have in mind?

Comment by asr on Rebutting radical scientific skepticism · 2014-04-30T21:16:38.158Z · LW · GW

Various ways to measure the speed of light. Many require few modern implements. How to measure constancy of the speed of light -- the original experiment, does not require any complicated or mysterious equipment, only careful design.

The early measurements of the speed of light don't require "modern implements." They do require quite sophisticated engineering or measurement. In particular, the astronomical measurements are not easy at all. Playing the "how would I prove X to myself" game brought home to me just how hard science is. Already by the 18th century, and certainly by the 19th, professional astronomers were sophisticated enough to do measurements I couldn't easily match without extensive practice and a real equipment budget.

Suppose you were going to measure the speed of light by astronomy. Stellar aberration seems like the easiest approach, and that's a shift of 20 arcseconds across a time interval of six months. This is probably within my capacities to measure, but it's the sort of thing you would have to work at. It would be a year-long or years-long observation program requiring close attention to detail. In particular, if I wanted a measurement of the speed of light accurate to within 10% I would need my measurement to have error bars of about 2 arcseconds. I suspect an amateur who knew what they were doing could manage it, but it's not something you would just stumble onto as a casual observation.
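The arithmetic behind those numbers is simple. In the small-angle limit the aberration constant is roughly v_earth / c, so measuring the angle gives the speed of light directly; a sketch using the standard values:

```python
import math

v_earth = 29.8e3       # Earth's orbital speed, m/s
alpha_arcsec = 20.5    # measured aberration constant, arcseconds
alpha = alpha_arcsec * math.pi / (180 * 3600)   # convert to radians

# Small-angle aberration: tan(alpha) ~ alpha ~ v/c, so c ~ v/alpha.
c = v_earth / alpha
# Since c scales as 1/alpha, a 10% error in c corresponds to a
# ~10% error in alpha -- about 2 arcseconds, as noted above.
```

This lands within a fraction of a percent of 3e8 m/s, and makes the error budget explicit: 10% accuracy in c requires knowing the angle to roughly 2 arcseconds.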

Comment by asr on Rebutting radical scientific skepticism · 2014-04-30T20:53:56.053Z · LW · GW

Is there an easily visible consequence of special relativity that you can see without specialized equipment?

In general, things like a smartphone "verify" a great deal of modern science.

Yeah, though the immediacy of the verification will vary. When I use my cell phone, I really feel that information is being carried by radio waves that don't penetrate metal. But I never found the GPS example quite compelling; people assure me "oh yes, we needed relativity to get it to work right," and of course I believe them, but I've never seen the details presented, and so this doesn't impress me at an emotional level.
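The GPS numbers people cite can in fact be reproduced from undergraduate formulas, which is about as close to "seeing the details" as one gets without a receiver: velocity time dilation slows the satellite clocks slightly, while the weaker gravitational potential at orbital altitude speeds them up by more. A back-of-the-envelope sketch with standard constants:

```python
import math

GM = 3.986004e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8             # speed of light, m/s
R_earth = 6.371e6       # mean Earth radius, m
r_gps = 2.6571e7        # GPS orbital radius, m
day = 86400             # seconds per day

v = math.sqrt(GM / r_gps)  # circular-orbit speed, about 3.9 km/s

# Special-relativistic term: moving clocks run slow.
special = -(v**2 / (2 * c**2)) * day * 1e6   # microseconds/day
# Gravitational term: clocks higher in the potential well run fast.
general = (GM / c**2) * (1/R_earth - 1/r_gps) * day * 1e6
net = special + general
```

The net drift comes out near +38 microseconds per day. Since GPS timing has to be good to tens of nanoseconds, the relativistic correction is anything but optional.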

I don't know how much my feelings here are idiosyncratic; how similar are different people in what sorts of observations make a big impression on them?

Just direct observation, by the way, gives you little. Yes, you can observe discontinuous spectra of fluorescent lights. So what? This does not prove quantum mechanics in any way, this is merely consistent with quantum mechanics, just as it is consistent with a large variety of other explanations.

I'm not so sure about "consistent with a large variety of other explanations" -- my impression is that nobody was able to come up with a believable theory of spectroscopy before Bohr. Can you point to a non-quantum explanation that ever seemed plausible? Furthermore, once you say "okay, spectral lines are due to electron energy-level transitions," you wind up intellectually committed to a whole lot of other things, notably the Pauli exclusion principle.
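The quantitative fit is part of what made the energy-level picture compelling: it predicts hydrogen's visible lines to a fraction of a nanometer via the Balmer formula. A sketch:

```python
R_H = 1.0968e7   # Rydberg constant for hydrogen, 1/m

def balmer_nm(n):
    """Wavelength (nm) of the transition from level n down to level 2."""
    inv_wavelength = R_H * (1/4 - 1/n**2)
    return 1e9 / inv_wavelength
```

Here balmer_nm(3) and balmer_nm(4) come out near 656 nm and 486 nm, matching the measured red and blue-green hydrogen lines. Balmer's formula itself was an empirical fit from the 1880s; Bohr's contribution was deriving the constant R_H from electron energy levels.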

Comment by asr on [Sequence announcement] Introduction to Mechanism Design · 2014-04-30T19:41:24.825Z · LW · GW

I would read this if written well.

Comment by asr on Open thread, 21-27 April 2014 · 2014-04-25T17:06:31.754Z · LW · GW

Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not "enable goal-accomplishing actions" for him -- in the Bayes' world as well. Is the Cassandra's world defined by being powerless?

Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, "the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in."

I don't particularly think either of these is likely, but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for "Cassandra World" reasons.