## Posts

Comment by bcoburn on Open thread, September 2-8, 2013 · 2013-09-16T22:58:14.298Z · score: 1 (1 votes) · LW · GW

My first idea is to use something based on cryptography. For example, using the parity of the pre-image of a particular output from a hash function.

That is, the parity of x in this equation:

f(x) = n, where n is your index variable and f is some hash function assumed to be hard to invert.

This does require assuming the hash function is actually hard to invert, but that both seems reasonable and is at least something actual humans can't provide a counterexample to. It's also very fast to go from x to n, so the scheme is easy to verify.
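A minimal sketch of the scheme in Python, using SHA-256 as the stand-in hash function (the names `make_bit` and `verify` are just for illustration, not from any library):

```python
import hashlib

def f(x: int) -> int:
    """Stand-in for a hash function assumed hard to invert."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(32, "big")).digest(), "big")

def make_bit(x: int):
    """Pick a secret x; publish n = f(x). The hidden bit is x's parity."""
    return f(x), x % 2

def verify(x: int, n: int, bit: int) -> bool:
    """Verification is fast: one hash plus a parity check."""
    return f(x) == n and x % 2 == bit

n, bit = make_bit(123456789)
print(verify(123456789, n, bit))  # → True
```

Recovering the bit from n alone requires inverting f, which is exactly the assumption being leaned on.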

Comment by bcoburn on Post ridiculous munchkin ideas! · 2013-05-11T03:30:20.890Z · score: 3 (3 votes) · LW · GW

Obligatory note re: standing desk ergonomics: http://ergo.human.cornell.edu/CUESitStand.html

The lesson seems to be to mostly sit, but stand and walk around every 30-45 minutes or so.

Comment by bcoburn on One thousand tips do not make a system · 2012-12-01T16:49:39.318Z · score: 4 (4 votes) · LW · GW

I think that the main difference between people who do and don't excel at SC2 isn't that experts don't follow algorithms, it's that their algorithms are more advanced/more complicated.

For example, Day[9]'s build-order-focused shows are mostly about filling in the details of the decision tree/algorithm to follow for a specific "build". Or, if you listen to professional players responding to beginners asking for detailed build orders, the answer isn't "just follow your intuition"; it's "this is the order you build things in, spend your money as fast as possible, react in these ways to these situations", which certainly looks like an algorithm to me.

Edit: One other thing regarding practice: We occasionally talk about 10,000 hours and so on, but a key part of that is 10,000 hours of "deliberate practice", which is distinguished from just screwing around as being the sort of practice that lets you generate explicit algorithms.

Comment by bcoburn on How To Have Things Correctly · 2012-10-18T23:41:57.205Z · score: 4 (4 votes) · LW · GW

I actually see a connection between the two: One of the points in the article is to buy experiences rather than things, and Alicorn's post seems to be (possibly among other things) a set of ways to turn things into experiences.

Comment by bcoburn on Minimum viable workout routine · 2012-09-21T02:54:50.692Z · score: 0 (2 votes) · LW · GW

Yes, that is exactly what they are saying. It happens to be the case that this thing works for you. That is only very weak evidence that it works for anyone else at all. All humans are not the same.

We recommend getting over being insulted and frustrated when things that work for you specifically turn out to be flukes; it isn't surprising, and sufficiently internalizing how many actual studies turn out to be flukes would make it the expected result. Reality shouldn't be strange or surprising or insulting!

Comment by bcoburn on Who Wants To Start An Important Startup? · 2012-09-11T01:38:52.656Z · score: 1 (1 votes) · LW · GW

I'm not sure about the rest of the app, but the bookmarklet seems like a ridiculously good idea. The 'trivial inconvenience' of actually making cards for things is really brutal, anything that helps seems like a big deal.

Comment by bcoburn on Your inner Google · 2012-09-06T07:11:46.323Z · score: 0 (0 votes) · LW · GW

Is there a good book/resource in general for trying to learn the meta-model you mention?

Comment by bcoburn on Meta: What do you think of a karma vote checklist? · 2012-09-06T05:51:21.571Z · score: 1 (1 votes) · LW · GW

Of course, this is a straightforward problem to fix in the mechanism design: Just make responses to downvoted comments start at -5 karma, instead of having a direct penalty, as suggested elsewhere. I think that suggestion was for unrelated reasons, but it also fixes this little loophole.

Comment by bcoburn on Dragon Ball's Hyperbolic Time Chamber · 2012-09-05T00:07:22.241Z · score: 1 (1 votes) · LW · GW

It doesn't give many current details, but http://en.wikipedia.org/wiki/Computational_lithography implies that as of 2006, designing the photomask for a given chip required ~100 CPU-years of processing, and presumably that has only gone up.

Etching a 22nm line with 193nm light is a hard problem, and a lot of the techniques used certainly appear to require huge amounts of processing. It's close to impossible to say how much of a bottleneck this particular step is, but given how much simulation it takes to really know what is going on in even simple mechanical design, I would expect every step in chip design to have similar simulation requirements.

Comment by bcoburn on What Are You Doing for Self-Quantification? · 2012-08-31T04:03:50.085Z · score: 0 (0 votes) · LW · GW

Also generates free time! Generally, just trying to walk between classes as fast as possible is probably good, if sprinting seems too scary.

Comment by bcoburn on [deleted post] 2012-08-22T22:28:01.322Z

Me as well.

Comment by bcoburn on Who Wants To Start An Important Startup? · 2012-08-20T05:53:05.002Z · score: 5 (5 votes) · LW · GW

Because it signals that you're the sort of person who feels a need to get certifications, or more precisely that you thought you actually needed the certification to get a job. (And because the actual certifications aren't taken to be particularly hard, so completing one isn't strong evidence of actual skill.)

Comment by bcoburn on Stupid Questions Open Thread Round 3 · 2012-07-08T06:27:15.099Z · score: 1 (1 votes) · LW · GW

More concisely than the original/gwern, the algorithm used by the mugger is roughly:

1. Find your assessed probability of the mugger being able to deliver a given reward, being careful to specify the size of the reward in the conditions for the probability.

2. Offer an exchange such that U(payment to mugger) < U(reward) * P(reward).

This is an issue for AI design because if you use a prior based on Kolmogorov complexity, then it's relatively straightforward to find such a reward: even very large numbers have relatively low complexity, and therefore relatively high prior probabilities.
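A toy sketch of why a complexity-based prior is exploitable (the function names and the "100 bits for 10^100 utilons" figure are illustrative assumptions, not a real estimate):

```python
def complexity_prior(description_bits: int) -> float:
    """Kolmogorov-style prior: P ~ 2^-K, where K is description length in bits."""
    return 2.0 ** -description_bits

def mugger_wins(payment_utility: float, reward_utility: float,
                description_bits: int) -> bool:
    """True when U(payment) < U(reward) * P(reward) under that prior."""
    return payment_utility < reward_utility * complexity_prior(description_bits)

# A reward of 1e100 utilons is describable in roughly 100 bits, so its prior
# only falls off as 2^-100 while its magnitude grows as 10^100:
print(mugger_wins(5.0, 1e100, 100))  # → True
```

The point is that the reward's magnitude can grow much faster than its description length shrinks the prior, so some offer always clears the bar.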

Comment by bcoburn on New Singularity.org · 2012-07-06T04:29:47.951Z · score: 1 (1 votes) · LW · GW

So I don't know about anyone else, but as far as I can tell my own personal true rejection is: it's just too hard to remember to click over to predictionbook.com and actually type something in when I make a prediction. I've tried the obvious things to help with this, but the small inconvenience has so far been too much.

Comment by bcoburn on Minimum viable workout routine · 2012-06-22T03:14:38.550Z · score: 0 (0 votes) · LW · GW

Do you have a specific recommendation for what the minimum for longevity actually is?

Three days a week of three different high-intensity weight-bearing activities isn't the best overall workout program, but it is certainly viable and far more minimal. It would give acceptable (but smaller) muscle growth and far better cardio improvements.

Comes pretty close, but still leaves a little room for guesswork.

Comment by bcoburn on [SEQ RERUN] The Opposite Sex · 2012-06-18T07:31:00.199Z · score: 2 (2 votes) · LW · GW

Just as an exercise, and mostly motivated by the IRC channel: Can anyone find a way to turn this post into a testable prediction about the real world?

In particular, it would be nice to have a specific way to tell the difference between "understanding the opposite sex is impossible", "understanding the opposite sex is harder than understanding the same sex", and "understanding types of people you haven't been in enough contact with is hard/impossible".

Comment by bcoburn on Group rationality diary, 6/11/12 · 2012-06-18T00:25:59.797Z · score: 1 (1 votes) · LW · GW

You could also try dissolving the whole capsule in water, which might make measuring out specific fractions easier.

Comment by bcoburn on Be Happier · 2012-04-21T13:45:58.753Z · score: 0 (0 votes) · LW · GW

I think it's pretty likely this is just a joke, not really some clever tactic.

Comment by bcoburn on How does long-term use of caffeine affect productivity? · 2012-04-12T00:17:18.989Z · score: 4 (4 votes) · LW · GW

Just for the record, and in case it's important in experiment planning, caffeine isn't actually tasteless at all. It has a fairly bitter and easily recognizable taste when dissolved in just water.

It is, however, really easy to mask in, for example, orange juice, so the taste shouldn't make the experiments hard as such. Just another design constraint to be aware of.

I'd also recommend adding some sort of n days on, m days off cycling to your tests, mostly because that's what I do and I want to take advantage of other people's research.

Comment by bcoburn on 60m Asteroid currently assigned a .022% chance of hitting Earth. · 2012-03-14T03:25:19.641Z · score: 0 (0 votes) · LW · GW

Why does it need to be aimed along the planet? Use orbital mechanics: send your spacecraft on an orbit such that it hits the planet it launched from at the fastest point of a very long elliptical orbit. Or even just at the far side of the current planet's orbit, whatever. It can't be that hard to get an impact at whatever angle you'd prefer with most of the Orion vehicle's energy; launch direction barely seems to matter.

Comment by bcoburn on Facing the Intelligence Explosion discussion page · 2012-02-20T15:33:05.450Z · score: 4 (4 votes) · LW · GW

In a situation this specific, it seems to me to be worthwhile to reply exactly once, in order to inform other readers. Don't expect to change the troll's opinion, but making one comment in order to prevent them from accidentally convincing other people seems worthwhile.

Comment by bcoburn on Insufficiently Awesome · 2012-01-01T19:06:24.261Z · score: 0 (0 votes) · LW · GW

Does anyone know of a place to just buy one of those belts that tells you which way north is? I've looked and can't find such a thing.

I'm therefore probably going to just make one. Are there other things that it'd be useful to sense in a similar way? The first thing I think of is just the time, but maybe there's something better?

Comment by bcoburn on Tsuyoku Naritai! (I Want To Become Stronger) · 2011-12-27T01:47:08.495Z · score: 1 (1 votes) · LW · GW

"Improvement" is probably the literal translation, but it's used to mean the "Japanese business philosophy of continuous improvement", the idea of getting better by continuously making many small steps.

Comment by bcoburn on Online Course in Evidence-Based Medicine · 2011-12-04T16:47:16.928Z · score: 0 (0 votes) · LW · GW

Two things: first, what sort of time commitment per week would you expect for this?

Second, the link in edit2 points to http://lesswrong.com/evidenceworksremote.com/courses instead of http://evidenceworksremote.com/courses, which is presumably what it should be.

Comment by bcoburn on Against WBE (Whole Brain Emulation) · 2011-11-28T00:46:54.334Z · score: 2 (2 votes) · LW · GW

Following up on this, I wondered what it would take to emulate a relatively simple processor with as many normal transistors as your brain has neurons, and when we should get there assuming Moore's law holds, and also assuming that the number of transistors needed to emulate something is a simple linear function of the number of transistors in the thing being emulated. This should give a relatively conservative lower bound, but is obviously still just a napkin calculation. The result is about 48 years, and the math is:

$T_{needed} = T_{brain} \times \frac{T_{current}}{T_{6502}} = 80 \times 10^9 \times \frac{1.16 \times 10^9}{4000} = 2.32 \times 10^{16}$

$years = \log_2\left(\frac{T_{needed}}{T_{current}}\right) \times 2 = 48.5$

Where all numbers are taken from Wikipedia, and the random 2 in the second equation is the Moore's law years per doubling constant.
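The same napkin calculation written out as a check (treat the constants as the rough Wikipedia figures quoted above):

```python
import math

T_6502 = 4000       # transistors in a 6502-class processor (rough figure)
T_current = 1.16e9  # transistors in a then-current CPU
T_brain = 80e9      # neurons in a human brain

# Assume emulation cost is linear in the emulated transistor count
T_needed = T_brain * (T_current / T_6502)

# Moore's law: one doubling every ~2 years
years = math.log2(T_needed / T_current) * 2
print(round(years, 1))  # → 48.5
```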

I'm not sure what to make of this number, but it is an interesting anchor for other estimates. That said, this whole style of problem is probably much easier in an FPGA or similar, which gives completely different estimates.

Comment by bcoburn on Absolute denial for atheists · 2011-11-17T03:36:04.675Z · score: 0 (0 votes) · LW · GW

I don't know for sure either way, and can't think of an experimental way to check off hand. I don't think that heating is likely to do anything to the other components of most drinks, and you might be able to make a better guess with domain knowledge I don't have.

I think ethanol will generally evaporate more quickly than water, so you might also be able to get a similar test by simply closing one portion into a container with only a little air and leaving another open for long enough, overnight maybe. The open one will still lose some water, which I guess is a more real problem with heating as well.

Shrug, the details weren't really the point; I just wanted to emphasize the idea of thinking of ways to physically test whatever you're interested in instead of just reasoning about it.

Comment by bcoburn on Poll results: LW probably doesn't cause akrasia · 2011-11-17T03:10:35.041Z · score: 1 (1 votes) · LW · GW

It's not quite trivial to actually measure, but total tabs opened in the last, say, hour is probably a better measurement than how many you have open right now.

After writing that I started thinking "maybe a large number of tabs open with a slow turnover/new tabs opening rate doesn't even correlate at all with procrastination", but I suspect that's just me coming up with excuses for things and isn't actually true. Could try measuring both if the survey actually works, shrug.

Comment by bcoburn on Rationality Quotes October 2011 · 2011-10-13T00:02:02.137Z · score: 2 (4 votes) · LW · GW

Also really badly needs to be applied to itself. So many words!

Comment by bcoburn on Fix My Head · 2011-09-18T01:38:29.044Z · score: 1 (1 votes) · LW · GW

It does dissolve reasonably well into water, but tastes pretty terrible. You can dilute it with fruit juice if that's a problem, or just ignore it.

Comment by bcoburn on Open Thread: September 2011 · 2011-09-17T21:44:05.068Z · score: 1 (1 votes) · LW · GW

I don't know how well it works in games with only 1 scum player, but with at least two just the fact that there are two players who know they each have a partner changes their behavior enough that the game isn't random. There's also some change in what people say just because each side has a different win condition, although again this is less true with just one scum player.

As just a simple example, when you're playing as the scum it can be really hard (at least for me) to make a good argument that someone I know is a normal villager isn't one, which can be enough for another player to deduce my role.

Comment by bcoburn on Take heed, for it is a trap · 2011-08-14T16:17:20.379Z · score: 2 (6 votes) · LW · GW

You could. Or you could just refuse to get into arguments about politics/philosophy. Or you could find a social group such that these things aren't problems.

I certainly don't have amazing solutions to this particular problem, but I'm fairly sure they exist.

Comment by bcoburn on Take heed, for it is a trap · 2011-08-14T12:06:41.671Z · score: 33 (35 votes) · LW · GW

To everyone who just read this and is about to argue with the specific details of the bullet points or the mock argument:

Don't bother, they're (hopefully) not really the point of this.

Focus on the conclusion and the point that LW beliefs have a large inferential distance. The summary of this post which is interesting to talk about is "some (maybe most) LW beliefs will appear to be crackpot beliefs to the general public" and "you can't actually explain them in a short conversation in person because the inferential distance is too large". Therefore, we should be very careful to not get into situations where we might need to explain things in short conversations in person.

Comment by bcoburn on Absolute denial for atheists · 2011-06-03T19:44:45.895Z · score: 4 (4 votes) · LW · GW

This is, indeed, exactly what happened.

Comment by bcoburn on Rationality Quotes: June 2011 · 2011-06-03T17:34:10.603Z · score: 5 (5 votes) · LW · GW

"Sweat" here is a stand-in for generic effort; whether it's actual physical sweat or not depends on what exactly you're training for.

Comment by bcoburn on Absolute denial for atheists · 2011-06-03T13:48:37.396Z · score: 3 (3 votes) · LW · GW

A relatively simple way to test whether you actually like the taste of alcohol specifically: take a reasonable quantity of your favorite alcoholic beverage, beer/wine/mixed drink/whatever, and split it into two containers. Close one, and heat the other slightly to evaporate off most of the actual ethanol. Then just do a blind taste test. This does still require not lying to yourself about which you prefer, but it removes most of the other things that make knowing whether you like the taste hard.

I personally don't care enough to try this, but just the habit of thinking "how could I test this?" is good.

Comment by bcoburn on Timeless Identity · 2011-05-25T23:17:34.298Z · score: 3 (3 votes) · LW · GW

Mandatory link on cryonics scaling that basically agrees with Eliezer:

http://lesswrong.com/lw/2f5/cryonics_wants_to_be_big/

Comment by bcoburn on Conceptual Analysis and Moral Theory · 2011-05-23T21:59:30.748Z · score: 0 (2 votes) · LW · GW

Why do we even care about what specifically Eliezer Yudkowsky was trying to do in that post? Isn't "is it more helpful to try to find the simplest boundary around a list or the simplest coherent explanation of intuitions?" a much better question?

Focus on what matters, work on actually solving problems instead of trying to just win arguments.

Comment by bcoburn on An inflection point for probability estimates of the AI takeoff? · 2011-05-03T13:32:21.522Z · score: 1 (1 votes) · LW · GW

This isn't even related to the law of large numbers, which says that if you flip many coins you expect to get close to half heads and half tails, as opposed to flipping one coin, where you always get either 100% heads or 100% tails.

I personally expected that P(AI) would drop off roughly linearly as n increased, so this certainly seems counter-intuitive to me.

Comment by bcoburn on Hunger can make you stupid · 2011-04-14T15:38:34.003Z · score: 0 (0 votes) · LW · GW

It depends on what you're trying to do: working in bad conditions/under pressure is good for training but bad for actually getting things done. Ironically, this seems to mean that you should work harder to have good conditions when you're under more time pressure or in a worse situation overall.

Comment by bcoburn on Rationality Quotes: April 2011 · 2011-04-06T05:03:58.987Z · score: 2 (2 votes) · LW · GW

This one really needs to have been applied to itself, "short is good" is way better.

(also this was one of EY's quotes in the original rationality quotes set, http://lesswrong.com/lw/mx/rationality_quotes_3/ )

Comment by bcoburn on Experimental evidence of the value of redundant oral tradition · 2011-03-02T01:01:08.825Z · score: 2 (2 votes) · LW · GW

More people confirming a story is certainly epsilon more evidence that the story is correct (Because more people confirming a story being evidence that it is false is absurd).

A more interesting question is, what is the magnitude of epsilon in a case like the one described here? This is in principle testable, but I certainly don't know exactly how to go about testing it.

Comment by bcoburn on Procedural Knowledge Gaps · 2011-02-24T20:40:18.804Z · score: 1 (1 votes) · LW · GW

It's slightly better to connect the other end of the cable attached to the black terminal of the dead battery last, and to connect it to the frame of the car with the live battery instead of to that car's black terminal.

The goal here is to make the last connection, the one that completes the circuit and can generate sparks, away from either battery, because lead-acid batteries can sometimes release hydrogen gas, which can cause fires or explode. The chances of this actually happening are pretty low, but there's no reason not to be careful. The end of the black cable connected to the running car is the only one that can be attached away from batteries, so that's the one used.

Comment by bcoburn on Secure Your Beliefs · 2011-02-20T15:54:48.389Z · score: 0 (0 votes) · LW · GW

That kind of comparison just completely ignores opportunity costs, so it will result in mistakes any time they are significant.

Comment by bcoburn on Rationality Quotes: February 2011 · 2011-02-03T04:20:55.130Z · score: 1 (1 votes) · LW · GW

You should try asking people to send smaller amounts of money at once, it's slightly more likely to work.

Comment by bcoburn on Dark Arts 101: Using presuppositions · 2010-12-30T20:52:31.954Z · score: 0 (2 votes) · LW · GW

Voted down because this is a really bad way to make a point.

On the other hand, the basic point is a good one: "they'll learn from it" is not in general a good reason for doing things that hurt people in whatever sense.

Comment by bcoburn on Rationality Quotes: December 2010 · 2010-12-15T21:24:39.878Z · score: 2 (2 votes) · LW · GW

The reasonable way to interpret this seems to be "don't trust something you don't understand/cannot predict." Not sure how seeing where it keeps its brain helps with that, though.

Comment by bcoburn on Existential Risk and Public Relations · 2010-08-21T16:11:01.164Z · score: 2 (2 votes) · LW · GW

It's interesting that you both seem to think that your problem is easier, I wonder if there's a general pattern there.