ISO: Automated P-Hacking Detection

2019-06-16T21:15:52.837Z · score: 6 (1 votes)
Comment by johnswentworth on Real-World Coordination Problems are Usually Information Problems · 2019-06-14T00:17:43.919Z · score: 4 (2 votes) · LW · GW

“From Personal to Prison Gangs” is my main foundation here. I say real-world coordination problems "usually" look like this, because these are the kinds of problems we'd expect to increase over time, based on the ideas in that previous post.

That said, Personal to Prison Gangs attributes both the information problems and the trust problems to the same root cause: I interact with a larger number of people, with fewer interactions per person. On the one hand, fewer iterations means less penalty for "defectors", and common knowledge of this fact means less trust. On the other hand, more people + fewer interactions per person means both less time and less mental resources to customize my interaction with each individual person. Thus, in a large company, people are forced to rely more heavily on job titles - and in a larger society, people are forced to rely more heavily on identities more generally.

All the examples listed in the OP are the sorts of things you'd expect in a world with more people, more specialization, and fewer interactions between any given pair. (In some cases, this means zero interactions between a pair which would really benefit from interacting, as in several of the examples.)

I don't disagree that alignment & trust have a role here. But I do think that the large majority of real-world coordination problems could be solved by sticking the right two or three people in a room and just letting them talk for a full day. And in most cases, I think the relevant people would actually like to talk! The problem is finding the right two or three people and building that communication channel.

Real-World Coordination Problems are Usually Information Problems

2019-06-13T18:21:55.586Z · score: 28 (11 votes)
Comment by johnswentworth on Some Ways Coordination is Hard · 2019-06-13T16:25:27.608Z · score: 2 (1 votes) · LW · GW

Is "shilling point" some new thing I've never heard of, or is this just another spelling of "Schelling point"? I assume the latter, but it sounds like a name someone would come up with for a concept similar-to-but-slightly-different-from a Schelling point.

Comment by johnswentworth on Major Update on Cost Disease · 2019-06-05T22:52:59.171Z · score: 40 (12 votes) · LW · GW

I looked through Tabarrok's book, and my general impression is:

  • He does a decent job going through the education data. (At least, he finds the same stuff I did when I went through the data a few years ago.)
  • He totally whiffs on healthcare data. In particular, I saw no mention of demographic shifts, which are far and away the biggest driver of growth in US healthcare spending.
  • He then takes a hard left turn and goes off talking about the Baumol effect, without grounding it very well in the data. He gives a bunch of qualitative arguments that things look consistent with Baumol, but never makes the quantitative argument that Baumol explains all of the growth, and never properly rules out alternative hypotheses.

A good example is on page 50: "it is evident that despite higher costs Americans have chosen to buy more healthcare output over time. Once again, this is consistent with the Baumol effect but inconsistent with a purely cost-driven explanation for rising prices." It's also consistent with the demand curve shifting up over time, and Baumol having nothing to do with it. Which is exactly what we'd expect in an aging population.

He does do a decent job ruling out "purely cost-driven explanations" as an alternative hypothesis, but that does not imply Baumol.

Comment by johnswentworth on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-02T17:58:08.792Z · score: 5 (2 votes) · LW · GW

Great question. The setup here assumes zero interest rates - in particular, I'm implicitly allowing borrowing without interest via short sales (real-world short sales charge interest). Once we allow for nonzero interest, there's a rate charged to borrow, and the price of each asset is its discounted expected value rather than just expected value. That's one of several modifications needed in order to use this theorem in real-world finance. (The same applies to the usual presentation of the Dutch Book arguments, and the same modification is possible.)
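In symbols (a minimal sketch of that modification, with r denoting the per-period interest rate - my notation, not the post's):

$$p = \frac{E[X]}{1+r}$$

Setting r = 0 recovers the undiscounted expected value used in the post.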

Comment by johnswentworth on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-02T07:08:13.571Z · score: 11 (4 votes) · LW · GW

We're not actually talking about the VNM formalism here. That's why the "in the absence of uncertainty" part is important.

We have a finite set of world-states and preferences over those world-states. We do not care about preferences over random mixtures of world-states, we don't even have a notion of random mixtures, just the deterministic states themselves. We want a utility function which encodes our preferences over those deterministic world-states.
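As a toy sketch of what that means (my own illustration, with hypothetical world-states - not from the original discussion), a utility function here is just any assignment of numbers that respects the ordering; rank works fine:

```python
# Toy illustration: with finitely many deterministic world-states and a
# complete, transitive preference ordering, utility can just be rank.
world_states_worst_to_best = ["rainy", "cloudy", "sunny"]  # hypothetical states

utility = {state: rank for rank, state in enumerate(world_states_worst_to_best)}

def prefers(a, b):
    """Recover the original preference relation from the utility function."""
    return utility[a] > utility[b]

assert prefers("sunny", "rainy")  # matches the ordering we started with
```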

In the absence of uncertainty, we don't actually need the continuity assumption or the independence assumption for anything. They don't even make sense; we need a notion of random mixtures just to state those assumptions. VNM utility needs those because it's trying to get expected utility maximization right out the door. But we're not starting from VNM utility, we're starting from deterministic utility.

Whether we need completeness or not is more debatable. It depends on how we're interpreting missing preferences. If we interpret missing preferences as "I don't know", then it seems natural to allow the utility function to give any possible preference for that pair. In that case, lack of completeness may mean our utility function isn't unique, but it won't prevent a utility function from existing.

It's exactly the same in Eliezer's post. His circular preferences argument comes before random outcomes are even introduced. There's no notion of randomness at that point, no notion of lotteries, so he's not talking about VNM utility. The circular preferences argument is not the VNM utility theorem, it is a separate thing which makes a different claim under weaker assumptions. That does not make it incorrect.

Comment by johnswentworth on When Observation Beats Experiment · 2019-06-02T03:16:43.589Z · score: 2 (1 votes) · LW · GW

Right, we need to use experiments to figure out that Y is needed in the first place. That's the "figuring out the structure" part - figuring out what the relevant gears are and how they fit together.

Now, experiments also inherently involve some kind of observation. You make a change, then observe the effect of that change. In some cases, the observation built into the experiment may be enough to figure out the system's state - that's what happens in your small doses idea. But this is a very indirect (and likely error-prone) way of figuring out that Y is high in our rat strain.

Comment by johnswentworth on The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments · 2019-06-01T20:40:07.233Z · score: 5 (3 votes) · LW · GW

I've tried to minimize the technical prerequisites for this post, but it's still very abstract and mathy. If you understand it and can write well, please consider writing up a more human-readable version which builds around a concrete example or two rather than keeping everything abstract. Alternatively, if you are Eliezer Yudkowsky, consider integrating the FTAP into that great intro I linked above.

I'll probably get around to writing a more concrete version of this post eventually, but I wanted to get the idea out there, since hardly anyone seems to know about it.

The Fundamental Theorem of Asset Pricing: Missing Link of the Dutch Book Arguments

2019-06-01T20:34:06.924Z · score: 37 (12 votes)
Comment by johnswentworth on When Observation Beats Experiment · 2019-06-01T18:23:35.327Z · score: 4 (2 votes) · LW · GW

Yeah, that gets into the technical details brushed under the rug. There are two relevant types of equations governing the equilibrium value of PP:

  • The thermodynamic equilibrium equation, which fixes a product/ratio of the concentrations of the 3 species (linear in log-concentrations)
  • The stoichiometric constraints, which fix a couple linear combinations of the concentrations of the 3 species (linear in concentrations)

I'm effectively assuming that the thermodynamics favor X + Y over PP, so that the stoichiometric constraints can be approximated as "X and Y concentrations are each fixed" - there's never enough PP produced to significantly decrease them. That way, we can ignore the stoichiometric limit (so long as X and Y are abundant), and just pay attention to the equilibrium equation. Then log-concentration of PP is a positive linear function of log-concentrations of X and Y, so everything is easy to think about. Increasing/decreasing either X or Y by enough can always shift the equilibrium PP above/below the threshold.
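Concretely (my notation, with K the equilibrium constant for X + Y ⇌ PP):

$$K = \frac{[PP]}{[X][Y]} \quad\Rightarrow\quad \log[PP] = \log K + \log[X] + \log[Y]$$

so a big enough shift in either log-concentration always moves log[PP] across the threshold, in either direction.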

The problem with separate reactions (X -> PP and Y -> PP) is that, if Y is high, then increasing or decreasing X does nothing - PP will always be above threshold regardless of the X level. It's an or-gate, rather than a linear function. Similarly, if PP were mainly determined by stoichiometric limits in my original reactions, we'd have an and-gate.


Comment by johnswentworth on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T04:05:49.680Z · score: 4 (2 votes) · LW · GW

I definitely think it's fine for the short term. I don't want to push premature perfectionism here - this will not make the site worse than it is, and may make it better.

I wouldn't want us to put it up and then forget about it, though, and have several years of newcomers dropping off because the entry point didn't grab them. (I'm less concerned about perfecting a page whose purpose is not entry-point.) That said, if I'm ever really unhappy about it, I can always just draft something up myself and then propose it to you guys.

Comment by johnswentworth on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T03:25:38.295Z · score: 11 (3 votes) · LW · GW

As written, it feels like this is trying to mix some aspects of a mission statement and some aspects of an entry point, and doing neither one very well. A lot of it comes out sounding like bland generispeak - not all of it, but a lot. It would be easy to make it more engaging if that's what we're going for, or more informative if that's the objective, etc - it needs some goal that says what readers are meant to get out of it, and then more focus on achieving that goal.

(Sorry if that sounds overly harsh. I'm viewing this thread as a round of editing, so critiquing it as a piece of writing seems right.)

Comment by johnswentworth on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-01T01:50:22.567Z · score: 9 (3 votes) · LW · GW

A few other things rationality is not:

  • An aesthetic preference for square grids
  • Assertion of the superiority of Western culture
  • The belief that credentialed experts always know better
  • Abandoning your ethics/morals
  • Always defecting in the prisoners' dilemma

I would guess that making it clear what we're not talking about is more important to hooking new people than precisely defining rationality. Also, I would avoid using the word "truth" explicitly in the "what is rationality" section.

More generally, if the purpose of this page is to be an entry point, I would front-load it with more hooks, examples, and links, and push the less hook-y things toward the end. On a meta-level, if it's going to serve as an entry point, then it's also a key page to instrument with tracking, A/B-test copy, and all that jazz. On the other hand, if the main purpose of the page is to serve as a mission statement or something along those lines, then the parts explicitly aimed at newcomers could be dialed back, especially things like "What is rationality" or "Why should I care" that are addressed within the sequences.

Comment by johnswentworth on When Observation Beats Experiment · 2019-06-01T00:37:45.859Z · score: 8 (4 votes) · LW · GW

I was reading The Biology of Aging. Several times, the author says something to the effect of "experiment beats observation, therefore the gold standard in aging research is to show that adding X accelerates aging and removing X slows it." For many values of X, experimental manipulation definitely makes organisms live longer/shorter, but it's not clear that X actually changes significantly during actual aging.

So in this instance, we don't fully understand how the system works either, but a measurement of X could still tell us whether it's relevant (based on whether it changes).

When Observation Beats Experiment

2019-05-31T22:58:57.986Z · score: 13 (5 votes)
Comment by johnswentworth on How would you take over Rome? · 2019-05-25T00:27:44.105Z · score: 2 (1 votes) · LW · GW

General notes, before I actually propose a solution:

  • A lot of proposals so far involve things like "use my education and predictive talents to achieve a high position in society". Given how quickly smart people with amazing predictive talents walk into the White House today, combined with the political talents of LWers, I doubt that has any hope of working.
  • A lot of proposals involve introducing technologies like mills, cannon, etc., which would require massive capital outlays by the standards of the time. You'd need to already have an empire's worth of resources on hand, and early-modern tech probably still wouldn't be enough to make the economy more efficient right away.
  • In terms of real economic value, probably the largest chunk of potential is corn and potatoes - both are New World plants, corn has a caloric yield an order of magnitude higher than wheat's (in terms of both land and labor requirements), and potatoes don't get burned when an army comes by. If I could manage a round trip to the New World, bringing back those crops would be huge - although that still leaves the question of how to capture the value. Conversely, it would be hard for any ancient economy to accumulate capital - and thus begin the trek to automation - without higher-yield crops. Unfortunately, a New World round-trip would itself take a massive capital outlay.

Thinking about low-capital-investment tech which could be implemented without advanced manufacturing...

  • RSA encryption & signing (pure arithmetic - see the toy sketch after this list)
  • logarithms & the slide rule
  • sextant & compass & associated navigation techniques
  • 18th-century design of chimneys and stoves (before which they were impractical for heating)
  • demonstration-scale telegraph (two magnets and some copper wire would suffice for material)
  • maybe very rudimentary radio (magnets, copper wire and copper foil suffice in principle)
  • aniline dyes - relatively easy once you know to look for them, and would be worth a fortune in ancient Rome
  • incremental metallurgical improvements

... I'm sure there's more to think of here, but that should be enough for a viable solution.
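To illustrate how low-capital the first item is, here's textbook RSA with toy primes (a sketch for illustration only - real use needs much larger primes and padding - but the point is that the whole scheme is pure arithmetic, implementable with pen and papyrus):

```python
# Textbook RSA with toy primes (illustration only - insecure at this size).
p, q = 61, 53                 # two secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (2753)

message = 42
ciphertext = pow(message, e, n)          # encrypt: m^e mod n
assert pow(ciphertext, d, n) == message  # decrypt: c^d mod n
```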


Proposal

Tech is the big edge, obviously. Moderns don't have a major edge in politics or war (absent the tech needed for modern military doctrine). Economically, most tech involves large capital outlays, and value capture is difficult when you're not already emperor. Given all that, the general shape of the solution I want is:

1. accumulate some initial resources using a low-barrier technological advantage

2. bring together a relatively small group of people (company-sized), in a remote location, for rapid vertical development of higher tech - not necessarily all the way to modernity, but enough to decisively swing the balance in a war

3. go shopping for allies who don't like the Romans

Most of the effort is in step 2 - our relative advantage is tech, so our strategy puts most of the work in tech.

Let's flesh all that out.

  • Aniline dyes are the hot item for step 1 - they don't require advanced manufacturing or materials, the capital investment required is relatively low, and given that ancient Rome was already hemorrhaging gold to buy Chinese silk, they'd definitely sell like hotcakes. Care must be taken to make sure we get paid for the dyes rather than arrested or enslaved somehow, but that's probably tractable.
  • A round trip to the Americas is the next big step - we can't support people off the trade grid without high-yield crops, and fertilizer alone won't get us there. Outfitting a long-range ship should be viable for a dye-lord, and modern knowledge of sailing and the trade winds should help out. Ancient seafarers did not like leaving sight of land, but that's the kind of problem which can be solved by throwing money at it - and we'd want to train them in modern sailing anyway. Packed to trade in both directions, the voyage should be quite profitable, and we come back with the all-important corn and potatoes.
  • At this point, it's viable to hire a couple hundred people with varying specialties and move to a remote patch of land with running water and wood, somewhere on the edge of the empire where nobody will pay much attention. Coastal would be useful, but not if it increases visibility too much. Get the corn and potatoes growing, and it should be viable to support the whole crowd with a fairly small agricultural base.
  • Now we have the base to make some capital investments while safely capturing their value, without fear of seizure. Let's say we have half of our two hundred people farming (comparable ratio to early modern England); most of the remaining hundred are artisans - smiths, carpenters, etc. They'll build most of our capital assets. We'll import most raw materials, like wool and metal ore, plus some processed material like sailcloth (until we can produce it ourselves), necessaries like salt, and whatever luxuries our team can enjoy without attracting undue attention.
  • Early capital assets are mainly ships and water mills - the former for trade, the latter for automation. Note that overland trade is not a high value prop - ships in any era have capacity multiple orders of magnitude higher than wagons/trucks, and premodern roads were pretty bad anyhow. The ships will sail the trade routes, and should provide ample profit for anything we need to buy. (We won't be able to keep our navigation secrets under wraps forever, but they should last long enough - ancient sailors' fear of the open sea works in our favor here.)
  • With plenty of wood and artisans, the mills can provide power for whatever light industry we want, but that industry itself will probably not be profitable - labor is very cheap relative to capital in the premodern world. The point of the mills is to power anything we want to keep away from prying eyes, in our remote corner of the world. Metallurgy will be the first and biggest priority - we can import ore, but the secrets of steel need to stay local.
  • The precision manufacturing feedback loop: build more precise tools, and those let us build even more precise tools. High-quality materials are a limiting factor for that loop, thus the importance of metallurgy.
  • Building a dynamo shouldn't be too hard, once we've imported magnets. We'll want to produce crap-tons of copper wire for anything electric - that shouldn't present any difficulties. Electricity would mainly be useful for chemical processes, e.g. splitting water - communication wouldn't matter much in our tiny remote town, and we don't want any secrets getting out. We'd need a dam to get useful power production, but dam-building isn't too hard.
  • Hand-cranked radios are still a maybe, but they'd be pretty cool and a huge military advantage.
  • Haber-Bosch process is the next big jump: with a source of ammonia, both chemical fertilizer and explosives become viable. (We could go the old-fashioned route for explosives, but Rule of Cool.) Against the famed legion phalanxes, we're looking at tight-packed targets - ideal for things that go boom.
  • At this point, we'd probably have the pieces to move on Rome. A decent-size trade fleet provides both support infrastructure and mobility for any troops we ally with, and a trade business provides connections throughout the empire and surrounding areas. Explosives would, honestly, be as much about marketing ourselves to potential allies as about actually winning fights, but they'd be awesome marketing. Encryption and (maybe) primitive radios would be a less flashy but more practical military advantage, if it actually came to a fight. All that's left is to find someone with a reasonably-sized military who's more interested in beating the Romans than in taking their stuff, and strike a deal.

Comment by johnswentworth on How would you take over Rome? · 2019-05-24T22:08:26.925Z · score: 2 (1 votes) · LW · GW

Major barrier right at the start: buying a little chunk of land is one thing, but a bunch of the stuff in step 2 will require very large capital outlays by the standards of the time - especially the mill.

There's a reason automation didn't catch on before the industrial revolution: capital was scarce and peasants were abundant. It wouldn't be easy to actually produce crops more cost-effectively than peasant labor, when the peasant labor is absurdly cheap. Things like fertilizer could help, but even that will run into problems - people don't like poop on their food, and chemical fertilizer/insecticide requires relatively complicated manufacturing facilities.

Comment by johnswentworth on Constraints & Slackness Reasoning Exercises · 2019-05-22T23:54:31.813Z · score: 6 (3 votes) · LW · GW

I actually started to talk about finding loosely-coupled constraints in an earlier draft of the post, but that quickly turned into the entire skill of model-building. That was when I decided to just go with the games, at least for this post.

Comment by johnswentworth on Go Do Something · 2019-05-22T17:16:59.474Z · score: 4 (2 votes) · LW · GW

That was exactly what the little Zvi voice in the back of my head said. I'm not yet convinced. The "network effects -> natural monopoly" argument is a strong one, but it still seems like coordination problems are the main economic bottleneck even when there isn't value capture involved, especially in smaller-scale situations.

Some examples:

  • Academics who specialize in bridging between fields or sub-disciplines, e.g. biophysicists, mathematical chemists, synthetic biologists (usually from an engineering background), mathematicians who translate one sub-field's jargon into another, etc.
  • Cross-department coordination within companies, e.g. car-ad spreadsheet example. People who work across specialized departments seem to have unusually high value relative to effort exerted.
  • There's a book on tackling large coordination problems in government - they call them "wicked" problems. The opening chapter is the only interesting one. It's written by Mike McConnell, the guy tasked with fixing up US intelligence after 9/11. Various agencies had all the pieces to stop the attacks, but multiple cross-agency coordination failures prevented them from acting in time.
  • McConnell also tells the story of the Goldwater-Nichols Act. After the invasion of Grenada, the military's complete coordination failure was apparent. Each half of the island was controlled by a different branch, and in order to talk to each other, officers had to walk to the nearest payphone and get routed through one of the opposite branch's offices on the US mainland, because their radios were incompatible. The Goldwater-Nichols Act reorganized things to fix this. It passed despite unanimous opposition from the service chiefs, it worked, and ten years later every single service chief testified before Congress that it was the best thing to ever happen to the US military.

In all of these cases, there's no clear natural monopoly and no obviously outsized value capture relative to value created. Rather, the "potential energy" is created by language barriers, intra-organization political coalitions, information silos, and communities with limited cross-talk.

That's not to say value capture isn't relevant to e.g. Google or Facebook. Obviously it is. But Google (and more debatably Facebook) still creates huge amounts of real value, regardless of how much it captures, and it does so with little "effort" - Google's employee base is tiny relative to value created, and most of those employees don't even work on search.

There is an argument to be made that I'm really talking about two qualitatively different cases here: coordination problems which involve breaking down cross-silo barriers, and coordination problems which involve building new markets. Maybe both of these are interesting on their own, but generalizing to all coordination problems goes too far? On the other hand, there are outside-view reasons to expect that coordination problems in general should get worse as the world modernizes - see From Personal to Prison Gangs.

Comment by johnswentworth on Go Do Something · 2019-05-22T01:12:43.090Z · score: 14 (4 votes) · LW · GW

Totally agree. In particular, I do think that solving small-scale coordination problems is one of the main ways that individuals can have high positive impact on their company/community, relative to effort expended. (I like to use an example from an online car dealership where I used to work: the salespeople had no idea what cars were listed or at what price, which caused a lot of friction when someone called in about a car. Our product manager eventually solved this with five minutes of effort: he asked our marketing guy to forward his daily car-ad spreadsheet to the sales team.)

That said, the generalized efficient markets principle doesn't go completely out the window the moment we zoom in from the whole-world-level. The bigger and more obvious the gain from solving some coordination problem, the more people have probably tried to solve it already, and the harder it's likely to be. All the usual considerations of generalized efficiency still apply.

This still leaves the question of why coordination problems have unusually high returns (at the world-scale). Are there few people who are actually good at it? Is it a matter of value capture rather than value creation? Are people just bad at realizing coordination problems need to be solved? Different theories about the large-scale potentially have different predictions about the difficulty & reward of small-scale coordination problems.

Comment by johnswentworth on Go Do Something · 2019-05-21T23:58:20.271Z · score: 10 (2 votes) · LW · GW

Agreed, though with the caveat that losing some money in the stock market is an important early step in gaining experience - presumably it's the same with coordination problems. But that sort of practice should be undertaken with the understanding that it's likely to fail on an object-level, and you want that learning experience to be cheap - e.g. don't make it harder for the next person to solve the coordination problem.

In particular, I wouldn't want to discourage people from building coordination skills by having a minimum level of status required to even try. Rather, we ideally want ways to experiment that aren't too damaging if they fail. (And, of course, we want to have realistic expectations about chance of success - ideally people go into a learning experience fully aware that it's a learning experience, and don't bet their house on day-trading.)

Comment by johnswentworth on Go Do Something · 2019-05-21T23:21:04.317Z · score: 18 (7 votes) · LW · GW

At some point I'll get around to writing a proper post on this topic, but a few brief bullet points:

  • Coordination problems seem to be the primary bottleneck to economic progress across the large majority of companies and industries today.
  • One class of evidence in support: go down Forbes' list of billionaires, and practically all of them (other than the heirs) made their fortunes through day-to-day work solving coordination problems (e.g. founding and managing a business). At a higher level of abstraction, most of the successful internet companies make their money by solving coordination problems - Uber, Lyft, Facebook, Amazon and Google are obvious examples.
  • Flip side of that coin: solving coordination problems yields massive rewards, so the generalized efficient markets principle suggests that it must be really hard to solve coordination problems consistently and at scale.

I think the main take-away is not "try to do something other than solve coordination problems", but rather "coordination problems are really difficult in general, like beating-the-stock-market level of difficult". They're a big-game kind of problem, with potentially huge rewards, but you need to go into it with the same mindset as beating the market: you need to either find a highly specialized niche, or be the very best in the world at some relevant skill, and either way you also need to be fully competent at all the other relevant skills. If it looks like there's some easy low-hanging fruit to pick, you're probably missing something, unless there's a really good reason why nobody else in the world could have noticed that particular fruit.

Constraints & Slackness Reasoning Exercises

2019-05-21T22:53:11.048Z · score: 34 (11 votes)
Comment by johnswentworth on How to improve at critical thinking on science/medical literature? · 2019-05-14T20:29:14.291Z · score: 9 (4 votes) · LW · GW

A few months ago I read this paper for a class (paywalled). In it, the authors perform parallel sets of knockdown experiments using both short hairpin RNA (shRNA) and CRISPR in order to repress a gene. Their results with the shRNAs are quite impressive, but the CRISPR part is less so. Why the disparity?

The key to this sort of thing is to picture it from the authors' perspective. Read between the lines, and picture what the authors actually do on a day-to-day basis. How did they decide exactly which experiments to do, which analyses to run, what to write up in the paper?

In the case of the paper I linked above, the authors had a great deal of experience and expertise with shRNAs, but seemed to be new to CRISPR. Most likely, they tried out the new technique either because someone in the lab wanted to try it or because a reviewer suggested it. But they didn't have much expertise with CRISPR, so they had some probably-spurious results in that part of the paper. All we see in the paper itself is a few results which don't quite line up with everything else, but it's not hard to guess what's going on if we think about what the authors actually did.

This principle generalizes. The main things to ask when evaluating a paper's reliability are things like:

  • Does it seem like the authors ran the numbers on every little subset of their data until they found p < .05?
  • Does it seem like the authors massaged the data until it gave the answer they wanted?
  • Does it seem like the authors actively looked for alternative hypotheses/interpretations of their data, and tried to rule them out?

... and so forth. In short, try to picture the authors' actual decision-making process, and then ask whether that decision-making process will yield reliable results.

There's all sorts of math you can run and red flags to watch out for - multiple tests, bad incentives, data not actually matching claims, etc - but at the end of the day, those are mostly just concrete techniques for operationalizing the question "what decision-making process did these authors actually use?" Start with that question, and the rest will follow naturally.
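To make the "multiple tests" red flag concrete, here's a quick simulation (my own sketch, with made-up group sizes): slice pure noise into twenty subgroup comparisons, and a "significant" result turns up most of the time.

```python
# Sketch: with 20 subgroup tests on pure noise, roughly 1 - 0.95**20 ~ 64%
# of datasets yield p < .05 somewhere.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_subgroups = 1000, 20

hits = 0
for _ in range(n_trials):
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_subgroups)  # no real effect in any subgroup
    ]
    hits += min(p_values) < 0.05

print(f"{hits / n_trials:.0%} of null datasets show a 'significant' subgroup")
```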

Comment by johnswentworth on Hierarchy and wings · 2019-05-07T00:40:13.160Z · score: 5 (2 votes) · LW · GW

Thanks, the decision makes sense given your reasoning.

I also agree more formalization shouldn't be required, if it's early exploration of an idea. I had read the post as saying that there was already a formal model out there, which cast the whole thing in a different light.

Comment by johnswentworth on Hierarchy and wings · 2019-05-06T20:44:00.798Z · score: 9 (4 votes) · LW · GW

Could you give a reference for the Hierarchy Game? A quick google search did not turn up anything that sounded like game theory.

On a separate note, this post is IMO really toeing the line in terms of what's too political for LW. I'd consider it safely in-bounds if there were more explanation of the content of the hierarchy game, the conditions required for stability of expropriation in that game, and then discussion of evidence that those conditions actually do exist in current political systems. As it stands, the post has too many borderline-controversial claims and too little explicit evidence.

Comment by johnswentworth on Aging research and population ethics · 2019-04-28T21:34:03.100Z · score: 2 (1 votes) · LW · GW

Sorry, yes, LEV as you've defined it does not immediately lead to unbounded life expectancy. I'm not sure this is the way most people define LEV? I always thought the magic number was expected lifespan based on current mortality rates increasing by 1 yr per yr - that way everything remains well defined even when life-expectancy-accounting-for-medical-advances diverges, and we can meaningfully talk about the critical transition point.

Anyway, that's kind of beside the point I'm trying to make: increasing rate of medical progress is not necessarily the most useful way to think about the problem, at least for now. Maybe you were already thinking of it the way I had in mind, and I just got confused by the LEV label.

Comment by johnswentworth on Aging research and population ethics · 2019-04-28T18:53:57.737Z · score: 2 (3 votes) · LW · GW

One general piece of feedback on both this and the previous post: LEV isn't necessarily the ideal model for analyzing antiaging research. It's an attractive idea, especially for those of us with quantitative backgrounds and a strong understanding of economics, but it's definitely not how most biologists view the problem - and in this case, it's not just biologists missing the boat in some obvious way.

Here's (my interpretation of) how biologists see the problem.

There are organisms which do not age - notably the hydra. This means that the hydra's mortality rate is independent of the hydra's age: a hundred-year-old hydra is no more or less likely to die in the next year than a one-day-old hydra. An older hydra looks and behaves exactly like a younger hydra (once development is complete). Contrast to humans: humans clearly look and behave differently as we get older. Our mortality rate increases with age.

The main goal of mainstream antiaging research is not LEV, but antiaging in the sense of the hydra: make older humans biologically indistinguishable from younger humans. This does not immediately lead to unbounded life expectancy, as in LEV; it just means that older humans would have the same mortality rates as younger humans. For today's first world countries, where age-related diseases are the main cause of death, that would mean dramatically longer (but still finite) life expectancies.
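As a back-of-the-envelope illustration (my numbers, not from the aging literature): with an age-independent mortality rate λ, survival is exponential and life expectancy is finite,

$$P(T > t) = e^{-\lambda t}, \qquad E[T] = \frac{1}{\lambda},$$

so holding everyone at a young adult's mortality rate of roughly 0.001/year would give a life expectancy on the order of a thousand years - dramatically longer, but still finite.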

One specific way this vision differs from LEV: LEV inherently depends on continuous progress, on new research constantly removing mortality sources. Antiaging, on the other hand, will some day be done: once we've nailed down the key mechanisms of human aging, and figured out how to correct them, that's it. Older humans can be reset to a "younger" biological state, mortality curves will be independent of age, job done and we all go research something else. This makes antiaging mostly a technical problem, whereas LEV is as much economic as technical.

Comment by johnswentworth on [Answer] Why wasn't science invented in China? · 2019-04-24T06:31:19.243Z · score: 15 (7 votes) · LW · GW

"What made the ancient Greeks so generative? It seems they founded the Western philosophical and scientific traditions, but what led to their innovativeness?"

Hypothesis: they weren't any more innovative than anyone else; their language and culture just got transmitted over a much larger area. In the wake of Alexander's conquests, Greek became the working language of government bureaucrats across most of the Middle East. It quickly turned into a status symbol and a credential, sort of like a college degree today. Speaking, reading and writing Greek automatically put you into the educated social class. A working knowledge of Greek culture followed along naturally with the language.

Without Alexander's conquests, I very much doubt that Aristotle's name would be at all known today.

Comment by johnswentworth on Generating a novelty scale · 2019-04-21T18:12:18.877Z · score: 3 (2 votes) · LW · GW

A general rule that I try to follow is "never write something which someone else has already written better". Rather than give a numerical scale, I'll list a few distinct ways that pieces can satisfy this rule. I'll give examples of each from my own writing, with the caveat that they are not necessarily very good pieces - just examples of my own reasoning regarding their novelty.

Note that these are ways that a piece of writing can be novel, not guarantees that nobody has ever written the same thing before.

Side note: if using a numerical scale, I worry about confusing novelty with importance - the example scale in the OP seems to mix the two. Perhaps a better approach would be to give handles for several different ways things can be novel, and then use those as tags?

Comment by johnswentworth on Declarative Mathematics · 2019-04-19T16:44:53.775Z · score: 2 (1 votes) · LW · GW

Perhaps the difference is what you're imagining as "under the hood". Nobody wants to think about the axiom of choice when solving a differential equation.

Comment by johnswentworth on The Simple Solow Model of Software Engineering · 2019-04-11T16:45:45.694Z · score: 8 (4 votes) · LW · GW

Possible point of confusion: equilibrium does not imply static equilibrium.

If a firm can't find someone to maintain their COBOL accounting software, and decides to scrap the old mainframe and have someone write a new piece of software with similar functionality but on modern infra, then that's functionally the same as replacement due to depreciation.

If that sort of thing happens regularly, then we have a dynamic equilibrium. As an analogy, consider the human body: all of our red blood cells are replaced every couple of months, yet the total number of red blood cells is in equilibrium. Replacement balances removal. Most cell types in the human body are regularly replaced this way, at varying timescales.

That's the sort of equilibrium we're talking about here. It's not that the same software sticks around needing maintenance forever; it's that software is constantly repaired or replaced, but mostly provides the same functionality.
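For concreteness, this is the same steady-state logic as the Solow model's own capital equation (standard textbook form):

$$\frac{dK}{dt} = sY - \delta K, \qquad K^* = \frac{sY}{\delta}$$

Investment sY balances depreciation δK at the steady state: the stock is constant even though the individual assets - red blood cells, lines of code - turn over continually.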

Comment by johnswentworth on The Simple Solow Model of Software Engineering · 2019-04-10T23:02:40.833Z · score: 6 (3 votes) · LW · GW

Yeah, in retrospect I should have been more clear about that. Thanks for drawing attention to it, other people probably interpreted it the same way you did.

Comment by johnswentworth on The Simple Solow Model of Software Engineering · 2019-04-10T21:17:39.438Z · score: 4 (2 votes) · LW · GW

Sounds like you're mostly talking about ops, which is a different beast.

An example from my previous job, to illustrate the sort of things I'm talking about: we had a mortgage app, so we called a credit report api, an api to get house data from an address, and an api to pull current pricing from the mortgage securities market (there were others, but those three were the most important). Within a six-month span, the first two apis made various small breaking changes to the format returned, and the third was shut down altogether and had to be switched to a new service.

(We also had the whole backend setup on Kubernetes, and breaking changes there were pretty rare. But as long as the infrastructure is working, it's tangential to most engineers' day-to-day work; the bulk of the code is not infrastructure/process related. Though I suppose we did have a slew of new bugs every time anything in the stack "upgraded" to a new version.)

Comment by johnswentworth on The Simple Solow Model of Software Engineering · 2019-04-10T17:49:35.094Z · score: 6 (3 votes) · LW · GW

Good points, this gets more into the details of the relevant models. The short answer is that capital equilibrates on a faster timescale than growth happens.

About a year ago I did some research into where most capital investments in the US end up - i.e. what the major capital sinks are. The major items are:

  • infrastructure: power grid, roads, railroads, data transmission, pipelines, etc.
  • oil wells
  • buildings (both residential and commercial)
  • vehicles

Most of the things on that list need constant repair/replacement, and aren't expanding much over time. The power grid, roads and other infrastructure (excluding data transmission) currently grow at a similar rate to the population, whereas they need repair/replacement at a much faster rate - so most of the invested capital goes to repair/replacement. Same for oil wells: shale deposits (which absorbed massive capital investments over the past decade) see well production drop off sharply after about two years. After that, they get replaced by new wells nearby. Vehicles follow a similar story to infrastructure: total number of vehicles grows at a similar rate to the population, but they wear out much faster than a human lifetime, so most vehicle purchases are replacements of old vehicles.

Now, this doesn't mean that people "prioritize maintenance above new stuff"; replacement of an old capital asset serves the same economic role as repairing the old one. But it does mean that capital mostly goes to repair/replace rather than growth.

Since capital equilibrates on a faster timescale than growth, growth must be driven by other factors - notably innovation and population growth. In the context of a software company, population growth (i.e. growing engineer headcount) is the big one. Few companies can constantly add new features without either adding new engineers or abandoning old products/features. (To the extent that companies abandon old products/features in order to develop new ones, that would be economic innovation, at least if there's net gain.)

Comment by johnswentworth on The Simple Solow Model of Software Engineering · 2019-04-10T05:32:28.567Z · score: 2 (1 votes) · LW · GW

Yeah, I used to have conversations like this all the time with my boss. Practically anything in an economics textbook can be applied to management in general, and software engineering in particular, with a little thought. Price theory and coordination games were most relevant to my previous job (at a mortgage-tech startup).

The Simple Solow Model of Software Engineering

2019-04-08T23:06:41.327Z · score: 26 (10 votes)
Comment by johnswentworth on How good is a human's gut judgement at guessing someone's IQ? · 2019-04-07T16:09:50.997Z · score: 2 (1 votes) · LW · GW

Yeah, sorry, I should have been more clear there. The mechanisms which trade off immune strength against other things are the underlying cause. Testosterone levels in males are a good example - higher testosterone increases attractiveness, physical strength, and spatial-visual reasoning, but it's an immune suppressor.

Comment by johnswentworth on Asymptotically Unambitious AGI · 2019-04-02T03:27:21.396Z · score: 3 (2 votes) · LW · GW

We cannot "prove" that something is physically impossible, only that it is impossible under some model of physics. Normally that distinction would be entirely irrelevant, but when dealing with a superintelligent AI, it's quite likely to understand the physics better than we do. For all we know, it may turn out that Alcubierre drives are possible, and if so then the AI could definitely break out that way and would have an incentive to do so.

I agree that the AI is not really boxed here; it's the "myopia" that makes the difference. But one of two things should generally be true:

  • The AI doesn't want to get out of the box, in which case the box doesn't need to be secure in the first place.
  • The AI cannot get out of the box, in which case the AI doesn't need to be safe (but also won't be very useful).

This case seems like the former, so long as hacking the human is easier than getting out of the box. But that means we don't need to make the box perfect anyway.

Comment by johnswentworth on Asymptotically Unambitious AGI · 2019-04-01T16:24:38.322Z · score: 2 (1 votes) · LW · GW

Can you expand a bit on why a commitment to give a high reward won't save us? Is it a matter of the AI seeking more certainty, or is there some other issue?

Comment by johnswentworth on Asymptotically Unambitious AGI · 2019-04-01T16:23:02.467Z · score: 2 (1 votes) · LW · GW

This is pretty thorough from an engineering standpoint. I still would not at all trust such a box to hold a superintelligent AI. The unknown unknowns remain too big a factor. Just because an unknown information channel does not have any humans listening to it directly, does not mean that it can't be used to push around matter, or couple to some other channel, or otherwise leave an opening.

Also, the human and their clothes are definitely not the only potentially-useful tools in the box: there's also the computer, the computer's power supply, whatever's cooling the room, whatever's keeping the air fresh, whatever's used for input, and whatever's used for output. If any of those things were outside the room, then they would offer a side channel for information to escape. They can be buried in concrete, but they'll still need to be within the vacuums and whatnot.

In general, trying to outsmart a superintelligent AI is a Bad Idea, and building an AI-proof box definitely falls in that category. If an AI is not safe with arbitrarily large amounts of power, then it is not safe, full stop.

Comment by johnswentworth on Asymptotically Unambitious AGI · 2019-04-01T07:16:18.578Z · score: 2 (1 votes) · LW · GW

So I think this proposal is a specific instance of a general principle: wireheading acts as a safety valve. As long as the AI is prone to wireheading, and you leave it some path to wireheading, the AI will never bother with anything more difficult than that path. In this particular case, it will never do anything harder than hacking its human operator, and that's where most of the safety benefit comes from.

This kind of setup creates a direct trade-off between safety and capability: the harder it is to hack the human operator, the more likely that some other escape channel is easier - manipulating electrical circuitry to push atoms around, or some weird thing like that. The AI will become an existential threat if you send in an operator who really doesn't want to give it 100% reward, and make that operator hard to hack - then the AI has a strong incentive to gain whatever power it needs. (I definitely do not trust a door with a kill switch to keep a superintelligent AI in a box.)

My intuition says that nothing else in the proposal actually matters for safety, except the wireheading safety valve aspect. In particular, I think just giving the AI direct exploration abilities won't mess up safety, as long as the wireheading path is present and there's no "99.99% certainty is better than 99.98%"-type failure mode.

Comment by johnswentworth on Asymptotically Unambitious AGI · 2019-04-01T02:03:05.802Z · score: 2 (1 votes) · LW · GW

"The reward belongs to a finite set of rationals between 0 and 1."

Once the AI becomes sufficiently powerful, it is definitely going to hack the operator any time it expects the operator to give a reward less than 1. So the operator's input is really binary, at least once the AI has learned an accurate model. Given that, why allow non-binary rewards at all? Is it just supposed to provide faster learning early on?

Along similar lines: once the AI has learned an accurate model, why would we expect it to ever provide anything useful at all, rather than just hacking the operator all day? Do we think that hacking the human is likely to be harder than obtaining perfect rewards every time without hacking the human? Seems like that would depend very heavily on the problem at hand, and on the operator's feedback strategy.

To put it differently: this setup will not provide a solution to any problem which is more difficult than hacking the human operator.

Comment by johnswentworth on Review of Q&A [LW2.0 internal document] · 2019-03-30T00:46:22.918Z · score: 17 (7 votes) · LW · GW

Two minor comments:

  • It would be nice for other people to be able to throw more into the bounty pool somehow.
  • A shorter-term, smaller-scale thing-to-try on the bounty front might be to make karma transferable, and let people create karma bounties. That would avoid having to deal with money.
Comment by johnswentworth on Review of Q&A [LW2.0 internal document] · 2019-03-30T00:39:53.466Z · score: 6 (3 votes) · LW · GW

Totally on board with that. The important point is eliminating risk-management overhead by (usually) only having to reward someone who contributes value, in hindsight.

Comment by johnswentworth on Review of Q&A [LW2.0 internal document] · 2019-03-30T00:12:56.166Z · score: 6 (3 votes) · LW · GW

I posted a handful of example questions I have been/would be interested in on Raemon's bounty question. I think these examples address several of the challenges in section 4:

  • All of them are questions which I'd expect many people on lesswrong to be independently interested in, and which would make great blog-post-material. So the bounties wouldn't be the sole incentive; even those who don't get the bounty are likely to derive some value from the exercise.
  • There's not really a trust/credentials problem; it would be easy for me to tell whether a given response had been competently executed. If I needed to hire someone in advance of seeing their answer, then there would be a trust problem, but the best-answer-gets-paid format mostly solves that. Even if there are zero competent responses, I've bought useful information: I've learned that the question is tougher than I thought. Also, they're all the sort of project that I expect a smart, generally-educated non-expert to be able to execute.
  • They are all motivated by deeper/vaguer questions, but answers to the questions as stated have enough value to justify themselves. Directly answering the deeper questions would not be the objective of any of them.
  • They're all sufficiently clean-cut that I wouldn't expect much feedback to be necessary mid-effort.

I see that second point as the biggest advantage of bounties over a marketplace: just paying for the best answer means I don't need to go to the effort of finding someone who's competent and motivated and so forth. I don't need to babysit someone while they work on the problem, to make sure we're always on the same page. I don't need to establish careful criteria for what counts as "finished". I can just throw out my question, declare a bounty, and move on. That's a much lower-effort investment on the asker's side than a marketplace.

In short, with a bounty system, competition between answerers solves most of the trust problems which would otherwise require lots of pre-screening and detailed contract specifications.

Bounties will also likely need to be higher to compensate for answer-side risk, but that's a very worthwhile tradeoff for those of us who have some money and don't want to deal with hiring and contracts and other forms of baby-sitting.

Comment by johnswentworth on What would you need to be motivated to answer "hard" LW questions? · 2019-03-29T21:56:32.404Z · score: 23 (5 votes) · LW · GW

I would like more concrete examples of nontrivial questions people might be interested in. Too much of this conversation is too abstract, and I worry people are imagining different things.

Toward that end, here are a few research projects I've either taken on or considered, which I would have been happy to outsource and which seem like a good fit for the format:

  • Go through the data on spending by US colleges. Look at how much is actually charged per student (including a comparison of sticker price to actual tuition), how much is spent per student, and where all the money is spent. Graph how these have changed over time, to figure out exactly which expenditures account for the rapid growth of college cost. Where is all the extra money going? (I've done this one; results here.)
  • Go through the data on aggregate financial assets held, and on real capital assets held by private citizens/public companies/the state (i.e. patents, equipment, property, buildings, etc) to find out where money invested ultimately ends up. What are the main capital sinks in the US economy? Where do marginal capital investments go? (I've also done this one, but haven't gotten around to writing it up.)
  • Go through the genes of JCVI's minimal cell, and write up an accessible explanation of the (known) functionality of all of its genes (grouping them into pathways/systems as needed). The idea is to give someone with minimal bio background a comprehensive knowledge of everything needed for bare-minimum life. Some of this will have to be speculative, since not all gene functions are known, but a closed list of known-unknowns sure beats unknown-unknowns.
  • Something like Laura Deming's longevity FAQ, but focused on the macro rather than the micro side of what's known - i.e. (macroscopic) physiology of vascular calcification and heart disease, Alzheimer's, cancer, and maybe a bit on statistical models of old-age survival rates. In general, there seems to be lots of research on the micro side, lots known on the macro side, but few-if-any well-understood mechanistic links from one to the other; so understanding both sides in depth is likely to have value.
  • An accessible explanation of Cox' Theorem, especially what each piece means. The tough part: include a few examples in which a non-obvious interpretation of a system as a probabilistic model is directly derived via Cox' Theorem. I have tried to write this at least four separate times, and the examples part in particular seems like a great exercise for people interested in embedded agency.
Comment by johnswentworth on Parable of the flooding mountain range · 2019-03-29T21:17:07.119Z · score: 5 (3 votes) · LW · GW

Reading this, I figured you were talking about local-descent-type optimization algorithms, i.e. gradient descent and variants.

From that perspective, there's two really important pieces missing from these analogies:

  • The mountaineers can presumably backtrack, at least as long as the water remains low enough
  • The mountaineers can presumably communicate

With backtracking, even a lone mountaineer can do better sometimes (and never any worse) by continuing to explore after reaching the top of a hill - as long as he keeps an eye on the water, and makes sure he has time to get back. In an algorithmic context, this just means keeping track of the best point seen, while continuing to explore.

With backtracking and communication, the mountaineers can each go explore independently, then all come back and compare notes (again keeping track of water etc), all go to the highest point found, and maybe even repeat that process. In an algorithmic context, this just means spinning off some extra threads, then taking the best result found by any of them.
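In code, the two pieces might look something like this (a rough sketch with a made-up objective function, not anything from the parable itself):

```python
# Rough sketch: "backtracking" = remember the best point ever visited;
# "communication" = independent explorers compare notes at the end.
import math
import random

def explore(f, start, steps=1000, step_size=0.1):
    """Random-walk from `start`, keeping track of the best point seen."""
    x = best_x = start
    best_f = f(x)
    for _ in range(steps):
        x += random.uniform(-step_size, step_size)  # keep exploring
        if f(x) > best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

def explore_in_parallel(f, starts):
    """All explorers go to the best point any of them found."""
    return max((explore(f, s) for s in starts), key=lambda result: result[1])

bumpy = lambda x: math.sin(5 * x) - 0.1 * x**2  # many local hills
print(explore_in_parallel(bumpy, [random.uniform(-3, 3) for _ in range(5)]))
```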

In an evolutionary context, those pieces are probably not so relevant.

Comment by johnswentworth on Declarative Mathematics · 2019-03-23T19:30:53.365Z · score: 4 (2 votes) · LW · GW

Those are great examples! That's exactly the sort of thing I see the tools currently associated with neural nets being most useful for long term - applications which aren't really neural nets at all. Automated differentiation and optimization aren't specific to neural nets, they're generic mathematical tools. The neural network community just happens to be the main group developing them.

I really look forward to the day when I can bust out a standard DE solver, use it to estimate the frequency of some stable nonlinear oscillator, and then compute the sensitivity of that frequency to each of the DE's parameters with an extra two lines of code.

Comment by johnswentworth on Declarative Mathematics · 2019-03-23T07:27:38.177Z · score: 5 (3 votes) · LW · GW

Geometric algebra is really neat, thanks for the links. I've been looking for something like that since I first encountered Pauli matrices back in quantum. I would describe it as an improved language for talking about lots of physical phenomena; that makes it a component of a potentially-better interface layer for many different mathematical frameworks. That's really the key to a successful declarative framework: having an interface layer/language which makes it easy to recognize and precisely formulate the kinds of problems the framework can handle.

I'm generally suspicious of anything combining neural nets and nonlinear DEs. As James Mickens would say, putting a numerical solver for a chaotic system in the middle of another already-notoriously-finicky-and-often-unstable-system is like asking Godzilla to prevent Mega-Godzilla from terrorizing Japan. This does not lead to rising property values in Tokyo! That said, it does seem like something along those lines will have to work inside learning frameworks sooner or later, so it's cool to see a nice implementation that puts everything under one roof.

Declarative Mathematics

2019-03-21T19:05:08.688Z · score: 60 (25 votes)
Comment by johnswentworth on Asking for help teaching a critical thinking class. · 2019-03-07T04:10:31.954Z · score: 9 (4 votes) · LW · GW

There used to be a set of Walter Lewin's physics 101 lectures on MIT OpenCourseWare; they're probably still floating around on YouTube somewhere. In the very first lecture, he explained that his grandmother used to argue people were taller lying down than standing up - y'know, because there's less weight compacting them when lying down. And of course this sounds completely ridiculous, but he does the experiment anyway: carefully measures the height of a student lying down, then standing up. On the surface, he's using this to illustrate the importance of tracking measurement uncertainty, but the ultimate message is epistemic: it turns out people really are a bit shorter standing up.

He talks about this as an example of why we need to carefully quantify uncertainty, but it's a great example for epistemological hygiene more generally. It sounds like something ridiculous and low-status to believe, a "crazy old people" sort of thing, but it's not really that implausible on its own merits - and it turns out to be true.

Anyway, besides that one example, I'd say it's generally easy to make people believe something quickly just by insinuating that the alternative hypothesis is somehow low-status, something which only weird people believe. Heck, whole scientific fields have fallen for that sort of trick for decades at a time - behaviorism, frequentism, Copenhagen interpretation... Students will likely be even more prone to it, since they're trained to tie epistemics to status: "truth" in school is whatever gets you a gold star when you repeat it back to the teacher.

Comment by johnswentworth on Unconscious Economies · 2019-02-27T17:05:54.511Z · score: 14 (6 votes) · LW · GW

Thanks for writing this. Multiple times I've looked for a compact, self-contained explanation of this idea, thinking "surely it's common knowledge within econ?".

Comment by johnswentworth on How good is a human's gut judgement at guessing someone's IQ? · 2019-02-27T01:12:31.494Z · score: 4 (2 votes) · LW · GW

It's been a few years, so I don't have sources to cite, but I remember looking into this at one point and finding that immune health during the developmental years is a major underlying cause shared by intelligence, attractiveness, physical fitness, and so forth. This makes a lot of sense from an evolutionary standpoint: infectious disease used to be the #1 killer, so immune health would be the thing which everything else traded off against.

One consequence is that things like e.g. attractiveness and intelligence actually do positively correlate, so peoples' halo-effect estimates actually do work, to some extent.

Comment by johnswentworth on Constructing Goodhart · 2019-02-14T18:37:52.423Z · score: 2 (1 votes) · LW · GW

The problem is that you invoke the idea that it's starting from something close to Pareto-optimal. But Pareto-optimal with respect to what? Pareto optimality implies a multi-objective problem, and it's not clear what those objectives are. That's why we need the whole causality framework: the multiple objectives are internal nodes of the DAG.

The standard description of overfitting does fit into the DAG model, but most of the usual solutions to that problem are specific to overfitting; they don't generalize to Goodhart problems in e.g. management.

Comment by johnswentworth on When should we expect the education bubble to pop? How can we short it? · 2019-02-10T01:29:52.651Z · score: 11 (6 votes) · LW · GW

That model works, but only by relying on irrational agents. The bubble isn't really "stable" in a game-theoretic equilibrium sense; it's made stable by assuming that some of the actors aren't rational game-theoretic agents. So it isn't a true Nash equilibrium unless you omit all those irrational agents.

The fundamental difference with a signalling arms race is that the model holds up even without any agent behaving irrationally.

That distinction cashes out in expectations about whether we should be able to find ways to profit. In a market bubble, even if it's propped up by irrational investors, we expect to be able to find ways around that liquidity problem - like shorting options or taking opposite positions on near-substitute assets. If there are irrational agents in the mix, it shouldn't be surprising to find clever ways to relieve them of their money. But if everyone is behaving rationally, if the equilibrium is a true Nash equilibrium, then we should not expect to find some clever way to do better. That's the point of equilibria, after all.

Constructing Goodhart

2019-02-03T21:59:53.785Z · score: 29 (11 votes)

From Personal to Prison Gangs: Enforcing Prosocial Behavior

2019-01-24T18:07:33.262Z · score: 81 (28 votes)

The E-Coli Test for AI Alignment

2018-12-16T08:10:50.502Z · score: 58 (22 votes)

Competitive Markets as Distributed Backprop

2018-11-10T16:47:37.622Z · score: 44 (16 votes)

Two Kinds of Technology Change

2018-10-11T04:54:50.121Z · score: 61 (22 votes)

The Valley of Bad Theory

2018-10-06T03:06:03.532Z · score: 55 (27 votes)

Don't Get Distracted by the Boilerplate

2018-07-26T02:15:46.951Z · score: 44 (22 votes)

ISO: Name of Problem

2018-07-24T17:15:06.676Z · score: 32 (13 votes)

Letting Go III: Unilateral or GTFO

2018-07-10T06:26:34.411Z · score: 22 (7 votes)

Letting Go II: Understanding is Key

2018-07-03T04:08:44.638Z · score: 12 (3 votes)

The Power of Letting Go Part I: Examples

2018-06-29T01:19:03.474Z · score: 38 (15 votes)

Problem Solving with Mazes and Crayon

2018-06-19T06:15:13.081Z · score: 121 (55 votes)

Fun With DAGs

2018-05-13T19:35:49.014Z · score: 38 (15 votes)

The Epsilon Fallacy

2018-03-17T00:08:01.203Z · score: 70 (19 votes)

The Cause of Time

2013-10-05T02:56:46.150Z · score: 0 (19 votes)

Recent MIRI workshop results?

2013-07-16T01:25:02.704Z · score: 2 (7 votes)