Comment by vaniver on You Have About Five Words · 2019-03-13T21:46:50.613Z · score: 3 (1 votes) · LW · GW
In the case of LessWrong, I think the core sequences are around 10,000 words, not sure how big the overall EA corpus is.

This feels like a 100x underestimate; The Sequences clocks in at over a million words, I believe, and it's not the case that only 1% of the words are core.

Comment by vaniver on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T06:55:28.119Z · score: 13 (5 votes) · LW · GW

It feels like the per-experience costs are more relevant than the lifetime costs, since you *also* have to aggregate the lifetime annoyance. "Is it worth wearing a helmet this time to avoid 2/3rds of a micromort?"

It could be the case that the "get used to it" costs are a single investment, or there are other solutions that might not be worth it for someone who can tolerate a normal helmet but are worth it for habryka.

Comment by vaniver on In My Culture · 2019-03-08T18:54:21.409Z · score: 7 (3 votes) · LW · GW
My closest answer would be something like "in my version of utopia," although maybe that's too strong?

I think this implies way too much endorsement. I often find myself editing a document and thinking "in American English, the comma goes inside the quotation marks," even though "in programming, the period goes outside the quotation marks".

Comment by vaniver on Rule Thinkers In, Not Out · 2019-03-05T04:16:20.883Z · score: 27 (7 votes) · LW · GW

When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there's a way in which the truth isn't "safe" if safety is measured by something like 'reversibility' or 'ability to continue being the way they were.' It is also often the case that one can't make a single small change, and then move on; if, say, you manage to convince a Christian that God isn't real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.

Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which case a 'reversibility' criterion is sort of ludicrous; it's not like people can undo puberty, or so on. If you present them with an alternative system, they don't need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you're in something of a double bind; it's somewhat irresponsible to break people's functioning systems without giving them a replacement, and it's somewhat creepy if you break people's functioning systems to pitch your replacement. (And since 'functioning' is value-laden, it's easy for you to think their system needs replacing.)

Comment by vaniver on Rule Thinkers In, Not Out · 2019-03-02T18:04:10.535Z · score: 21 (4 votes) · LW · GW

I think I have this skill, but I don't know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn't the primary factor, but was probably a significant one.

Another thing that's quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a 'safe' way?

Comment by vaniver on Rule Thinkers In, Not Out · 2019-02-28T19:58:05.499Z · score: 15 (4 votes) · LW · GW

He appears to have had novel ideas in his technical specialty, but his public writings are mostly about old ideas that have insufficient public defense. There, novelty isn't a virtue (while correctness is).

Comment by vaniver on Rule Thinkers In, Not Out · 2019-02-28T17:24:35.572Z · score: 29 (9 votes) · LW · GW

My sense is that his worldview was 'very sane' in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that's not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.

I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were 'original' and which weren't. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)

Comment by vaniver on Unconscious Economies · 2019-02-28T05:45:20.019Z · score: 4 (2 votes) · LW · GW

If instead you had to pay for every view (as in environments where bandwidth is expensive, such as interviewing candidates for a job), then you would do the opposite of clickbait, attempting to get people to not 'click on your content.' (Or people who didn't attempt to get their audience to self-screen would lose out because of the costs to those who did.)

Comment by vaniver on When does introspection avoid the pitfalls of rumination? · 2019-02-21T00:41:41.082Z · score: 7 (3 votes) · LW · GW
The techniques you mention may include focusing on causes and consequences, but they are very solution-oriented.

Focusing, which is an introspective technique, is explicitly not focused on solutions; it's focused on figuring out what the actual problem is (which generally is more about listening to the complaint than it is about thinking about the environment or how things could be solved). This then helps someone find a solution, but they're likely not doing that with Focusing.

Comment by vaniver on De-Bugged brains wanted · 2019-02-20T20:27:49.406Z · score: 13 (4 votes) · LW · GW
Sorry for the lack of links on this; it's stuff that's covered in the Sequences, but where exactly I'm not sure. I'll gladly strong upvote a reply to this comment linking to relevant material on these points.

I don't think it's in single posts? Like, there's the Robin Hanson post The Fallacy Fallacy, or JGWeissman's Catchy Fallacy Name Fallacy, but those are mostly about "here are specific issues with focusing on fallacies" as opposed to "and also here's how Bayesian epistemology works instead." If I were to point to a single post, it might be Science Isn't Strict Enough, which of course is about science instead of about logic, and is doing the "this is how this standard diverges from what seems to be the right standard" argument but in the opposite direction, sort of.

Comment by vaniver on Is voting theory important? An attempt to check my bias. · 2019-02-20T00:53:22.377Z · score: 15 (5 votes) · LW · GW

Like Raemon, I want to echo the point that following your intellectual curiosity is probably the best way to do research work, and generally make the most of your energy/time budget. But some specific considerations:

1. What seems important to Vaniver.

I expect that voting systems mostly won't matter for AI outcomes. It seems like the primary question is whether or not the AI system we make does anything like what we like/endorse (i.e. whether or not existential accidents happen), and the secondary question is whether or not teams coordinated to form a coalition to build such a safe system (or otherwise prevented the creation of unsafe systems). Voting seems mostly useful for aggregating preferences over scarce joint decisions in a bandwidth-sensitive way ("where should the group go to lunch?" as opposed to "what do you personally want to eat?", or "which of these four candidates should be president?" as opposed to "what are your complete views on politics?"). The coalition-building problem will likely look more like negotiation (see this paper by Critch as an example of the sort of thing that seems useful to me in that space), and the preference-satisfaction solution in the glorious transhuman future will likely look more like telling Alexa how you want your personal environment to be and not having to worry much about scarcity or joint decision-making.

It's possible that government policy will be important, and the health of public discourse will be important, but it seems quite unlikely to me that election reforms will have the desired effects in time.

---

2. Whether it's the core problem of discourse, or will be sufficient to overcome modern challenges.

It seems like the forces pushing towards political polarization are considerably stronger than just the pressures from electoral systems, and mostly have to do with communication media stuff. Basically, current media technologies push the creation and curation of media closer to the consumer, who has different (and worse) incentives than elites, which leads to a general dumbing-down and coarsening of discourse. Superior election technology seems likely to help broadly-liked centrists defeat people who manage to eke out 51% support and 49% hate, but that doesn't seem like it'll fix discussions of cultural hot spots. (Will broadly liked centrists cause American politics to be more sensible on climate change, or the weird mix of negotiations about border security, or so on?)

Figuring out what's upstream of worsening discourse and pushing on that (or seeking to create more good discourse, or so on) is probably more effective if better public conversations are actually the goal; and even if this effort helps but can't help enough, it may be better to write off the thing that it would help.

---

3. Whether or not it's important if it seems important to Vaniver.

There's a claim in Inadequate Equilibria, specifically the end of Moloch's Toolbox, which is that there are lots of problems that don't get solved because there aren't all that many people who are unbiased and will float to the problem that seems most important (the 'maximizing altruists') compared to the number of problems, and so you get problems that seem 'quite serious' but are also neglected because they're more costly than human civilization can support at present. (This dynamic is common; when I worked in industry, there were many improvements that could be made to the system that weren't being made because they weren't the most important improvement to be making at the time.)

But also this sort of meta-work has its own costs. Compare Alice, who views LessWrong on her phone and notices a bug, and then fixes the bug and submits a pull request, and then moves on, with Beatrice, who considers all the bugs on LessWrong and decides which is most important, and then fixes that one and submits a pull request. Then compare both of them with Carol, who also considers all the different projects and tries to figure out which of them is most important, which also maybe requires considering all the different metrics of project importance, which also maybe requires considering all the different decision theories, which also maybe requires...

It seems good for Alice to not pay the costs of optimizing, and just do the local improvements, especially if the alternative is that Alice doesn't make any improvements. Beatrice will do more important work, but is 'paying twice' for it, and in situations where the bugs are roughly equally important this means Beatrice is perhaps less effective than someone less reflective. I think that people who are naturally interested in this sort of maximizing altruism should do it, and people who aren't (and want to just be Alice instead) should be Alice without worrying about it too much (or trying to convince themselves that, no, they are doing the maximizing altruism thing).

Comment by vaniver on Is voting theory important? An attempt to check my bias. · 2019-02-19T20:15:47.994Z · score: 6 (3 votes) · LW · GW

You might be thinking of "And the loser is... Plurality Voting" which describes a 2010 voting systems conference, where Approval Voting ended up winning the approval vote. (I do wish they had had the experts vote under a bunch of different systems, but oh well.)

Comment by vaniver on Epistemic Tenure · 2019-02-19T20:04:57.749Z · score: 27 (7 votes) · LW · GW
I think I'm largely (albeit tentatively) with Dagon here: it's not clear that we don't _want_ our responses to his wrongness to back-propagate into his idea generation. Isn't that part of how a person's idea generation gets better?

It is important that Bob was surprisingly right about something in the past; this means something was going on in his epistemology that wasn't going on in the group epistemology, and the group's attempt to update Bob may fail because it misses that important structure. Epistemic tenure is, in some sense, the group saying to Bob "we don't really get what's going on with you, and we like it, so keep it up, and we'll be tolerant of wackiness that is the inevitable byproduct of keeping it up."

That is, a typical person should care a lot about not believing bad things, and the typical 'intellectual venture capitalist' who backs a lot of crackpot horses should likely end up losing their claim on the group's attention. But when the intellectual venture capitalist is right, it's important to keep their strategy around, even if you think it's luck or that you've incorporated all of the technique that went into their first prediction, because maybe you haven't, and their value comes from their continued ability to be a maverick without losing all of their claim on group attention.

Comment by vaniver on Reinterpreting "AI and Compute" · 2019-02-19T19:53:51.896Z · score: 4 (2 votes) · LW · GW

Link fixed, thanks!

Comment by vaniver on Greatest Lower Bound for AGI · 2019-02-06T02:55:43.436Z · score: 4 (2 votes) · LW · GW

It's a straightforward application of the Copernican principle. Of course, that is not always the best approach.

Comment by vaniver on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-05T19:54:11.026Z · score: 7 (3 votes) · LW · GW
I read this as saying something like “This paper only makes sense if facts matter, separate to values.” It’s funny to me that this sentence felt necessary to be written.

I mean, it's more something like "there's a shared way in which facts matter," right? If I mostly think in terms of material consumption by individuals, and you mostly think in terms of human dignity and relationships, the way in which facts matter for both of us is only tenuously related.

Comment by vaniver on Why is this utilitarian calculus wrong? Or is it? · 2019-02-04T06:35:39.409Z · score: 3 (1 votes) · LW · GW

I think we're using margins differently. Yes, you shouldn't expect situations with x>1 to be durable, but you should expect x>1 before every charitable donation that you make. Otherwise you wouldn't make the donation! And so x=1 is the 'money in the bank' valuation, instead of the upper bound.

Comment by vaniver on Why is this utilitarian calculus wrong? Or is it? · 2019-02-01T21:44:59.297Z · score: 3 (1 votes) · LW · GW
Wait, are you claiming that humans have moral intuitions because it maximizes global utility? Surely moral intuitions have been produced by evolution.

No, I'm claiming that moral intuitions reflect the precomputation of higher-order strategic considerations (of the sort "if I let this person get away with stealing a bike, then I will be globally worse off even though I seem locally better off").

I agree that you should expect evolution to create agents that maximize inclusive genetic fitness, which is quite different from global utility. But even if one adopts the frame that 'utilitarian calculus is the standard of correctness,' one can still use those moral intuitions as valuable cognitive guides, by directing attention towards considerations that might otherwise be missed.

Comment by vaniver on Why is this utilitarian calculus wrong? Or is it? · 2019-02-01T02:00:36.653Z · score: 17 (6 votes) · LW · GW

On first-order effects, it seems that your preference rankings are as follows:

1) You have the widget, the commune has $80, your total satisfaction is $30+80x.

2a) You have nothing, the commune has $100, your total satisfaction is $100x.

2b) You have $100, the commune has nothing, your total satisfaction is $100.

3) You have the widget, a monopoly you don't value has $80. Your total satisfaction is $30+80y.

By changing x and y, we represent your altruism to the other parties in the situation; if x is greater than 1, then you would rather give the commune money than have it yourself, but if x is above 1.5 then you'd rather just give the money to the commune than have a widget for yourself. For ys below 7/8ths, you'd rather not buy the widget. (The x and y I inferred from the question are slightly above 1 and slightly above 0, which suggests the best option is indeed 1.)
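
Spelling out the arithmetic behind those thresholds (nothing new here, just the inequalities implied by the satisfaction totals above):

```latex
\begin{align*}
\text{prefer 2a to 2b (give rather than keep):} \quad & 100x > 100 \iff x > 1 \\
\text{prefer 2a to 1 (donate rather than buy):} \quad & 100x > 30 + 80x \iff x > 1.5 \\
\text{prefer 2b to 3 (keep rather than buy from the monopoly):} \quad & 100 > 30 + 80y \iff y < 7/8
\end{align*}
```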

---

Why do humans have moral intuitions at all? I claim a major role is to represent higher-order effects as shorthand. When you see a bike you don't own, you might run the first-order calculations and think it's worth more to you than it is to whoever owns it, and so global utility is maximized by you stealing the bike. But a world in which agents reflexively don't steal bikes has other benefits to it, such that the low-theft equilibrium might have higher global utility than the high-theft equilibrium. But you can't get from the high-theft equilibrium to the low-theft equilibrium by making small Pareto improvements.

And so if you notice you have moral intuitions that rise up whenever you run the numbers and decide you shouldn't be upset that someone stole your bike, try to figure out what effects those intuitions are trying to have.

---

Why put economic transactions in a separate domain from charitable donations? There are a few related things to disentangle.

First, for you personally, it really doesn't matter much. If you would rather pay your favorite charity $100 for a t-shirt with their logo on it, even though you normally wouldn't pay $100 for a t-shirt, even though you could just give them the $100, then do it.

Second, for society as a whole, prices are an information-transmission mechanism, conveying how much caring something requires to produce, and how much people care about it being produced. Mucking with this mechanism to divert value flows generally destroys more than it creates, especially since the prices can freely fluctuate in response to changing conditions, whereas policies are stickier.

Comment by vaniver on The Relationship Between Hierarchy and Wealth · 2019-01-31T03:34:01.291Z · score: 5 (2 votes) · LW · GW
Possibly dowries have to be in cash and you don’t have liquidity.

Land dowries were common.

Dowries and inheritance are best thought of as the same thing, happening at different times; your sons have to wait until you die to come into full possession of your/their lands, but your daughters are 'dead to you' as soon as they get married. So the primary difference between sons and daughters is dynastic prestige; a son both maintains the wealth within the dynasty and accrues whatever dowry he can attract, whereas a daughter leaks wealth to another dynasty. (Indeed, when the mismatch was sufficiently large the man was typically forced to take his wife's surname as a condition of being allowed to marry her, a sort of honorary swapping of the sexes.)

Interestingly, one of the things that happens here is that dowries are much more variable than male inheritance (which either gives almost all to the eldest son, or splits it almost equally, with deliberate splits being more rare); you can just send an unattractive daughter to a convent (tho this also typically required a dowry!) while giving your more promising daughters a larger share.

Comment by vaniver on Disentangling arguments for the importance of AI safety · 2019-01-24T15:59:41.219Z · score: 3 (1 votes) · LW · GW
C. The proponents of the original arguments were misinterpreted, or overemphasised some of their beliefs at the expense of others, and actually these shifts are just a change in emphasis.

My interpretation of what happened here is that more narrow AI successes made it more convincing that one could reach ASI by building all of the components of it directly, rather than needing to build an AI that can do most of the hard work for you. If it only takes 5 cognitive modules to take over the world instead of 500, then one no longer needs to posit an extra mechanism by which a buildable system is able to reach the ability to take over the world. And so from my perspective it's mostly a shift in emphasis, with small amounts of A and B as well.

Comment by vaniver on CDT=EDT=UDT · 2019-01-22T01:26:18.114Z · score: 3 (1 votes) · LW · GW
I'm objecting to the further implication that doing this makes it not a Bayes net.

I mean, white horses are not horses, right? [Example non-troll interpretations of that are "the set 'horses' only contains horses, not sets" and "the two sets 'white horses' and 'horses' are distinct." An example interpretation that is false is "for all members X of the set 'white horses', X is not a member of the set 'horses'."]

To be clear, I don't think it's all that important to use influence diagrams instead of causal diagrams for decision problems, but I do think it's useful to have distinct and precise concepts (such that if it ever becomes important to separate the two, we can).

What is it that you want out of them being Bayes nets?

Comment by vaniver on CDT=EDT=UDT · 2019-01-22T01:12:56.110Z · score: 3 (1 votes) · LW · GW
All the nodes in the network should be thought of as grounding out in imagination, in that it's a world-model, not a world. Maybe I'm not seeing your point.

My point is that my world model contains both 'unimaginative things' and 'things like world models', and it makes sense to separate those nodes (because the latter are typically functions of the former). Agreed that all of it is 'in my head', but it's important that the 'in my head' realm contain the 'in X's head' toolkit.

Comment by vaniver on CDT=EDT=UDT · 2019-01-17T21:29:19.206Z · score: 3 (1 votes) · LW · GW
I guess, philosophically, I worry that giving the nodes special types like that pushes people toward thinking about agents as not-embedded-in-the-world, thinking things like "we need to extend Bayes nets to represent actions and utilities, because those are not normal variable nodes". Not that memoryless cartesian environments are any better in that respect.

I see where this is coming from, but I think it might also go the opposite direction. For example, my current guess of how counterfactuals/counterlogicals ground out is the imagination process; I implicitly or explicitly think of different actions I could take (or different ways math could be), and somehow select from those actions (hypotheses / theories); the 'magic' is all happening in my imagination instead of 'in the world' (noting that, of course, my imagination is being physically instantiated). Less imaginative reactive processes (like thermostats 'deciding' whether to turn on the heater or not) don't get this treatment, and are better considered as 'just part of the environment', and if we build an imaginative process out of unimaginative processes (certainly neurons are more like thermostats than they are like minds) then it's clear the 'magic' comes from the arrangement of them rather than the individual units.

Which suggests how the type distinction might be natural; places where I see decision nodes are ones where I expect to think about what action to take next (or expect some other process to think about what action to take next), or think that it's necessary to think about how that thinking will go.

Comment by vaniver on CDT=EDT=UDT · 2019-01-14T19:34:13.082Z · score: 14 (4 votes) · LW · GW
Aside: Bayes nets which are representing decision problems are usually called influence diagrams rather than Bayes nets. I think this convention is silly; why do we need a special term for that?

In influence diagrams, nodes have a type--uncertainty, decision, or objective. This gives you legibility, and makes it more obvious what sort of interventions are 'in the spirit of the problem' or 'necessary to give a full solution.' (It's not obvious from the structure of the causal network that I should set 'my action' instead of 'Omega's prediction' in Newcomb's Problem; I need to read it off the labels. In an influence diagram, it's obvious from the shape of the node.) This is a fairly small benefit, tho, and seems much less useful than the restriction on causal networks that the arrows imply causation.

[Edit] They also make it clearer how to do factorized decision-making with different states of local knowledge, especially when knowledge is downstream of earlier decisions you made; if you're trying to reason about how a simple agent should deal with a simple situation, this isn't that helpful, but if you're trying to reason about many different corporate policies simultaneously, then something influence-diagram shaped might be better.
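
As a toy illustration of the node-typing point (a minimal sketch, not any standard influence-diagram library; the node names and simplified graph are mine, and the arrows into the prediction are omitted since only the typing matters here):

```python
from enum import Enum

class NodeType(Enum):
    CHANCE = "uncertainty"
    DECISION = "decision"
    UTILITY = "objective"

# Newcomb's Problem as a typed graph: each node maps to (type, parents).
# The node types, not the labels, tell you that 'my_action' is what the
# agent gets to set, rather than 'omegas_prediction'.
newcomb = {
    "omegas_prediction": (NodeType.CHANCE, []),
    "my_action": (NodeType.DECISION, []),
    "payoff": (NodeType.UTILITY, ["omegas_prediction", "my_action"]),
}

decision_nodes = [name for name, (kind, _) in newcomb.items()
                  if kind is NodeType.DECISION]
print(decision_nodes)  # ['my_action']
```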

Comment by vaniver on Open and Welcome Thread December 2018 · 2019-01-08T23:38:42.765Z · score: 4 (2 votes) · LW · GW
Suppose there was some doubt about whether it was genuinely conscious. Wouldn't that amount to the question of whether or not it was a zombie?

No. There are a few places this doubt could be localized, but it won't be in 'whether or not zombies are possible.' By definition we can't get physical evidence about whether or not it's a zombie (a zombie is in all physical respects similar to a non-zombie, except non-zombies beam their experience to a universe causally downstream of us, where it becomes "what it is like to be a non-zombie," and zombies don't), in exactly the same way we can't get physical evidence about whether or not we're zombies. In trying to differentiate between different physical outcomes, only physicalist theories are useful.

The doubt will likely be localized in 'what it means to be conscious' or 'how to measure whether or not something is conscious' or 'how to manufacture consciousness', where one hopes that answers to one question inform the others.

Perhaps instead the doubt is localized in 'what decisions are motivated by facts about consciousness.' If there is 'something it's like to be Alexa,' what does that mean about the behavior of Amazon or its customers? In a similar way, it seems highly likely that the inner lives of non-human animals parallel ours in specific ways (and don't in others), and even if we agree exactly on what their inner lives are like we might disagree on what that implies about how humans should treat them.

Comment by vaniver on Open and Welcome Thread December 2018 · 2019-01-04T05:19:32.285Z · score: 13 (5 votes) · LW · GW
This is mostly just arguing over semantics.

If an argument is about semantics, this is not a good response. That is...

Just replace "philosophical zombie" with whatever your preferred term is for

An important part of normal human conversations is error correction. Suppose I say "three, as an even number, ..."; the typical thing to do is to silently think "probably he meant odd instead of even; I will simply edit my memory of the sentence accordingly and continue to listen." But in technical contexts, this is often a mistake; if I write a proof that hinges on the evenness of three, that proof is wrong, and it's worth flagging the discrepancy and raising it.

Technical contexts also benefit from specificity of language. If I have a term used to refer to the belief that "three is even," using that term to also refer to the belief that "three is odd" will be the source of no end of confusion. ("Threevenism is false!" "What do you mean? Of course Threevenism is true.") So if there is a technical concept that specifically refers to X, using it to refer to Y will lead to the same sort of confusion; use a different word!

That is, on the object level: it is not at all sensible to think that philosophical zombies are useful as a concept; the idea is deeply confused. Separately, it seems highly possible that people vary in their internal experience, such that some people experience 'qualia' and other people don't. If the main reason we think people have qualia is that they say that they do, and Dennett says that he doesn't, then the standard argument doesn't go through for him. Whether that difference will end up being deep and meaningful or merely cosmetic seems unclear, and more likely discerned through psychological study of multiple humans, in much the same way that the question of mental imagery was best attacked by a survey.

This variability suggests it's likely a questionable thing to use as a foundation for other theories. For example, it seems to me like it would be unfortunate if someone thought it was fine to torture some humans and not others, because "only the qualia of being tortured is bad," because it seems to me like torturing humans is likely bad for different reasons.

Comment by vaniver on Reinterpreting "AI and Compute" · 2018-12-27T18:56:16.590Z · score: 5 (3 votes) · LW · GW

Subcommunities of AI researchers. A simple concrete example of gains from trade is when everyone uses the same library or conceptual methodology, and someone finds a bug. The primary ones of interest are algorithmic gains; the new thing used to do better lipreading can also be used by other researchers to do better on other tasks (or to enhance this approach and push it further for lipreading).

Comment by vaniver on Reinterpreting "AI and Compute" · 2018-12-26T19:49:29.045Z · score: 10 (5 votes) · LW · GW

I am amused that the footnotes are as long as the actual post.

Footnote 3 includes a rather salient point:

However, if you instead think that something like the typical amount of computing power available to talented researchers is what’s most important — or if you simply think that looking at the amount of computing power available to various groups can’t tell us much at all — then the OpenAI data seems to imply relatively little about future progress.

Especially in the light of this news item from Import AI #126:

The paper obtained state-of-the-art scores on lipreading, significantly exceeding prior SOTAs. It achieved this via a lot of large-scale infrastructure, combined with some elegant algorithmic tricks. But ultimately it was rejected from ICLR, with a comment from a meta-reviewer saying ‘Excellent engineering work, but it’s hard to see how others can build on it’, among other things.

It's possible that we will see more divergence between 'big compute' and 'small compute' worlds in a way that one might expect will slow down progress (because the two worlds aren't getting the same gains from trade that they used to).

Comment by vaniver on Open and Welcome Thread December 2018 · 2018-12-26T19:37:38.314Z · score: 13 (5 votes) · LW · GW
In other words, if a philosophical zombie existed, there would likely be evidence that it was a philosophical zombie, such as it not talking about qualia. However, there are individuals who outright deny the existence of qualia, such as Daniel Dennett. Is it not impossible that individuals like Dennett are themselves philosophical zombies?

Nope, your "in other words" summary is incorrect. A philosophical zombie is not any entity without consciousness; it is an entity without consciousness that falsely perceives itself as having consciousness. An entity that perceives itself as not having consciousness (or not having qualia or whatever) is a different thing entirely.

Comment by vaniver on You can be wrong about what you like, and you often are · 2018-12-20T23:04:50.497Z · score: 3 (1 votes) · LW · GW
I definitely don't mean to imply that this is true. I personally don't think that it is.

Your perception of them stays similar when you flip the signs? ("I don't like watching TV, I only read novels" becomes "yep, that person is probably mistaken about what they want/like.")

Comment by vaniver on Reasons compute may not drive AI capabilities growth · 2018-12-20T01:21:46.079Z · score: 4 (2 votes) · LW · GW

When it comes to the 'ideas' vs. 'compute' spectrum:

It seems to me like one of the main differences (but probably not the core one?) is whether it seems predictable which ideas will work. Suppose Alice thinks that it's hard to come up with something that works, but things that look like they'll work do with pretty high probability, and suppose Bob thinks it's easy to see lots of things that might work, but things that might work rarely do; I think Alice is more likely to think we're ideas-limited (since if we had a textbook from the future, we could just code it up and train it real quick) and Bob is more likely to think we're compute-limited (since our actual progress is going to look much more like ruling out all of the bad ideas that are in between us and the good ideas, and the more computational experiments we can run, the faster that process can happen).

I tend to be quite close to the end of the 'ideas' spectrum, tho the issue is pretty nuanced and mixed.

I think one of the things that's interesting to me is not how much training time can be optimized, but 'model size'--what seems important is not whether our RL algorithm can solve a double-pendulum lightning-quick but whether we can put the same basic RL architecture into an octopus's body and have it figure out how to control the tentacles quickly. If the 'exponential effort to get linear returns' story is true, even if we're currently not making the most of our hardware, gains of 100x in utilization of hardware only turn into 2 higher steps in the return space. I think the primary thing that inclines me towards the 'ideas will drive progress' view is that if there's a method that's exponential effort to linear returns and another method that's, say, polynomial effort to linear returns, the second method should blow past the exponential one pretty quickly. (Even something that reduces the base of the exponent would be a big deal for complicated tasks.)

If you go down that route, then I think you start thinking a lot about the efficiency of other things (like how good human Go players are at turning games into knowledge) and what information theory suggests about strategies, and so on. And you also start thinking about how close we are--for a lot of these things, just turning up the resources plowed into existing techniques can work (like beating DotA) and so it's not clear we need to search for "phase change" strategies first. (Even if you're interested in, say, something like curing cancer, it's not clear whether continuing improvements to current NN-based molecular dynamics predictors, causal network discovery tools, and other diagnostic and therapeutic aids will get to the finish line first as opposed to figuring out how to build robot scientists and then putting them to work on curing cancer.)

Comment by vaniver on Reasons compute may not drive AI capabilities growth · 2018-12-20T01:12:08.056Z · score: 5 (3 votes) · LW · GW

Elaborating on my comment (on the world where training time is the bottleneck, and engineers help):

To the extent major progress and flashy results are dependent on massive engineering efforts, that this seems like this lowers the portability of advances and makes it more difficult for teams to form coalitions. [Compare to a world where you just have to glue together different conceptual advances, and so you plug one model into another and are basically done.] This also means we should think about how progress happens in other fields with lots of free parameters that are sort of optimized jointly--semiconductor manufacturing is the primary thing that comes to mind, where you have about a dozen different fields of engineering that are all constrained by each other and the joint tradeoffs are sort of nightmarish to behold or manage. [Subfield A would be much better off if we switched from silicon to germanium, but everyone else would scream--but perhaps we'll need to switch eventually anyway.] The more bloated all of these projects become, the harder it is to do fundamental reimaginings of how these things work (a favorite example of mine here is replacing matmuls in neural networks with bitshifts, also known as "you only wanted the ability to multiply by powers of 2, right?", which seems like it is ludicrously more efficient and is still pretty trainable, but requires thinking about gradient updates differently, and the more effort you've put into optimizing how you pipe gradient updates around, the harder it is to make transitions like that).
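
For concreteness on the bitshift aside, here is a toy sketch (assuming integer activations and weights constrained to powers of two; none of this is taken from the referenced work):

```python
def pow2_mul(x: int, k: int) -> int:
    """Multiply x by 2**k using a shift instead of a multiply (k >= 0)."""
    return x << k

def shift_dot(xs, ks):
    """A dot product where every 'weight' is 2**k: adds of shifted inputs."""
    return sum(pow2_mul(x, k) for x, k in zip(xs, ks))

assert shift_dot([3, 5, 7], [0, 1, 2]) == 3*1 + 5*2 + 7*4  # 41
```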

This is also possibly quite relevant to safety; if it's hard to 'tack on safety' at the end, then it's important we start with something safe and then build a mountain of small improvements for it, rather than building the mountain of improvements for something that turns out to be not safe and then starting over.

Comment by vaniver on Good arguments against "cultural appropriation" · 2018-12-18T19:27:08.541Z · score: 5 (2 votes) · LW · GW
Such CA is thought to result in diminished status and power for people in the "appropriated" culture.

I'm having a hard time separating this from the 'offense' argument that you're not including. Like, The Simpsons introduces Apu, who is Indian and works at a convenience store. Written by and voice-acted by white Americans, he's very much "Indian immigrants as seen from the outside" as opposed to "the self-representation of Indian immigrants"; as a character in a comedy show, he's often a subject of mockery.

But someone being offended by Apu is what expecting this will lead to diminished status and power for Indian immigrants to America feels like from the inside. That makes me suspect that we should feel similarly about individuals taking offense and about claims in this category of CA, but I'm curious what makes you consider them separately.

Comment by vaniver on In Defense of Finance · 2018-12-18T18:34:40.797Z · score: 14 (5 votes) · LW · GW
Are you aware of any critiques or defenses of the financial industry that are better than the level of this post?

This actually stopped me from writing a similar comment to Benquo's: I disagreed with the post, but was not really in its audience (I am, on net, probably pro-finance relative to the American public).

Like, I also thought the MBS were ridiculous for skin-in-the-game reasons, where there was widespread fraud in granting the mortgages, and a presumable contributor was that the granter was no longer on the hook for whether or not the mortgage was repaid. But more significant was that finance and government were conjoined twins, where the government could force everyone to make the same mistakes (I once went to a talk by a former bank CEO where he claimed that everyone is always in violation of some regulation, and so when the government wanted people to grant more mortgages, the regulators could just say "hey, the amount of slack we cut you will depend on how well you're meeting the president's minority lending targets"). Even more significant than that was a widespread belief that increased homeownership would cause more desirable behavior on the part of citizens, and upstream of that was widespread misunderstanding of causality (and statistics more generally; a study that showed that lenders were not biased against minorities (because their default rates were the same once you took credit score into account) was widely used to argue that lenders were biased against minorities).

Which is not the "hey, we should just increase reserve ratios" that Jacobian is responding to; he might disagree on the details ("how significant were minority borrowers to the crash, and how significant were attempts to increase minority borrowing to the erosion of lending standards?") but he's not going to disagree on the general thrust ("yes, governments should understand causality better").

That said, I think the 'free banking' academic subdiscipline is of quite good quality and highly relevant to other projects you might be interested in, but is quite hard to get to from where we are now.

Comment by vaniver on Prediction Markets Are About Being Right · 2018-12-10T23:42:07.754Z · score: 5 (2 votes) · LW · GW
What I cannot do is predict that they are wrong, and wait for events to prove me right. There is no judgment day. No profit stream. No right. No wrong.

I'm aware of one counterexample to this: Salvator Mundi was thought to be a copy of the original piece by da Vinci, was restored and authenticated as the original, and then increased in value enormously (because there are less than 20 da Vincis). But the contemporary art market is very much not about that sort of archaeological discovery or restoration, but is instead about who the artist is and who they know.

Comment by vaniver on Why should I care about rationality? · 2018-12-08T05:28:12.595Z · score: 31 (11 votes) · LW · GW

The best advice is tailored to individuals, and the best explanations are targeted at avoiding or uninstalling specific confusions, instead of just pointing at the concept. But here I think the right call is giving evidence for 'a' reader instead of TurnTrout. So, a general case for rationality:

First, by rationality I mean a focus on cognitive process rather than a specific body of conclusions or thoughts. The Way is the art that pursues a goal, not my best guess at how to achieve that goal.

Why care about cognitive process? A few factors come to mind:

1) You're stuck doing cognition, and you might want to do it better. Using your process to focus on your process can actually stabilize and improve things; see Where Recursive Justification Hits Bottom and The Lens that Sees Its Own Flaws.

2) Studying the creation of tools lets you know what tool to use when. Rather than reflexively bouncing from moment to moment, you can be deliberately doing things that you expect to help.

3) As a special case of 2, sometimes it's important to get things right on the first try. This means we can't rely on processes that require lots of samples (like empiricism) and have to instead figure out what's going on in a way that lets us do the right thing, which often also involves figuring out what sorts of cognitive work would validly give us that hope.

4) Process benefits tend to accumulate. If I expend effort and acquire food for myself today, I will be in approximately the same position tomorrow; if I expend effort and establish a system that provides me with food, I will be in a different, hopefully better position tomorrow.

Who shouldn't care about rationality? First, for any task where the correct strategy to employ is either 'obvious' or 'unintuitive but known to a tradition', then the benefits of thinking it through yourself are much lower. Second, to the extent that most rationality techniques that we know route through "think about it," the more expensive thinking is, the less useful the rationality techniques become.

Comment by vaniver on The housekeeper · 2018-12-04T18:48:06.571Z · score: 15 (6 votes) · LW · GW

Boarding houses used to be quite close to this, and I would love for the EA / Rationality community to have more of those. But also they fell out of favor for a reason (mostly legal, I think, but perhaps also increased wealth and housing stock). In particular, it seems like the person being more of the house manager (who selects guests as they desire / ultimately owns the house) than the house keeper (who is dependent on the goodwill of their fellow tenants) makes the system more sustainable / polishes some of the rough edges.

Homemakers are still around, though, and my sense is when there's a group house that has something of this flavor, it's because there's a house affordable on one or two programmer salaries that is large enough for ~8 people, and so there's a space for a spouse/boyfriend/girlfriend whose primary contribution is 'being part of the family' and 'making the space nice.' There it seems important that they're part of the family instead of a Hufflepuff recruited from the hinterlands primarily to act as a servant.

[Note that it's particularly weird to have a master-servant relationship coincide with a 'community building' role; if everyone likes Alice's parties thrown at The House, and Alice is friends with everyone at The House, it's a little weird for The House to fire Alice for not keeping the place as tidy as they like, because presumably that damages the friendships and the broader community fabric (since, say, Alice might not be a fan of The House anymore).]

Comment by vaniver on The housekeeper · 2018-12-04T18:39:18.666Z · score: 4 (2 votes) · LW · GW

When I was at Event Horizon, I was one of the people voting that we should spend more on the house manager, but also at the time about a third (maybe even half?) of residents of Event Horizon were living on runway, and so a 10% increase in rent would mean 9% less time to lift off. And with 10 people (the size of a more normal house), this just covers rent for the house manager; being able to pay them a somewhat reasonable salary looks more like a 20% or 30% increase in rent.

Comment by vaniver on The housekeeper · 2018-12-04T18:36:09.800Z · score: 3 (1 votes) · LW · GW

It's had a huge range, from about a dozen to just over twenty.

Comment by vaniver on Bodega Bay: workshop · 2018-11-27T04:19:01.952Z · score: 16 (6 votes) · LW · GW

This is the carpet, and these are the backjacks, both of which were found by Duncan.

Comment by vaniver on Humans Consulting HCH · 2018-11-26T23:00:24.651Z · score: 4 (2 votes) · LW · GW
How does the recursion bottom out? If real Hugh's response to the question is to ask the machine, then perfectly simulated Hugh's response must be the same. If real Hugh's response is not to ask the machine, then the machine remains unused.

I think there are lots of strategies here that just fail to work. For example, if Hugh passes on the question with no modification, then you build an infinite tower that never does any work.

But there are strategies that do work. For example, whenever Hugh receives a question he can answer, he does so, and whenever he receives a question that is 'too complicated', he divides it into subquestions and consults HCH separately on each subquestion, using the results of the consultation to compute the overall answer. This looks like it will terminate, so long as the answers can flow back up the pyramid. Hugh could also pass along numbers about how subdivided a question has become, or the whole stack trace so far, in case there are problems that seem like they have cyclical dependencies (where I want to find out A, which depends on B, which depends on C, which depends on A, which depends on...). Hugh could pass back upwards results like "I didn't know how to make progress on the subproblem you gave me."

For example, you could imagine attempting to prove a mathematical conjecture. The first level has Hugh looking at the whole problem, and he thinks "I don't know how to solve this, but I would know how to solve it if I had lemmas like A, B, and C." So he asks HCH to separately solve A, B, and C. This spins up a copy of Hugh looking at A, who also thinks "I don't know how to solve this, but I would if I had lemmas like Aa, Ab, and Ac." This spins up a copy of Hugh looking at Aa, who thinks "oh, this is solvable like so; here's a proof of Aa." Hugh_A is now looking at the proofs, disproofs, and indeterminates of Aa, Ab, and Ac, and now can either write their conclusion about A, or spins up new subagents to examine new subparts of the problem.

Note that in this formulation, you primarily have communication up and down the pyramid, and the communication is normally at the creation and destruction of subagents. It could end up that you prove the same lemma thousands of times across the branches of the tree, because it turned out to be useful in many different places.
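
A minimal sketch of that divide-and-conquer strategy (the helpers are hypothetical stand-ins for what Hugh actually does, specialized to a toy "sum a list" task so the sketch runs end to end; the depth counter corresponds to passing along how subdivided a question has become):

```python
def can_answer(q):
    return len(q) <= 2

def answer_directly(q):
    return sum(q)

def split_into_subquestions(q):
    mid = len(q) // 2
    return [q[:mid], q[mid:]]

def combine(q, subanswers):
    return sum(subanswers)

def hch(question, depth=0, max_depth=10):
    """Hugh consulting HCH: answer directly when possible, otherwise decompose."""
    if can_answer(question):
        return answer_directly(question)
    if depth >= max_depth:
        return "I didn't know how to make progress on the subproblem you gave me."
    # Each subquestion is handed to a fresh copy of Hugh (a recursive call here).
    subanswers = [hch(q, depth + 1, max_depth)
                  for q in split_into_subquestions(question)]
    # Answers flow back up the pyramid and are combined into an overall answer.
    return combine(question, subanswers)

print(hch(list(range(10))))  # 45
```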

Comment by vaniver on “She Wanted It” · 2018-11-20T02:02:39.134Z · score: 21 (7 votes) · LW · GW

My read of this thread is that your (Andaro's) original comment pointed at a particular subset of relationships, which are 'bad' but seem better than the alternatives to the person inside them, where the reason to trust the judgment of the person inside them is that right to exit means they will leave relationships that are worse than their alternatives. Paperclip Maximizer then pointed out that a major class of reasons people stay in abusive relationships is that their alternatives are manipulated by the abuser, either through explicit or implicit threats or attacks directed at the epistemology (such that the alternatives are difficult to imagine or correctly weigh).

I understood Paperclip Maximizer's point to be that there's a disconnect between the sort of relationships you describe in the ancestral comment and what a 'typical' abusive relationship might look like; it might be highly difficult to determine whether "right to exit" is being denied or not. (For example, in #12, the primary factor preventing exit is the pride of the person stuck in the relationship. Is that their partner blocking the exercise of that right?) If this disconnect exists as a tradeoff, such that the more a relationship involves reducing right to exit the more we suspect that relationship is (or could be) abusive, then the original comment doesn't seem germane; interpreted such that it's true, it's irrelevant, and interpreted such that it's relevant, it's untrue.

Comment by vaniver on Clickbait might not be destroying our general Intelligence · 2018-11-19T23:12:50.466Z · score: 8 (4 votes) · LW · GW

PSA: His name is spelled "Eliezer."

Suppose that different humans have different selection criteria when deciding to share a meme. ...
Nowadays, memes can specialize to focus onto tiny subsets of the population.

One difference between 'the past' and 'the present' that Eliezer doesn't mention, but which is relevant to the question of selection effects, is to what extent memes are spread by 'thought leaders' (who are typically optimizing for multiple things, and have some sense of responsibility) and to what extent memes are spread 'peer-to-peer.' Whether this improves or degrades selection on the relevant criteria obviously depends on the incentives involved, but with 'general reasonableness' it's clear to see how a pundit is incentivized to appeal to other pundits (of all camps) whereas a footsoldier is incentivized to appeal to other footsoldiers. (One common point among the base of both left and right appears to be distrust of the party elite, which is often seen as being too willing to cooperate with the other side--imagine how they might react to the party elite of a century ago, before the increased polarization!)

And so, if more and more of the political conversation becomes "signalling on Facebook pages" instead of "editorials in the national paper of record", it's clear to see how reasonableness could be modeled less, and thus adopted less.

Comment by vaniver on No Really, Why Aren't Rationalists Winning? · 2018-11-06T22:04:43.823Z · score: 9 (6 votes) · LW · GW
I think it's possible (and important) to analyze this phenomenon and see what's going on. But the point is that this will involve analyzing a phenomenon - ie truth-seeking, ie epistemic rationality, ie the thing we're good at and which is our comparative advantage - and not winning immediately.

I mostly agree with this, but want to point at something that your comment didn't really cover, that "whether to go to the homeopath or the doctor" is a question that I expect epistemic rationality to be helpful for. (This is, in large part, a question that Inadequate Equilibria was pointed towards.) [This is sort of the fundamental question of self-help, once you've separated it into "what advice should I follow?" and "what advice is out there?"]

But this requires that the question of how to evaluate strategies be framed more in terms of "I used my judgment to weigh evidence" and less in terms of "I followed the prestige" or "I compared the lengths of their articulated justifications" or similar things. A layperson in 1820 who is using the latter will wrongly pick the doctors, and a confused layperson in 2000 will wrongly pick homeopathy, but ideally a rationalist would switch from homeopathy to doctors as the actual facts on the ground change.

This doesn't mean a rationalist in 1820 should be satisfied with homeopathy; it should be known to them as a temporary plug to a hole in their map. But that also doesn't mean it's the most interesting or important hole in their map; probably then they'd be most interested in what's up with electricity. [Similarly, today I'm somewhat confused about what's going on with diet, and have some 'reasonable' guesses and some 'woo' guesses, but it's clearly not the most interesting hole in my map.]

And so my sense is a rationalist in 2018 should know what they know, and what they don't, and be scientific about things to the degree that they capture their curiosity (which relates both to 'irregularities in the map' and 'practically useful'). Which is basically how I read your comment, except that you seem more worried about particular things than I am.

Comment by vaniver on When does rationality-as-search have nontrivial implications? · 2018-11-06T17:51:01.204Z · score: 22 (7 votes) · LW · GW

This seems broadly right to me, but it seems to me like metaheuristics (in the numerical optimization sense) are practical and have a structure like the one that you're describing. Neural architecture search is the name people are using for this sort of thing in contemporary ML.

What's different between them and the sort of thing you describe? Well, for one the softening is even stronger; rather than a performance-weighted average across all strategies, it's a performance-weighted sampling strategy that has access to all strategies (but will only actually evaluate a small subset of them). But it seems like the core strategy--doing both object-level cognition and meta-level cognition about how you're doing the object-level cognition--is basically the same.

It remains unclear to me whether the right way to find these meta-strategies is something like "start at the impractical ideal and rescue what you can" or "start with something that works and build new features"; it seems like modern computational Bayesian methods look more like the former than the latter. When I think about how to describe human epistemology, it seems like computationally bounded Bayes is a promising approach (where probabilities change both by the standard updates among hypotheses that already exist, and by new operations, yet to be formalized, that add or remove hypotheses; you want to be able to capture "Why didn't you assign high probability to X?" "Because I didn't think of it; now that I have, I do."). But of course I'm using my judgment that already works to consider adding new features here, rather than having built how to think out of rescuing what I can from the impractical ideal of how to think.

Comment by vaniver on What is ambitious value learning? · 2018-11-01T16:36:44.029Z · score: 8 (4 votes) · LW · GW
although typically rewards in RL depend only on states,

Presumably this should be a period? (Or perhaps there's a clause missing pointing out the distinction between caring about history and caring about states, tho you could transform one into the other?)

Comment by vaniver on October gwern.net links · 2018-11-01T14:13:35.047Z · score: 6 (3 votes) · LW · GW

I'm glad you finally got around to watching it! I stopped watching new episodes as they were coming out around season 6, but would still catch up occasionally until about midway through season 7, where I've been stuck for a while. This seems like as good an impetus as any to make it up to the end of season 8.

One thing worth mentioning about fanfiction--which I originally read from Bad Horse but couldn't find the original source--is that ponyfic has a benefit over other source materials: you can write basically any story in the Equestria universe, enabling fanfic 'about people' rather than, say, 'about wizards' or 'about vampires' or 'about ninjas' or so on. I could more easily find his claim that fimfiction is just better than other fanfiction sites, from a UI perspective.

Comment by vaniver on On Doing the Improbable · 2018-10-28T21:01:01.300Z · score: 23 (8 votes) · LW · GW
They don’t think it’s pretty likely to succeed…

How much something is worth doing depends on the product of its success chance and its payoff, but it's not clear that anticipations of goodness scale as much as consequences of goodness do, which could lead to predictably unmotivating plans (which 'should be' motivating).

Comment by vaniver on Schools Proliferating Without Practicioners · 2018-10-27T19:01:35.160Z · score: 13 (3 votes) · LW · GW

This is what I mean when I say that presentation of Double Crux is logical, instead of probabilistic. The version of double crux that I use is generally probabilistic, and I claim is an obvious modification of the logical version.

Public Positions and Private Guts · 2018-10-11T19:38:25.567Z · score: 90 (27 votes)

Maps of Meaning: Abridged and Translated · 2018-10-11T00:27:20.974Z · score: 54 (22 votes)

Compact vs. Wide Models · 2018-07-16T04:09:10.075Z · score: 32 (13 votes)

Thoughts on AI Safety via Debate · 2018-05-09T19:46:00.417Z · score: 88 (21 votes)

Turning 30 · 2018-05-08T05:37:45.001Z · score: 69 (21 votes)

My confusions with Paul's Agenda · 2018-04-20T17:24:13.466Z · score: 90 (22 votes)

LW Migration Announcement · 2018-03-22T02:18:19.892Z · score: 139 (37 votes)

LW Migration Announcement · 2018-03-22T02:17:13.927Z · score: 2 (2 votes)

Leaving beta: Voting on moving to LessWrong.com · 2018-03-11T23:40:26.663Z · score: 6 (6 votes)

Leaving beta: Voting on moving to LessWrong.com · 2018-03-11T22:53:17.721Z · score: 139 (42 votes)

LW 2.0 Open Beta Live · 2017-09-21T01:15:53.341Z · score: 23 (23 votes)

LW 2.0 Open Beta starts 9/20 · 2017-09-15T02:57:10.729Z · score: 24 (24 votes)

Pair Debug to Understand, not Fix · 2017-06-21T23:25:40.480Z · score: 8 (8 votes)

Don't Shoot the Messenger · 2017-04-19T22:14:45.585Z · score: 11 (11 votes)

The Quaker and the Parselmouth · 2017-01-20T21:24:12.010Z · score: 6 (7 votes)

Announcement: Intelligence in Literature Prize · 2017-01-04T20:07:50.745Z · score: 9 (9 votes)

Community needs, individual needs, and a model of adult development · 2016-12-17T00:18:17.718Z · score: 12 (13 votes)

Contra Robinson on Schooling · 2016-12-02T19:05:13.922Z · score: 4 (5 votes)

Downvotes temporarily disabled · 2016-12-01T17:31:41.763Z · score: 17 (18 votes)

Articles in Main · 2016-11-29T21:35:17.618Z · score: 3 (4 votes)

Linkposts now live! · 2016-09-28T15:13:19.542Z · score: 27 (30 votes)

Yudkowsky's Guide to Writing Intelligent Characters · 2016-09-28T14:36:48.583Z · score: 4 (5 votes)

Meetup : Welcome Scott Aaronson to Texas · 2016-07-25T01:27:43.908Z · score: 1 (2 votes)

Happy Notice Your Surprise Day! · 2016-04-01T13:02:33.530Z · score: 14 (15 votes)

Posting to Main currently disabled · 2016-02-19T03:55:08.370Z · score: 22 (25 votes)

Upcoming LW Changes · 2016-02-03T05:34:34.472Z · score: 46 (47 votes)

LessWrong 2.0 · 2015-12-09T18:59:37.232Z · score: 92 (96 votes)

Meetup : Austin, TX - Petrov Day Celebration · 2015-09-15T00:36:13.593Z · score: 1 (2 votes)

Conceptual Specialization of Labor Enables Precision · 2015-06-08T02:11:20.991Z · score: 10 (11 votes)

Rationality Quotes Thread May 2015 · 2015-05-01T14:31:04.391Z · score: 9 (10 votes)

Meetup : Austin, TX - Schelling Day · 2015-04-13T14:19:21.680Z · score: 1 (2 votes)

Sapiens · 2015-04-08T02:56:25.114Z · score: 40 (35 votes)

Thinking well · 2015-04-01T22:03:41.634Z · score: 28 (29 votes)

Rationality Quotes Thread April 2015 · 2015-04-01T13:35:48.660Z · score: 7 (9 votes)

Meetup : Austin, TX - Quack's · 2015-03-20T15:12:31.376Z · score: 1 (2 votes)

Rationality Quotes Thread March 2015 · 2015-03-02T23:38:48.068Z · score: 8 (8 votes)

Rationality Quotes Thread February 2015 · 2015-02-01T15:53:28.049Z · score: 6 (6 votes)

Control Theory Commentary · 2015-01-22T05:31:03.698Z · score: 18 (18 votes)

Behavior: The Control of Perception · 2015-01-21T01:21:58.801Z · score: 31 (31 votes)

An Introduction to Control Theory · 2015-01-19T20:50:02.624Z · score: 35 (35 votes)

Estimate Effect Sizes · 2014-03-27T16:56:35.113Z · score: 1 (2 votes)

[LINK] Will Eating Nuts Save Your Life? · 2013-11-30T03:13:03.878Z · score: 7 (12 votes)

Understanding Simpson's Paradox · 2013-09-18T19:07:56.653Z · score: 11 (11 votes)

Rationality Quotes September 2013 · 2013-09-04T05:02:05.267Z · score: 5 (5 votes)

Harry Potter and the Methods of Rationality discussion thread, part 27, chapter 98 · 2013-08-28T19:29:17.855Z · score: 2 (3 votes)

Rationality Quotes August 2013 · 2013-08-02T20:59:04.223Z · score: 7 (7 votes)

Rationality Quotes July 2013 · 2013-07-02T16:21:59.219Z · score: 5 (5 votes)

Open Thread, July 1-15, 2013 · 2013-07-01T17:10:10.892Z · score: 4 (5 votes)

Harry Potter and the Methods of Rationality discussion thread, part 19, chapter 88-89 · 2013-06-30T01:22:02.743Z · score: 12 (13 votes)

Produce / Consume Ratios · 2013-04-21T18:38:06.144Z · score: 13 (17 votes)