Posts

Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z · score: 33 (8 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 41 (10 votes)
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z · score: 65 (15 votes)
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z · score: 48 (14 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 81 (38 votes)
Steelmanning Divination 2019-06-05T22:53:54.615Z · score: 141 (57 votes)
Public Positions and Private Guts 2018-10-11T19:38:25.567Z · score: 90 (28 votes)
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z · score: 54 (22 votes)
Compact vs. Wide Models 2018-07-16T04:09:10.075Z · score: 32 (13 votes)
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z · score: 88 (21 votes)
Turning 30 2018-05-08T05:37:45.001Z · score: 75 (24 votes)
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z · score: 90 (22 votes)
LW Migration Announcement 2018-03-22T02:18:19.892Z · score: 139 (37 votes)
LW Migration Announcement 2018-03-22T02:17:13.927Z · score: 2 (2 votes)
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z · score: 6 (6 votes)
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z · score: 139 (42 votes)
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z · score: 23 (23 votes)
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z · score: 24 (24 votes)
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z · score: 8 (8 votes)
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z · score: 11 (11 votes)
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z · score: 6 (7 votes)
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z · score: 9 (9 votes)
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z · score: 12 (13 votes)
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z · score: 4 (5 votes)
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z · score: 17 (18 votes)
Articles in Main 2016-11-29T21:35:17.618Z · score: 3 (4 votes)
Linkposts now live! 2016-09-28T15:13:19.542Z · score: 27 (30 votes)
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z · score: 4 (5 votes)
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z · score: 1 (2 votes)
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z · score: 14 (15 votes)
Posting to Main currently disabled 2016-02-19T03:55:08.370Z · score: 22 (25 votes)
Upcoming LW Changes 2016-02-03T05:34:34.472Z · score: 46 (47 votes)
LessWrong 2.0 2015-12-09T18:59:37.232Z · score: 92 (96 votes)
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z · score: 1 (2 votes)
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z · score: 10 (11 votes)
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z · score: 9 (10 votes)
Meetup : Austin, TX - Schelling Day 2015-04-13T14:19:21.680Z · score: 1 (2 votes)
Sapiens 2015-04-08T02:56:25.114Z · score: 42 (36 votes)
Thinking well 2015-04-01T22:03:41.634Z · score: 28 (29 votes)
Rationality Quotes Thread April 2015 2015-04-01T13:35:48.660Z · score: 7 (9 votes)
Meetup : Austin, TX - Quack's 2015-03-20T15:12:31.376Z · score: 1 (2 votes)
Rationality Quotes Thread March 2015 2015-03-02T23:38:48.068Z · score: 8 (8 votes)
Rationality Quotes Thread February 2015 2015-02-01T15:53:28.049Z · score: 6 (6 votes)
Control Theory Commentary 2015-01-22T05:31:03.698Z · score: 18 (18 votes)
Behavior: The Control of Perception 2015-01-21T01:21:58.801Z · score: 31 (31 votes)
An Introduction to Control Theory 2015-01-19T20:50:02.624Z · score: 35 (35 votes)
Estimate Effect Sizes 2014-03-27T16:56:35.113Z · score: 1 (2 votes)
[LINK] Will Eating Nuts Save Your Life? 2013-11-30T03:13:03.878Z · score: 7 (12 votes)
Understanding Simpson's Paradox 2013-09-18T19:07:56.653Z · score: 11 (11 votes)
Rationality Quotes September 2013 2013-09-04T05:02:05.267Z · score: 5 (5 votes)

Comments

Comment by vaniver on A Critique of Functional Decision Theory · 2019-09-14T00:28:59.268Z · score: 7 (3 votes) · LW · GW
In particular, the old name for this, 'updatelessness', threw me for a loop for a while because it sounded like the dumb "don't take input from your environment" instead of the conscious "consider what impact you're having on hypothetical versions of yourself".

As a further example, consider glomarization. If I haven't committed a crime, pleading the fifth is worse than pleading innocence; however it means that when I have committed a crime, I have to either pay the costs of pleading guilty, pay the costs of lying, or plead the fifth (which will code to "I'm guilty", because I never say it when I'm innocent). If I care about honesty and being difficult to distinguish from the versions of myself who commit crimes, then I want to glomarize even before I commit any crimes.
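
A toy sketch of that tradeoff (every probability and cost below is a made-up illustration number, and the "lie" option is omitted for brevity): compare the policy "plead innocence when innocent, plead the fifth when guilty" against "always glomarize", for an agent who doesn't want its answers to reveal guilt.

```python
# Toy cost comparison for the glomarization example above. All numbers are
# illustrative assumptions; the point is only the shape of the comparison.

P_GUILTY = 0.05            # chance I end up being one of the guilty versions of me
COST_REVEALED_GUILT = 100  # cost when my answer effectively codes to "I'm guilty"
COST_SUSPICION = 2         # cost of the mild suspicion that any fifth-plea draws

def expected_cost(policy):
    """policy maps 'innocent'/'guilty' to 'plead_innocent' or 'plead_fifth'."""
    total = 0.0
    for state, p in (("innocent", 1 - P_GUILTY), ("guilty", P_GUILTY)):
        if policy[state] == "plead_fifth":
            # The fifth only codes to "guilty" if I never plead it when innocent.
            reveals_guilt = state == "guilty" and policy["innocent"] != "plead_fifth"
            total += p * (COST_REVEALED_GUILT if reveals_guilt else COST_SUSPICION)
        # pleading innocence while innocent is free
    return total

selective = {"innocent": "plead_innocent", "guilty": "plead_fifth"}
always_glomarize = {"innocent": "plead_fifth", "guilty": "plead_fifth"}

print(expected_cost(selective))         # 0.05 * 100 = 5.0
print(expected_cost(always_glomarize))  # 1.00 * 2   = 2.0: pre-committing to glomarize wins
```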

Comment by vaniver on G Gordon Worley III's Shortform · 2019-09-14T00:15:34.972Z · score: 21 (7 votes) · LW · GW

There's a dynamic here that I think is somewhat important: socially recognized gnosis.

That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on. Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.

But then there's a social question, of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don't want to respect the unintelligibility of astrologers.

So far I've been talking 'nationally' or 'globally' but I think a similar question holds locally. Do we want it to be the case that 'rationalists as a whole' think that meditators have gnosis and that this is respectable, or do we want 'rationalists as a whole' to think that any such respect is provisional or 'at individual discretion' or a mistake?

That is, when you say:

I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan).

I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).

So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to.

This feels like the more important part ("if you don't have episteme, why do you believe it?") but I think there's a nearly-as-important other half, which is something like "presenting as having respected gnosis" vs. "presenting as having unrespected gnosis." If you're like "as a doctor, it is my considered medical opinion that everyone has spirituality", that's very different from "look, I can't justify this and so you should take it with a grain of salt, but I think everyone secretly has spirituality". I don't think you're at the first extreme, but I think Duncan is reacting to signals along that dimension.

Comment by vaniver on A Critique of Functional Decision Theory · 2019-09-13T21:53:55.041Z · score: 20 (10 votes) · LW · GW

(I work at MIRI, and edited the Cheating Death in Damascus paper, but this comment wasn't reviewed by anyone else at MIRI.)

This should be a constraint on any plausible decision theory.

But this principle prevents you from cooperating with yourself across empirical branches in the world!

Suppose a good predictor offers you a fair coin flip at favorable odds (say, 2 of their dollars to one of yours). If you called correctly, you can either forgive (no money moves) or demand; if you called incorrectly, you can either pay up or back out. The predictor only responds to your demand that they pay up if they predict that you would yourself pay up when you lose, but otherwise this interaction doesn't affect the rest of your life.

You call heads, the coin comes up tails. The Guaranteed Payoffs principle says:

You're certain that you're in a world where you will just lose a dollar if you pay up, and will lose no dollars if you don't pay up. It maximizes utility conditioned on this starting spot to not pay up.

The FDT perspective is to say:

The price of winning $2 in half of the worlds is losing $1 in the other half of the worlds. You want to be the sort of agent who can profit from these sorts of bets and/or you want to take this opportunity to transfer utility across worlds, because it's net profitable.

Note that the Bomb case is one in which we condition on the 1 in a trillion trillion failure case, and ignore the 999999999999999999999999 cases in which FDT saves $100. This is like pointing at people who got into a plane that crashed and saying "what morons, choosing to get on a plane that would crash!" instead of judging their actions from the state of uncertainty that they were in when they decided to get on the plane.

This is what Abram means when he says "with respect to the prior of the decision problem"; not that the FDT agent is expected to do well from any starting spot, but from the 'natural' one. (If the problem statement is as described and the FDT agent sees "you'll take the right box" and the FDT agent takes the left box, then it must be the case that this was the unlucky bad prediction and made unlikely accordingly.) It's not that the FDT agent wanders through the world unable to determine where it is even after obtaining evidence; it's that as the FDT agent navigates the world it considers its impact across all (connected) logical space instead of just immediately downstream of itself. Note that in my coin flip case, FDT is still trying to win the reward when the coin comes up heads even though in this case it came up tails, as opposed to saying "well, every time I see this problem the coin will come up tails, therefore I shouldn't participate in the bet."
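
A toy expected-value calculation for the coin-flip bet above, assuming a perfectly accurate predictor and using the dollar amounts from the example:

```python
# Expected value of the 2:1 coin-flip bet, comparing the policy
# "pay up when I lose" against "refuse to pay when I lose".
# Assumes a perfectly accurate predictor: it only honors my demand for the
# $2 when I win if I'm the kind of agent who pays the $1 when I lose.

P_WIN = 0.5  # fair coin

def expected_value(pays_up_when_losing: bool) -> float:
    win_payoff = 2.0 if pays_up_when_losing else 0.0    # demand only honored for payers
    lose_payoff = -1.0 if pays_up_when_losing else 0.0  # refusers keep their dollar
    return P_WIN * win_payoff + (1 - P_WIN) * lose_payoff

print(expected_value(True))   # 0.5 * 2 - 0.5 * 1 = +0.5 per bet
print(expected_value(False))  # 0.0: never gains, never loses
# Guaranteed Payoffs, evaluated after seeing tails, says "don't pay";
# evaluated from the prior of the decision problem, being a payer is worth +$0.50.
```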

[I do think this jump, from 'only consider things downstream of you' to 'consider everything', does need justification and I think the case hasn't been as compelling as I'd like it to be. In particular, the old name for this, 'updatelessness', threw me for a loop for a while because it sounded like the dumb "don't take input from your environment" instead of the conscious "consider what impact you're having on hypothetical versions of yourself".]

But then, it seems to me, that FDT has lost much of its initial motivation: the case for one-boxing in Newcomb’s problem didn’t seem to stem from whether the Predictor was running a simulation of me, or just using some other way to predict what I’d do.

It seems to me like either you are convinced that the predictor is using features you can control (based on whether or not you decide to one-box) or features you can't control (like whether you're English or Scottish). If you think the latter, you two-box (because regardless of whether the predictor is rewarding you for being Scottish or not, you benefit from the $1000), and if you think the former you one-box (because you want to move the probability that the predictor fills the large box).
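
A rough expected-value sketch of that dichotomy. The $1,000 small box is from the problem as described; the $1,000,000 large box and the 0.99 predictor accuracy are the standard assumed numbers, not something given in the post:

```python
# Newcomb expected values under the two readings above.
SMALL, LARGE = 1_000, 1_000_000  # standard assumed payoffs
ACCURACY = 0.99                  # assumed predictor accuracy

def ev_prediction_depends_on_decision(one_box: bool) -> float:
    # "Features I can control": P(large box filled) tracks my actual choice.
    p_filled = ACCURACY if one_box else 1 - ACCURACY
    return p_filled * LARGE + (0 if one_box else SMALL)

def ev_prediction_fixed(one_box: bool, p_filled: float) -> float:
    # "Features I can't control" (English vs. Scottish): p_filled is whatever
    # it is, independent of my choice, so two-boxing always adds $1,000.
    return p_filled * LARGE + (0 if one_box else SMALL)

print(ev_prediction_depends_on_decision(True))   # 990,000.0 -> one-box
print(ev_prediction_depends_on_decision(False))  #  11,000.0
print(ev_prediction_fixed(True, 0.5))            # 500,000.0
print(ev_prediction_fixed(False, 0.5))           # 501,000.0 -> two-box
```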

According to me, the simulation is just a realistic way to instantiate an actual dependence between the decision I'm making now and the prediction. (Like, when we have AIs we'll actually be able to put them in Newcomb-like scenarios!) If you want to posit a different, realistic version of that, then FDT is able to handle it (and the difficulty is all in moving from the English description of the problem to the subjunctive dependency graph).

Now, because there’s an agent making predictions, the FDT adherent will presumably want to say that the right action is one-boxing.

I don't think this is right; I think this is true only if the FDT agent thinks that S (a physically verifiable fact about the world, like the lesion) is logically downstream of its decision. In the simplest such graph I can construct, S is still logically upstream of the decision; are we making different graphs?

But it’s very implausible that there’s some S such that a tiny change in its physical makeup should affect whether one ought to one-box or two-box.

I don't buy this as an objection; decisions are often discontinuous. Suppose I'm considering staying at two different hotels, one with price A and the other with price B with B<A; then construct a series of changes to A that moves it imperceptibly, and at some point my decision switches abruptly from staying at hotel B to staying at hotel A. Whenever you pass multiple continuous quantities through an argmin or argmax, you can get sudden changes.

(Or, put a more analogous way, you can imagine insurance against an event with probability p, and we smoothly vary p, and at some point our action discontinuously jumps from not buying the insurance to buying the insurance.)
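
A minimal illustration of that discontinuity (the hotel and insurance numbers are placeholders I'm choosing for the sketch): continuous changes in the inputs to an argmin, or to a threshold comparison, produce abrupt jumps in the chosen action.

```python
# Hotel choice: as price A varies continuously, the argmin jumps abruptly.
B = 100.0
for A in [99.0 + 0.25 * i for i in range(9)]:  # 99.00 ... 101.00
    choice = "hotel A" if A < B else "hotel B"
    print(f"A = {A:6.2f} -> {choice}")

# Same shape for insurance: smoothly vary the event probability p and the
# decision flips at a threshold set by premium and loss (placeholder numbers).
premium, loss = 50.0, 1000.0
for p in [i / 100 for i in range(11)]:         # 0.00 ... 0.10
    buy = p * loss > premium                   # buy iff expected loss exceeds the premium
    print(f"p = {p:.2f} -> {'buy' if buy else 'skip'} insurance")
```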

Comment by vaniver on Raemon's Scratchpad · 2019-09-01T19:26:09.879Z · score: 8 (3 votes) · LW · GW
It may take multiple years to find a group house where everyone gets along with everyone. I think it makes sense, earlier on, to focus on exploring (i.e. if you've just moved to the Bay, don't worry about getting a group house culture that is a perfect fit), but within 3 years I think it's achievable for most people to have found a group house that is good for friendship.

A thing that I have seen work well here is small houses nucleating out of large houses. If you're living in a place with >20 people for 6 months, probably you'll make a small group of friends who want similar things, and then you can found a smaller place with less risk. But of course this requires there being big houses that people can move into and out of, and that don't become the lowest-common-denominator house that people can't form friendships in because they want to avoid the common spaces.

But of course the larger the house, the harder it is to get off the ground, and a place with deliberately high churn represents even more of a risk.

Comment by vaniver on How does one get invited to the alignment forum? · 2019-08-28T05:59:02.309Z · score: 5 (2 votes) · LW · GW
I just got approved for the Alignment Forum. I don't suppose you could explain why I was approved? I had others ask me about what gets someone approved.

Basically, in the run-up to the MSFP blog post day I reviewed a bunch of the old applications and approved three or four people, if I remember correctly. The expected pathways are something like "write good comments on LW and get them promoted to the Alignment Forum" or "be someone whose name I recognize because of their AI alignment work (which I think is good)" or "come to an event where we think attendees should get membership."

Comment by vaniver on Vaniver's View on Factored Cognition · 2019-08-23T21:33:41.109Z · score: 3 (1 votes) · LW · GW

When I imagine them they are being initiated by some unit higher in the hierarchy. Basically, you could imagine having a tree of humans that is implementing a particular search process, or a different tree of humans implementing a search over search processes, with the second perhaps being more capable (because it can improve itself) but also perhaps leading to inner alignment problems.

Comment by vaniver on Vaniver's View on Factored Cognition · 2019-08-23T04:13:19.116Z · score: 6 (3 votes) · LW · GW
It wouldn't surprise me if I was similarly confused now, tho hopefully I am less so, and you shouldn't take this post as me speaking for Paul.

This post was improved some by a discussion with Evan which crystallized some points as 'clear disagreements' instead of me being confused, but I think there are more points to crystallize further in this way. It was posted tonight in the state it's in as part of MSFP 2019's blog post day, but might get edited more tomorrow or perhaps will get further elaborated in the comments section.

Comment by vaniver on Thoughts from a Two Boxer · 2019-08-23T01:37:09.710Z · score: 9 (5 votes) · LW · GW
One of my biggest open questions in decision theory is where this line between fair and unfair problems should lie.

I think the current piece that points at this question most directly is Success-First Decision Theories by Preston Greene.

At this point I am not convinced any problem where agents in the environment have access to our decision theory's source code or copies of our agent are fair problems. But my impression from hearing and reading what people talk about is that this is a heretical position.

It seems somewhat likely to me that agents will be reasoning about each other using access to source code fairly soon (if just human operators evaluating whether or not to run intelligent programs, or what inputs to give to those programs). So then the question is something like: "what's the point of declaring a problem unfair?", to which the main answer seems to be "to spend limited no free lunch points." If I perform poorly on worlds that don't exist in order to perform better on worlds that do exist, that's a profitable trade.

Which leads to this:

I disagree with this view and see Newcomb's problem as punishing rational agents.
...
My big complaint with mind reading is that there just isn't any mind reading.

One thing that seems important (for decision theories implemented by humans or embedded agents, as distinct from decision theories implemented by Cartesian agents) is whether or not the decision theory is robust to ignorance / black swans. That is, if you bake into your view of the world that mind reading is impossible, then you can be durably exploited by any actual mind reading (whereas having some sort of ontological update process or low probability on bizarre occurrences allows you to only be exploited a finite number of times).

But note the connection to the earlier bit--if something is actually impossible, then it feels costless to give up on it in order to perform better in the other worlds. (My personal resolution to counterfactual mugging, for example, seems to rest on an underlying belief that it's free to write off logically inconsistent worlds, in a way that it's not free to write off factually inconsistent worlds that could have been factually consistent / are factually consistent in a different part of the multiverse.)

Comment by vaniver on Why so much variance in human intelligence? · 2019-08-22T22:57:25.109Z · score: 47 (16 votes) · LW · GW

Four factors of some relevance:

First, humans aren't at equilibrium; as you point out, our environment has shifted much more quickly than evolution has had time to catch up with. So we should expect that many analyses that make sense at equilibrium aren't correctly describing what's happening now.

Second, while it seems like "humans are very different yet mice are all the same," this is often because it's easy to track the differences in humans but difficult to track the differences in mice. What fraction of mice become parents (a decent proxy for the primary measure of success, according to evolution)? Would it look like the core skills of being a mouse (finding food, evading predators, sociability, and so on) have variance comparable to the human variation in intelligence? What fraction of humans become parents?

Third, while we have some evidence that humans are selected for intelligence (like the whole skull/birth canal business), intelligence is just one of many traits that are useful for humans, and we don't have reason to believe this is the equilibrium that would result if intelligence were the only determinant of fitness. Consider Cochran et al on evidence for selection for intelligence for Ashkenazi Jews; they estimate that parents had perhaps a 1 IQ point edge over non-parents for the last ~500 years (with lower estimates on the heritability of intelligence having to only slightly increase that number).

Fourth, rapid population growth generally amplifies variance along dimensions that aren't heavily selected for if the population growth is accomplished in part by increasing the number of parents.

Comment by vaniver on AI Alignment Open Thread August 2019 · 2019-08-15T19:58:48.694Z · score: 5 (2 votes) · LW · GW

Basically, whether you think it's primarily related to alignment vs. rationality. (Everything on the AF is also on LW, but the reverse isn't true.) The feedback loop if you're posting too much or stuff that isn't polished enough is downvotes (or insufficient upvotes).

Comment by vaniver on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T20:37:53.295Z · score: 14 (5 votes) · LW · GW

An idea that I had later which I didn't end up saying to Ozzie at the retreat was something like a "Good Judgment Project for high schoolers", in the same way that there are math contests and programming contests and so on. I would be really interested in seeing what happens if we can identify people who would be superforecasters as adults when they're still teens or in undergrad or whatever, and then point them towards a career in forecasting / have them work together to build up the art, and this seems like a project that's "normal" enough to catch on while still doing something interesting.

Comment by vaniver on Conversation on forecasting with Vaniver and Ozzie Gooen · 2019-07-30T20:32:40.865Z · score: 10 (3 votes) · LW · GW
(there wasn't a clear case for how this would happen AFAICT, just 'i dunno neural net magic might be able to help.' I don't expect neural-net magic to help here in the next 10 years but I could see it helping in the next 20 or 30. I'm not sure if it happens much farther in advance than "actual AGI" though)

I thought Ozzie's plan here was closer to "if you have a knowledge graph, you can durably encode a lot of this in ways that transfer between questions", and you can have lots of things where you rapidly build out a suite of forecasts with quantifiers and pointers. I thought "maybe NLP will help you pick out bad questions" but I think this is more "recognizing common user errors" than it is "understanding what's going on."

Comment by vaniver on Nutrition is Satisficing · 2019-07-17T22:59:17.340Z · score: 6 (4 votes) · LW · GW

Both low-carb and keto are improved by eating lots of leafy green vegetables, I think. The main question is something like "are you getting more sugar or fiber out of this?". Plant-free diets, of course, recommend against eating vegetables.

Comment by vaniver on Why did we wait so long for the bicycle? · 2019-07-17T22:50:28.884Z · score: 9 (4 votes) · LW · GW
A quick google search gave me an estimate of a 300:1 cost ratio of iron to silver (1 lb iron costing ~300 lb silver) in the 14th century.

You have it inverted; your link says:

That is, a (pure!) silver pound would buy 300 pounds of iron or steel.

Comment by vaniver on What are we predicting for Neuralink event? · 2019-07-17T19:04:33.747Z · score: 8 (3 votes) · LW · GW

The first half of #1 I got right, but I think the second half was more wrong than right. While this might be fast enough to be useful in a crisis, it looks like the design is focused more on getting useful information out of regions rather than the 'gross functionality' target I mentioned there.

I think the big result here was that they came up with a way to do deep insertion of wires designed for biocompatibility and longevity, which is impressive and along a dimension I wasn't tracking too much in my prediction. In retrospect, I might have updated too much on the article I read beforehand, which gave me the sense that this was closer to 'a medical startup that got Musk's money' than 'the thing Musk said he was trying to do, which will try to be useful along the way', which is what the white paper looks more like.

Comment by vaniver on What are we predicting for Neuralink event? · 2019-07-17T18:54:30.592Z · score: 3 (1 votes) · LW · GW

White paper they released.

Comment by vaniver on Commentary On "The Abolition of Man" · 2019-07-17T02:54:02.763Z · score: 6 (2 votes) · LW · GW

I don't have a solid sense of this yet, in large part because of how much of it is experiential.

I think I would count the 5-second level as gesturing in this direction; I also note the claim that HPMOR lets people 'experience' content from the Sequences instead of just read it. Some friends who did (old-style) boxing described it as calibrating their emotional reactions to danger and conflict in a way that seems related.

I've been experimenting with conceptualizing some of my long-standing dilemmas as questions of the form "does this desire have merit?" as opposed to something closer to "should I do A or B?", but it's too soon to see if that's the right approach.

Comment by vaniver on Commentary On "The Abolition of Man" · 2019-07-15T21:32:39.988Z · score: 5 (2 votes) · LW · GW

See also: Roles are Martial Arts for Agency.

Comment by vaniver on What are we predicting for Neuralink event? · 2019-07-15T17:46:17.834Z · score: 5 (2 votes) · LW · GW

Length of the control vector seems important; there's lots of ways to use gross signals to control small vectors that don't scale to controlling large vectors. Basically, you could imagine that question as something like "could you dance with it?" (doable in 2014) or "could you play a piano with it?" (doable in 2018), both of which naively seem more complicated than an (x,y) pair (at least, when you don't have visual feedback).

Comment by vaniver on What are we predicting for Neuralink event? · 2019-07-14T17:54:38.851Z · score: 21 (9 votes) · LW · GW

I predict with moderate confidence that we will not see:

  • 'Augmented reality'-style overlays or video beamed directly to the visual cortex.
  • Language output (as text or audio or so on) or input.
  • Pure tech or design demos without any demonstrations or experiments with real biology.

I predict with weak confidence that we won't see results in humans. (This prediction is stronger the more invasive the results we're seeing; a superior EEG they could show off in humans, but repair or treatment of strokes will likely only be in mice.)

(Those strike me as the next milestones along the 'make BCIs that are useful for making top performers higher performing' dimension, which seems to be Musk's long-term vision for Neuralink.)

They've mostly been focusing on medical applications. So I predict we will see something closer to:

  • High-spatial-fidelity brain monitoring (probably invasive?), intended to determine gross functionality of different regions (perhaps useful in conjunction with something like ultrasound to do targeted drug delivery for strokes).
  • Neural prostheses intended to replace the functionality of single brain regions that have been destroyed. (This seems more likely for regions that are somehow symmetric or simple.)
  • Results in rats or mice.

I notice I wanted to put 'dexterous motor control' on both lists, so I'm somehow confused; it seems like we already have prostheses that perform pretty well based on external nerve sites (like reading off what you wanted to do with your missing hand from nerves in your arm) but I somehow don't expect us to have the spatial precision or filtering capacity to do that in the brain. (And also it just seems much riskier to attach electrodes internally or to the spinal cord than at an external site, making it unclear why you would even want that.) The main question here for me is something closer to 'bandwidth', where it seems likely you can pilot a drone using solely EEG if the thing you're communicating is closer to "a location that you should be at" than "how quickly each of the four rotors should be spinning in what direction." But we might have results where rats have learned how to pilot drones using low-level controls, or something cool like that.

Comment by vaniver on The AI Timelines Scam · 2019-07-12T05:08:34.364Z · score: 19 (7 votes) · LW · GW

Specifically, 'urgent' is measured by the difference between the time you have and the time the task will take. If I need the coffee to be done in 15 minutes and the bread to be done in an hour, but if I want the bread to be done in an hour I need to preheat the oven now (whereas the coffee only takes 10 minutes to brew start to finish), then preheating the oven is urgent whereas brewing the coffee has 5 minutes of float time. If I haven't started the coffee in 5 minutes, then it becomes urgent. See critical path analysis and Gantt charts and so on.
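
A small sketch of that calculation, using the numbers from the coffee-and-bread example: a task's float is its deadline minus its remaining duration, and it becomes urgent when the float hits zero.

```python
# Float ("slack") for the coffee-and-bread example above: how long each task
# can wait, right now, before it becomes urgent.
tasks = {
    # name: (deadline_minutes_from_now, duration_minutes)
    "preheat oven + bake bread": (60, 60),  # bread due in an hour, needs the full hour
    "brew coffee":               (15, 10),  # coffee due in 15 minutes, takes 10
}

for name, (deadline, duration) in tasks.items():
    slack = deadline - duration  # minutes of float before the task is on the critical path
    status = "urgent now" if slack <= 0 else f"{slack} minutes of float"
    print(f"{name}: {status}")
# preheat oven + bake bread: urgent now
# brew coffee: 5 minutes of float
```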

This might be worth a post? It feels like it'd be low on my queue but might also be easy to write.

Comment by vaniver on The AI Timelines Scam · 2019-07-12T04:58:45.570Z · score: 32 (7 votes) · LW · GW

I mostly agree with your analysis; especially the point about 1 (that the more likely I think my thoughts are to be wrong, the lower the cost of sharing them).

I understand that there are good reasons for discussions to be private, but can you elaborate on why we'd want discussions about privacy to be private?

Most examples here have the difficulty that I can't share them without paying the costs, but here's one that seems pretty normal:

Suppose someone is a student and wants to be hired later as a policy analyst for governments, and believes that governments care strongly about past affiliations and beliefs. Then it might make sense for them to censor themselves in public under their real name because of potential negative consequences of things they said when young. However, any statement of the form "I specifically want to hide my views on X" made under their real name has similar possible negative consequences, because it's an explicit admission that the person has something to hide.

Currently, people hiding their unpopular opinions to not face career consequences is fairly standard, and so it's not that damning to say "I think this norm is sensible" or maybe even "I follow this norm," but it seems like it would have been particularly awkward to be the first person to explicitly argue for that norm.

Comment by vaniver on The AI Timelines Scam · 2019-07-12T02:01:11.539Z · score: 22 (9 votes) · LW · GW

Do we have those generally trusted arbiters? I note that it seems like many people who I think of as 'generally trusted' are trusted because of some 'private information', even if it's just something like "I've talked to Carol and get the sense that she's sensible."

Comment by vaniver on The AI Timelines Scam · 2019-07-11T16:52:12.759Z · score: 59 (16 votes) · LW · GW

[Note: this, and all comments on this post unless specified otherwise, is written with my 'LW user' hat on, not my 'LW Admin' or 'MIRI employee' hat on, and thus is my personal view instead of the LW view or the MIRI view.]

As someone who thinks about AGI timelines a lot, I find myself dissatisfied with this post because it's unclear what "The AI Timelines Scam" you're talking about actually is, and I'm worried that if I poke at the bits it'll feel like a motte and bailey: it seems quite reasonable to me that 73% of tech executives thinking the singularity will arrive in <10 years is probably just inflated 'pro-tech' reasoning, but it seems quite unreasonable to suggest that strategic considerations about dual-use technology should be discussed openly (or should be discussed openly because tech executives have distorted beliefs). It also seems like there's an argument for weighting urgency in planning that could lead to 'distorted' timelines while being a rational response to uncertainty.

On the first point, I think the following might be a fair description of some thinkers in the AGI space, but don't think this is a fair summary of MIRI (and I think it's illegible, to me at least, whether you are intending this to be a summary of MIRI):

This bears similarity to some conversations on AI risk I've been party to in the past few years. The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it's safe, because Others won't make sure it's safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don't get the wrong ideas. (Generally, these claims have little empirical/rational backing to them; they're based on scary stories, not historically validated threat models)

I do think it makes sense to write more publicly about the difficulties of writing publicly, but there's always going to be something odd about it. Suppose I have 5 reasons for wanting discussions to be private, and 3 of them I can easily say. Discussing those three reasons will give people an incomplete picture that might seem complete, in a way that saying "yeah, the sum of factors is against" won't. Further, without giving specific examples, it's hard to see which of the ones that are difficult to say you would endorse and which you wouldn't, and it's not obvious to me legibility is the best standard here.

But my simple sense is that openly discussing whether or not nuclear weapons were possible (a technical claim on which people might have private information, including intuitions informed by their scientific experience) would have had costs and it was sensible to be secretive about it. If I think that timelines are short because maybe technology X and technology Y fit together neatly, then publicly announcing that increases the chances that we get short timelines because someone plugs together technology X and technology Y. It does seem like marginal scientists speed things up here.

Now, I'm paying a price here; it may be the case that people have tried to glue together technology X and technology Y and it won't work. I think private discussions on this are way better than no discussions on this, because it increases the chances that those sorts of crucial facts get revealed. It's not obvious that public discussions are all that much better on these grounds.


On the second point, it feels important to note that the threshold for "take something seriously" is actually quite small. I might think that the chance that I have Lyme disease is 5%, and yet that motivates significant action because of hugely asymmetric cost considerations, or rapid decrease in efficacy of action. I think there's often a problem where someone 'has short timelines' in the sense that they think 10-year scenarios should be planned about at all, but this can be easily mistaken for 'they think 10-year scenarios are most likely' because often if you think both an urgent concern and a distant concern are possible, almost all of your effort goes into the urgent concern instead of the distant concern (as sensible critical-path project management would suggest).
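
A toy version of the Lyme-disease point, with made-up costs (the 5% is from the example above; the dollar figures are purely illustrative):

```python
# Illustrative numbers only: 5% chance of Lyme disease, acting (testing and
# early treatment) is cheap, not acting when you do have it is very costly.
p_disease = 0.05
cost_of_acting = 200        # assumed: test + doctor visit
cost_if_untreated = 50_000  # assumed: long-term complications

ev_act = cost_of_acting
ev_wait = p_disease * cost_if_untreated
print(ev_act, ev_wait)  # 200 vs 2500.0: a 5% chance already justifies "taking it seriously"
```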

Comment by vaniver on The AI Timelines Scam · 2019-07-11T16:45:46.426Z · score: 53 (19 votes) · LW · GW

On 3, I notice this part of your post jumps out to me:

Of course, I'd have written a substantially different post, or none at all, if I believed the technical arguments that AGI is likely to come soon had merit to them

One possibility behind the "none at all" is that 'disagreement leads to writing posts, agreement leads to silence', but another possibility is 'if I think X, I am encouraged to say it, and if I think Y, I am encouraged to be silent.'

My sense is it's more the latter, which makes this seem weirdly 'bad faith' to me. That is, suppose I know Alice doesn't want to talk about biological x-risk in public because of the risk that terrorist groups will switch to using biological weapons, but I think Alice's concerns are overblown and so write a post about how actually it's very hard to use biological weapons and we shouldn't waste money on countermeasures. Alice won't respond with "look, it's not hard, you just do A, B, C and then you kill thousands of people," because this is worse for Alice than public beliefs shifting in a way that seems wrong to her.

It is not obvious what the right path is here. Obviously, we can't let anyone hijack the group epistemology by having concerns about what can and can't be made public knowledge, but also it seems like we shouldn't pretend that everything can be openly discussed in a costless way, or that the costs are always worth it.

Comment by vaniver on The Results of My First LessWrong-inspired I Ching Divination · 2019-07-09T17:22:10.349Z · score: 8 (4 votes) · LW · GW

One side note: I've been surprised by how much the presentation differed between the copy I originally read (Brian Walker's translation) and various "get I Ching readings online" sites that I've gone to over the years. It might be worth looking at a few different translations to find the one that fits you best.

It definitely makes sense to track "am I discovering anything new?", as measured by "I changed my plans" or "I explored fruitfully" or "my emotional orientation towards X improved" (instead of merely changing). It seems worth comparing to other retrospective / prospective analyses you might try; in the same way that one diet should be compared against other diets (not just on grounds of nutrition, but also enjoyment and convenience and so on).

I also attempted to track "how much of a stretch was the scenario/perspective/etc.?", where sometimes it would be right-on and other times I could kind of see it and other times my sense was "nope, that's just not resonating at all." If something is resonating too much, either you have a run of luck that's unreasonably long or you're getting prompts that aren't specific enough to be wrong. If you're trying to train the skill of discernment, you need to notice both when things are right and when they're wrong, and thinking that it's right is worthless unless sometimes you also think it's wrong.

Comment by vaniver on 87,000 Hours or: Thoughts on Home Ownership · 2019-07-06T18:40:32.649Z · score: 5 (2 votes) · LW · GW
People don't customize their houses all that much, to the degree they do it doesn't get them very good returns on well being per dollar spent, and they pay larger well being costs from the aforementioned commute and career inflexibility problems.

I feel conflicting desires here, to point out that this sometimes happens, and to worry that this is justifying a bias instead of correcting it. For example, I switched from 'wanting to rent' to 'wanting to buy' when I realized that I would benefit a lot from having an Endless Pool in my house, and that this wasn't compatible with renting (unless I could find a place that already had one, or whose owner wanted one, and so on). But the fact that this convinced me doesn't mean that most people who are convinced are correctly convinced. It might be better for me to look seriously at the price differential and decide to invest in the policy of walking to the local pool more instead; but I think money isn't actually the main thing here. (Like, for a while I thought it was better to be in Austin than in the Bay because a software engineer would earn about $10k/yr more all things considered, and then after thinking about it realized that I was happy paying $10k/yr to be in the Bay instead.)

Comment by vaniver on Causal Reality vs Social Reality · 2019-07-05T00:52:39.780Z · score: 7 (3 votes) · LW · GW
Some defensiveness is both justified and adaptive.

This seems right but tricky. That is, it seems important to distinguish 'adaptive for my situation' and 'adaptive for truth-seeking' (either as an individual or as a community), and it seems right that hostility or counterattack or so on are sometimes the right tool for individual and community truth-seeking. (Sometimes you are better off if you gag Loki: even though gagging in general is a 'symmetric weapon,' gagging of trolls is as asymmetric as your troll-identification system.) Further, there's this way in which 'social monkey'-style defenses seem like they made it harder to know (yourself, or have it known in the community) that you have validly identified the person you're gagging as Loki (because you've eroded the asymmetry of your identification system).

It seems like the hoped-for behavior is something like the following: Alice gets a vibe that Bob is being non-cooperative, Alice points out an observation that is relevant to Alice's vibe ("Bob's tone") that also could generate the same vibe in others, and then Bob either acts in a reassuring manner ("oh, I didn't mean to offend you, let me retract the point or state it more carefully") or in a confronting manner ("I don't think you should have been offended by that, and your false accusation / tone policing puts you in the wrong"), and then there are three points to track: object-level correctness, whether Bob is being cooperative once Bob's cooperation has been raised to salience, and whether Alice's vibe of Bob's intent was a valid inference.

It seems to me like we can still go through a similar script without making excuses or obfuscating, but it requires some creativity and this might not be the best path to go down.

Comment by vaniver on Causal Reality vs Social Reality · 2019-07-05T00:34:30.206Z · score: 10 (4 votes) · LW · GW

To be clear I agree with the benefits of politeness, and also think people probably *underweight* the benefits of politeness because they're less easy to see. (And, further, there's a selection effect that people who are 'rude' are disproportionately likely to be ones who find politeness unusually costly or difficult to understand, and have less experience with its benefits.)

This is one of the reasons I like an injunction that's closer to "show the other person how to be polite to you" than "deal with it yourself"; often the person who 'didn't see how to word it any other way' will look at your script and go "oh, I could have written that," and sometimes you'll notice that you're asking them to thread a very narrow needle or are objecting to the core of their message instead of their tone.

Comment by vaniver on Causal Reality vs Social Reality · 2019-07-04T20:12:50.171Z · score: 38 (9 votes) · LW · GW
I, at least, am a social monkey.

I basically don't find this compelling, for reasons analogous to No, It's not The Incentives, it's you. Yes, there are ways to establish emotional safety between people so that I can point out errors in your reasoning in a way that reduces the degree of threat you feel. But there are also ways for you to reduce the number of bucket errors in your mind, so that I can point out errors in your reasoning without it seeming like an attack on "am I ok?" or something similar.

Versions of this sort of thing that look more like "here is how I would gracefully make that same objection" (which has the side benefit of testing for illusion of transparency) seem to me more likely to be helpful, whereas versions that look closer to "we need to settle this meta issue before we can touch the object level" seem to me like they're less likely to be helpful, and more likely to be the sort of defensive dodge that should be taxed instead of subsidized.

Comment by vaniver on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-29T14:36:31.463Z · score: 5 (2 votes) · LW · GW

Not very much--the feminism chapter is 6 pages, and the neoreaction chapter is 5 pages. Both read like "look, you might have heard rumors that they're bad because of X, but here's the more nuanced version," and basically give the sort of defense that Scott Alexander would give. About feminism, he mostly brings up Scott Aaronson's Comment #171 and Scott Alexander's response to the response, Scott Alexander's explanation of why there are so few female computer programmers (because of the distribution of interests varying by sex), and the overreaction to James Damore. On neoreaction, he brings up Moldbug's posts on Overcoming Bias, More Right, and Michael Anissimov, and says 'comment sections are the worst' and 'if you're all about taking ideas seriously and discussing them civilly, people who have no other discussion partners will seek you out.'

Comment by vaniver on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-29T04:15:56.733Z · score: 5 (2 votes) · LW · GW

You mean Part 7 ("The Dark Sides"), or the ways in which the book is bad?

I thought Part 7 was well-done, overall; he asks if we're a cult (and decides "no" after talking about the question in a sensible way), has a chapter on "you can't psychoanalyze your way to the truth", and talks about feminism and neoreactionaries in a way that's basically sensible.

Some community gossip shows up, but in a way that seems almost totally fair and respects the privacy of the people involved. My one complaint, as someone responsible for the LessWrong brand, is that he refers to one piece of community gossip as 'the LessWrong baby' and discusses a comment thread in which people are unkind to the mother*, when in fact that comment thread happened on SlateStarCodex. But this is mostly the fault of the person he interviewed in that chapter, I think, who introduced that term; it's likely a sensible attempt to avoid naming the actual humans involved, which is what I've done whenever I want to refer to the gossip.

*I'm deliberately not naming the people involved, as they aren't named in the book either, and suspect it should stay that way. If you already know the story you know the search terms, and if you don't it's not really relevant.

Comment by vaniver on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-29T03:59:15.464Z · score: 10 (5 votes) · LW · GW

One of the things that I'm sad about is that the book makes no mention of LW 2.0 / the revival. (The last reference I could find was to something in early 2018, but much of the book relates to stuff happening in 2017.) We announced the transition in June 2017, but how much it had succeeded might not have been obvious then (or it may have been the sort of thing that didn't get advertised to Chivers by his in-person contacts), and so there's a chapter on the diaspora which says there's no central hub. Which is still somewhat true--I don't think LW is as much of a central hub as I want it to be--but is not true to the same extent that it was in 2016, say.

Comment by vaniver on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-28T23:27:12.621Z · score: 16 (6 votes) · LW · GW

I was pretty pleased with it, and recommended it to my parents. (Like Ajeya, I've had some difficulty giving them the full picture since I stopped working in industry.) There's a sentence on rationalists and small talk that I read out loud to several people in the office, all of whom thought it fit pretty well.

One correction: he refers several times to UCLA Berkeley, when it should just be UC Berkeley. (UCLA refers to the University of California at Los Angeles, a different university in the same UC system as Berkeley.)

Comment by vaniver on How to deal with a misleading conference talk about AI risk? · 2019-06-27T22:27:29.538Z · score: 15 (7 votes) · LW · GW

I've read the slides of the underlying talk, but not listened to it. I currently don't expect to write a long response to this. My thoughts about points the talk touches on:

  • Existential risk vs. catastrophic risk. Often, there's some question about whether or not existential risks are even possible. On slides 7 and 8 Sussman identifies a lot of reasons to think that humans cause catastrophic risks (ecological destruction could possibly kill 90% of people, but it seems much more difficult for it to kill 100% of people), and the distinction between the two is only important if you think about the cosmic endowment. But of course if we think AI is an existential threat, and we think humans make AI, then it is true that humans present an existential threat to ourselves. I also note here that Sussman identifies synthetic biology as possibly an existential risk, which raises the question of why an AI couldn't be a source of the existential risk presented by synthetic biology. (If an AI is built that wants to kill us, and that weapon is lying around, then we should be more concerned about AI because it has an opportunity.)
  • Accident risk vs. misuse risk. This article talks about it some, but the basic question is "will advanced AI cause problems because it did something no one wanted (accidents), or something bad people wanted (misuse)?". Most technical AI safety research is focused on accident risk, for reasons that are too long to describe here, but it's not crazy to be concerned about misuse risk, which seems to be Sussman's primary focus. I also think the sort of accident risks that we're concerned about require much deeper solutions than the normal sorts of bugs or accidents that one might imagine on hearing about this; the autonomous vehicle accident that occupies much of the talk is not a good testbed for thinking about what I think of as 'accident risk' and instead one should focus on something like the 'nearest unblocked strategy' article and related things.
  • Openness vs. closure. Open software allows for verifiability; I can know that lots of people have evaluated the decision-making of my self-driving car, rather than just Tesla's internal programming team. But also open software allows for copying and modification; the software used to enable drones that deliver packages could be repurposed to enable drones that deliver hand grenades. If we think a technology is 'dual use', in that it can both be used to make things better (like printing DNA for medical treatments) and worse (like printing DNA to create new viruses), we generally don't want those technologies to be open, and instead have carefully monitored access to dissuade improper use.
  • Solving near-term problems vs. long-term problems. Many people working on technical AI safety focus on applications with immediate uses, like the underlying math for how autonomous vehicles might play nicely with human drivers, and many others focus on research that will need to be done before we can safely deploy advanced artificial intelligence. Both of these problems seem real to me, and I wouldn't dissuade someone from working on near-term safety work (especially if the alternative is that they do capabilities work!). I think that the 'long-term' here is measured in "low numbers of decades" instead of "low numbers of centuries," and so it might be a mistake to call it 'long-term,' but the question of how to do prioritization here is actually somewhat complicated, and it seems better if we end up in a world where people working on near-term and long-term issues see each other as collaborators and allies instead of competitors for a limited supply of resources or attention.
Comment by vaniver on How to deal with a misleading conference talk about AI risk? · 2019-06-27T22:01:52.060Z · score: 27 (11 votes) · LW · GW

I've given responses before where I go into detail about how I disagree with some public presentation on AI; the primary example is this one from January 2017, which Yvain also responded to. Generally this is done after messaging the draft to the person in question, to give them a chance to clarify or correct misunderstandings (and to be cooperative instead of blindsiding them).

I generally think it's counterproductive to 'partially engage' or to be dismissive; for example, one consequence of XiXiDu's interviews with AI experts was that some of them (who received mostly dismissive remarks in the LW comments) came away with the impression that people interested in AI risk were jerks who aren't really worth engaging with. As another example, I might think someone is confused if they think climate change is more important than AI safety, but I don't think that it's useful to just tell them that they're confused or off-handedly remark that "of course AI safety is more important," since the underlying considerations (like the difference between catastrophic risks and existential risks) are actually non-obvious.

Comment by vaniver on Embedded Agency: Not Just an AI Problem · 2019-06-27T01:53:18.119Z · score: 31 (7 votes) · LW · GW

See here.

[As a side note, I notice that the habit of "pepper things with hyperlinks whenever possible" seems to be less common on modern LW than it was on old LW, but I think it was actually a pretty great habit and I'd like to see more of it.]

Comment by vaniver on Research Agenda in reverse: what *would* a solution look like? · 2019-06-26T23:17:52.865Z · score: 14 (5 votes) · LW · GW

In my experience, people mostly haven't had the view of "we can just do CEV, it'll be fine" and instead have had the view of "before we figure out what our preferences are, which is an inherently political and messy question, let's figure out how to load any preferences at all."

It seems like there needs to be some interplay here--"what we can load" informs "what shape we should force our preferences into" and "what shape our preferences actually are" informs "what loading needs to be capable of to count as aligned."

Comment by vaniver on Jordan Peterson on AI-FOOM · 2019-06-26T22:10:51.824Z · score: 32 (11 votes) · LW · GW

Yeah, it's sort of awkward that there are two different things one might want to talk about with FOOM: the idea of recursive self improvement in the typical I.J. Good sense, and the "human threshold isn't special and can be blown past quickly" idea. AlphaZero being able to hit the superhuman level at Go after 3 days of training, and doing so only a year or two after any professional Go player was defeated by a computer, feels relevant to the second thing but not the first (and is connected to the 'fleets of cars will learn very differently' thing Peterson is pointing at).

[And the two actually are distinct; RSI is an argument for 'blowing past humans is possible' but many 'slow takeoff' views look more like "RSI pulls humans along with it" than "things look slow to a Martian," and there's ways to quickly blow past humans that don't involve RSI.]

Comment by vaniver on What does the word "collaborative" mean in the phrase "collaborative truthseeking"? · 2019-06-26T18:23:26.108Z · score: 21 (6 votes) · LW · GW

If "collaborative" is qualifying truth-seeking, perhaps we can see it more easily by contrast with non-collaborative truthseeking. So what might that look like?

  • I might simply be optimizing for the accuracy of my beliefs, instead of whether or not you also discover the truth.
  • I might be optimizing competitively, where my beliefs are simply judged on whether they're better than yours.
  • I might be primarily concerned about learning from the environment or from myself as opposed to learning from you.
  • I might be following only my interests, instead of joint interests.
  • I might be behaving in a way that doesn't incentivize you to point out things useful to me, or discarding clues you provide, or in a way that fails to provide you clues.

This suggests collaborative truthseeking is done 1) for the benefit of both parties, 2) in a way that builds trust and mutual understanding, and 3) in a way that uses that trust and mutual understanding as a foundation.

There's another relevant contrast, where we could look at collaborative non-truthseeking, or contrast "collaborative truthseeking" as a procedure with other procedures that could be used (like "allocating blame"), but this one seems most related to what you're driving at.

Comment by vaniver on Jordan Peterson on AI-FOOM · 2019-06-26T18:03:49.695Z · score: 25 (11 votes) · LW · GW

YouTube's transcript (with significant editing by me, mostly to clean and format):

Now the guys that are building the autonomous cars, they don't think they're building autonomous cars. They know perfectly well what they're doing. They're building fleets of mutually intercommunicating autonomous robots and each of them will to be able to teach the other because their nervous system will be the same and when there's ten million of them, when one of them learns something all ten million of them will learn it at the same time. They're not gonna have to be very bright before they're very very very smart.
Because us, you know, we'll learn something. You have to imitate it, God that's hard. Or I have to explain it to you and you have to understand it and then you have to act it out. We're not connected wirelessly with the same platform, but robots they are and so once those things get a little bit smart they're not going to stop at a little bit smart for very long they're gonna be unbelievably smart like overnight.
And they're imitating the hell out of us right now too, because we're teaching them how to understand us; every second of every day the net is learning what we're like. It's watching us, it's communicating with us, it's imitating us, and it's gonna know. It already knows in some ways more about us than we know about ourselves. There's lots of reports already of people getting pregnancy ads or ads for infants, sometimes before they know they're pregnant, but often before they've told their families. The way that that happens is the net is watching what they're looking at and inferring with its artificial intelligence: maybe you're pregnant, and that's just tilting you a little bit toward interest in things that you might not otherwise be interested in. The net tracks that, then it tells you what you're after; it does that by offering an advertisement. It's reading your unconscious mind.
Well, so that's what's happening.

Comment by vaniver on How does one get invited to the alignment forum? · 2019-06-25T04:07:16.628Z · score: 10 (5 votes) · LW · GW

We've been in something of a transition period with the alignment forum, where no one was paying active attention to promoting comments or posts or adding users, but starting soon I should be doing that. The primary thing that happens when someone's an AF member is that they can add posts and comments without approval (and one's votes also convey AF karma); I expect I'll mostly go through someone's comments on AF posts and ask "would I reliably promote content like this?" (or, indeed, "have I reliably promoted this person's comments on AF posts?").

Details about what sorts of comments I'll find helpful or insightful are, unfortunately, harder to articulate.

Comment by vaniver on Research Agenda v0.9: Synthesising a human's preferences into a utility function · 2019-06-20T18:35:03.151Z · score: 23 (8 votes) · LW · GW

Overall, I was pretty impressed by this; there were several points where I thought "sure, that would be nice, but obstacle X," and then the next section brought up obstacle X.

I remain sort of unconvinced that utility functions are the right type signature for this sort of thing, but I do feel convinced that "we need some sort of formal synthesis process, and a possible end product of that is a utility function."

That is, most of the arguments I see for 'how a utility function could work' go through some contorted steps. Suppose I'm trying to build a robot, and I want it to be corrigible, and I have a corrigibility detector whose type is 'decision process' to 'score'. I need to wrap that detector with a 'world state' to 'decision process' function and a 'score' to 'utility' function, and then I can hand it off to a robot that does a 'decision process' to 'world state' prediction and optimizes utility. If the robot's predictive abilities are superhuman, it can trace out whatever weird dependencies I couldn't see; if they're imperfect, then each new transformation provides another opportunity for errors to creep in. And it may be the case that this is a core part of reflective stability (because if you map through world-histories, you bring objective reality into things in a way that will be asymptotically stable with increasing intelligence) that doesn't have a ready replacement.
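
To make the type plumbing concrete, here's a minimal sketch of that composition; all of the type names and the toy 'detector' below are stand-ins I'm making up for illustration, not anything from the agenda:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Toy stand-in types, purely illustrative.
@dataclass(frozen=True)
class DecisionProcess:
    description: str

@dataclass(frozen=True)
class WorldState:
    trace: str  # pretend the state records enough to infer how it was reached

Score = float
Utility = float

def corrigibility_detector(dp: DecisionProcess) -> Score:
    """What we actually have: decision process -> score."""
    return 1.0 if "defers to the operator" in dp.description else 0.0

def infer_decision_process(state: WorldState) -> DecisionProcess:
    """First wrapper: world state -> decision process (one place errors can creep in)."""
    return DecisionProcess(description=state.trace)

def score_to_utility(score: Score) -> Utility:
    """Second wrapper: score -> utility (another place errors can creep in)."""
    return score

def utility_of_state(state: WorldState) -> Utility:
    # The composition: world state -> decision process -> score -> utility.
    return score_to_utility(corrigibility_detector(infer_decision_process(state)))

def choose(candidates: Sequence[DecisionProcess],
           predict: Callable[[DecisionProcess], WorldState]) -> DecisionProcess:
    """The robot: predict decision process -> world state, then pick the max-utility one."""
    return max(candidates, key=lambda dp: utility_of_state(predict(dp)))
```

Each function in that chain is a separate step an imperfect model has to get right, which is where the worry about accumulating errors comes from.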

I do find myself worrying that embedded agency will require dropping utility functions in a deep way that ends up connected to whether or not this agenda will work (or which parts of it will work), but I remain optimistic that you'll find out something useful along the way, and that you'll have that sort of obstacle in mind as you're working on it.

Comment by vaniver on Research Agenda v0.9: Synthesising a human's preferences into a utility function · 2019-06-20T18:02:42.275Z · score: 3 (1 votes) · LW · GW

Fixed a typo.

Comment by vaniver on Recommendation Features on LessWrong · 2019-06-17T05:45:52.623Z · score: 9 (4 votes) · LW · GW

So, I was just recommended "Plastination is Maturing and Needs Funding". I considered putting some effort into "what's the state of plastination in 2019, 7 years later?" and commenting, but hit a handful of obstacles, one of which was "is the state of plastination in 2019 long content?". Like, the relevant fund paid out its prizes at various times, and it'd take a bit more digging to figure out if the particular team in Hanson's post was the one that won, and it's not really obvious if it matters. (Suppose we discover that the prize wasn't won by that team, after the evaluation was paid for; what does that imply?)

This makes me more excited about John's idea of showing posts to users with some simultaneity, like the Sequences Reruns. It might be worth having a comment that writes up what's changed, for the sake of other people clicking on the post in 2019 who don't know where to look or aren't that committed to figuring things out, whereas it doesn't make sense to push that post into 'recent discussion' on my own (if it was randomly picked just for me).

Comment by vaniver on Some Ways Coordination is Hard · 2019-06-14T22:26:12.053Z · score: 4 (2 votes) · LW · GW

I fixed it more.

Comment by vaniver on [Answer] Why wasn't science invented in China? · 2019-06-11T20:57:31.243Z · score: 7 (3 votes) · LW · GW

Ruby, you might also want to borrow Why the West Rules--for Now from me; it focuses less on the scientific question and more on the economic and technological one (which ends up being connected), but I'm not sure it'll be all that different from Huff.

Comment by vaniver on [Answer] Why wasn't science invented in China? · 2019-06-11T20:41:53.380Z · score: 5 (2 votes) · LW · GW

It’s been asserted [source] that having Latin as a lingua franca was important for Europe's integrated market for ideas. That makes sense if scholars who otherwise speak different languages are going to be able to communicate.

But the Muslim world was much better off in this regard, with Arabic, and while China has major linguistic variation, I think it also had a 'shared language' in basically the same way Latin was a shared language for Europe.

It seems to me like the thing that's important is not so much that the market is integrated, but that there are many buyers and sellers. The best works of Chinese philosophy, as far as I can tell, come from the period of major intellectual and military competition between rival factions: the contention among the Hundred Schools of Thought. And then, after unification, the primary work available to scholars was the unified bureaucracy, which was interested in the Confucian-Legalist blend that won the unification war, and nothing else.

Comment by vaniver on Steelmanning Divination · 2019-06-08T19:32:18.249Z · score: 4 (2 votes) · LW · GW

I would imagine so, because it means you learn the cards as opposed to the sequence of cards. ("In French, chateau always follows voiture.")

Comment by vaniver on Steelmanning Divination · 2019-06-08T19:30:41.436Z · score: 5 (3 votes) · LW · GW

I mean, I think it would be more accurate to say something like "the die roll, as it's uncorrelated with features of the decision, doesn't give me any new information about which action is best," but the reason I point to CoEE (conservation of expected evidence) is that it's actually a valid introspective technique to imagine acting by coin flip or die roll and then see whether you're hoping for a particular result, which rhymes with "if you can predictably update in direction X, then you should already be there."
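
(For concreteness, the standard statement of conservation of expected evidence, with generic symbols rather than anything specific to this thread:)

$$P(H) = P(E)\,P(H \mid E) + P(\lnot E)\,P(H \mid \lnot E)$$

Your prior is already the expected value of your posterior, so if imagining the die landing on "do X" predictably makes X look better, that information was available before the roll.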