Posts

Alex Irpan: "My AI Timelines Have Sped Up" 2020-08-19T16:23:25.348Z
Property as Coordination Minimization 2020-08-04T19:24:15.759Z
Rereading Atlas Shrugged 2020-07-28T18:54:45.272Z
A reply to Agnes Callard 2020-06-28T03:25:27.378Z
Public Positions and Private Guts 2020-06-26T23:00:52.838Z
How alienated should you be? 2020-06-14T15:55:24.043Z
Outperforming the human Atari benchmark 2020-03-31T19:33:46.355Z
Mod Notice about Election Discussion 2020-01-29T01:35:53.947Z
Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z
Self and No-Self 2019-12-29T06:15:50.192Z
T-Shaped Organizations 2019-12-16T23:48:13.101Z
ialdabaoth is banned 2019-12-13T06:34:41.756Z
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z
Vaniver's Shortform 2019-10-06T19:34:49.931Z
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z
Steelmanning Divination 2019-06-05T22:53:54.615Z
Public Positions and Private Guts 2018-10-11T19:38:25.567Z
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z
Compact vs. Wide Models 2018-07-16T04:09:10.075Z
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z
Turning 30 2018-05-08T05:37:45.001Z
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z
LW Migration Announcement 2018-03-22T02:18:19.892Z
LW Migration Announcement 2018-03-22T02:17:13.927Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z
Articles in Main 2016-11-29T21:35:17.618Z
Linkposts now live! 2016-09-28T15:13:19.542Z
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z
Posting to Main currently disabled 2016-02-19T03:55:08.370Z
Upcoming LW Changes 2016-02-03T05:34:34.472Z
LessWrong 2.0 2015-12-09T18:59:37.232Z
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z
Meetup : Austin, TX - Schelling Day 2015-04-13T14:19:21.680Z

Comments

Comment by vaniver on “PR” is corrosive; “reputation” is not. · 2021-02-15T19:39:35.241Z · LW · GW

My dictionary has "dishonor" in it, as both a noun and a verb.

Comment by vaniver on Lean Startup Reading Comprehension Quiz · 2021-02-02T18:01:39.514Z · LW · GW

When I was in graduate school, they let you take the qualifying exams twice, so I didn't study at all the first time, confident that if I failed, I could study and pass the second time. In that spirit, here's me answering the questions without having read the book, and without Googling anything.
 

1. Learning is a change in your mental model; validated learning is a change in your model that is either tied to a core metric or that you then confirm through tests.

2. I'm going to guess True. That is, both of those metrics (revenue and number of customers) are pretty hard to fake and so feel validated, and exponential growth means 'something real is happening' in startup land. It feels a bit like a trick question, because the metrics sit somewhere between clearly real/good metrics (like profit) and more fake metrics (like number of users, as opposed to number of active users), but my guess is these are real according to the author.

3. Hmmm. "by forcing you to iterate quickly, and by keeping you in contact with users / learning from reality."

4. Innovation accounting presumably has something more like "net present value" as calculated from projections; I'm going to guess this means stuff like tracking growth rate instead of current level (since the number of users will still look explosive even if your growth is dropping from 5% to 4% to 3%).

5. Push has stations doing work whenever they get the inputs and then sending it on; so maybe a supplier makes screws and just sends a box of a hundred screws a week, or whatever. The engine station makes engines and then sends them to assembly, assembly starts at the start and pushes things down the line, etc.; basically you have lots of work-in-progress (WIP), fungible workers working on the wrong thing (someone who could assemble doors and windows is just doing whatever they're assigned, which might not be the important thing), and potentially imbalanced flows / stations starved of necessary inputs.
Pull instead flows backwards from the outputs. I'm going to describe a particular way to implement this to make it concrete, but there are lots of different ways to do this / this is only sometimes applicable. You have a number of cars you want to get out the door, and so you look at the final station (polishing, say), and have a sense of what rate it can polish at that determines how many cars needing polishing should be sitting in the 'in box' for that station. Whenever the in box drops below that, the previous station (upholstery, say) gets an order, which it then tries to fulfill, which maybe drops some of its inboxes below their level, which then makes previous stations generate more (screws, seats, etc.).
A thing that's nice about pull is that you've put a control system on the WIP, hoping to make sure that everyone is able to do the work that's most useful at the moment. If you don't actually need any more screws, you don't make any more screws. If you have a thousand different parts, the WIP control system is less good, and instead you just want to send orders directly to all the stations, tho prioritization is then more of a challenge.
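(A tiny toy simulation, not from the book and with made-up rates, of the difference I'm gesturing at: both modes get the same output out the door over the run, but the push line accumulates WIP while the pull line caps it at the kanban limit.)

```python
# Toy sketch (not from the book; all rates are made up): a two-station line,
# machining feeding assembly, run in "push" vs "pull" mode.

def simulate(mode, periods=20, machine_rate=5, assemble_rate=3):
    wip = 0                            # parts sitting between machining and assembly
    finished = 0
    kanban_limit = assemble_rate       # pull: only keep what assembly can consume
    for _ in range(periods):
        if mode == "push":
            made = machine_rate                  # machining runs flat out
        else:
            made = max(0, kanban_limit - wip)    # pull: only refill the buffer
        wip += made
        used = min(assemble_rate, wip)           # assembly consumes from WIP
        wip -= used
        finished += used
    return finished, wip

print("push:", simulate("push"))   # (60, 40) -- same output, WIP piles up
print("pull:", simulate("pull"))   # (60, 0)  -- same output, WIP stays capped
```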

6. Presumably the pull is something like "growth"; like, you have whatever core metric you care about (like % increase in revenue week-on-week), and then you try to reason backwards from that to everything else the company is doing. You don't have an engineer who just comes in and cleans up the code every day and then goes home (a more push-based model), you have a story that goes from % increase in revenue to shipping new features to what the engineer needs to be doing this week (which might be cleaning up the codebase so that you can make new features).

7. True in that you're letting the system plan for you, instead of needing your human planners to be forecasting demand correctly. But obviously the WIP cost is a function of the underlying uncertainty.

8. False; lean often points you towards flexibility instead of rigidity, and rigidity is baked into a lot of 'economies of scale' things. Instead of getting a deal on 10,000 screws, you buy them in boxes of 100, ordering a new box whenever you open one and only have five boxes left on hand. This helps when you suddenly stop needing as many screws, and also if you suddenly need lots of screws (since you can easily buy more boxes, whereas it may be difficult to shift the delivery date on your huge crate of screws).

9. First, the dad is able to ship his first letter sooner. Second, the dad learns things from going through the whole cycle; if, for example, the fold of the first letter doesn't quite fit in the envelope, he can improve the fold for the next one, whereas the kids will only discover at the end that none of their letters quite fit in the envelopes.

10. True. Employees will spend more time switching, which drops productivity, and maybe even waiting, which drops productivity. This is the cost of flexibility, and ends up paying for itself because the increased prioritization means they're getting less output out the door but the output is more valuable.

11. Hmmm, I don't think I've heard this phrase before, but I assume it means something like trying to do lots of things at once (like the kids doing the letters in an assembly-line way without feedback), such that the product is late and low-quality, and in particular having to abandon lots of WIP when the market changes underneath you. "Well, we didn't send all of the letters in time for Christmas, and now we have to start our Valentine's letters, which really can't use much of the WIP we have lying around from Christmas."

12. It's a negative feedback system for faults, errors, and incidents. Something goes wrong, and you try to get information from that event fed back into five generating systems (as defined by levels of abstraction). This then drives down (you hope) the number of future errors, eventually controlling it to 0 (in the absence of changes that then introduce new faults).

13. Hmm, I can see two meanings here. The first one, that I'm more confident in, is the "any worker can halt the line at any time" system, where giving anyone the power to identify problems immediately means that you are always either 1) producing good product or 2) fixing the line such that it produces good product. "Production" consists of 1 and 2 together, and not of 3) producing bad product, since the outputs of 3 will just have to be thrown away.
The other meaning is that if your station doesn't have any needed output, you shouldn't do something just to not be idle; this is so that if a needed order does come in, you can immediately start on it so that it's done as soon as possible. 

Comment by vaniver on The GameStop Situation: Simplified · 2021-02-01T17:53:09.641Z · LW · GW

In particular, my understanding is that most people who shorted in the early days are now out (including, for some, giving up on shorting entirely) and have realized billion-dollar losses, but short interest remains approximately the same, because new funds have taken their place. It was quite risky to think a stock at $4 would decline to $0, but it's not very risky to think a stock at $350 will decline to $40. It remains to be seen where the price will stabilize (and, perhaps more importantly, when), but I think the main story is going to be "early shorts lost money, late shorts gained money, retail investors mostly lost money."

Comment by vaniver on Vaniver's Shortform · 2021-01-28T19:42:27.408Z · LW · GW

I am confused about how to invest in 2021. 

I remember in college, talking with a friend who was in a class on technical investing; he mentioned that the class was talking about momentum investing on 7-day and 30-day timescales. I said "wait, those numbers are obviously suspicious; can't we figure out what they should actually be from the past?", downloaded a dataset of historical S&P 500 returns, and measured the performance of simple momentum trading algorithms on that data. I discovered that basically all of the returns came from before 1980; there was a period where momentum investing worked, and then it stopped, but before I drilled down into the dataset (like if I just looked at the overall optimization results), it looked like momentum investing worked on net.
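(For concreteness, here's a minimal sketch of the kind of backtest I mean; the file name and column names are placeholders since I no longer have the original dataset, and the rule is deliberately simplistic: go long when the trailing return over the lookback window is positive, hold cash otherwise. Splitting the sample around 1980 and rerunning is how the regime change shows up.)

```python
# Hypothetical sketch: assumes a local CSV of daily S&P 500 closes with
# 'date' and 'close' columns; the momentum rule is the simplest possible one.
import pandas as pd

prices = pd.read_csv("sp500_daily.csv", parse_dates=["date"], index_col="date")["close"]
daily_returns = prices.pct_change()

def momentum_equity_curve(lookback):
    # Long the index when the trailing `lookback`-day return is positive,
    # otherwise hold cash; the signal is lagged a day to avoid lookahead.
    signal = (prices.pct_change(lookback) > 0).shift(1, fill_value=False)
    strategy_returns = daily_returns.where(signal, 0.0)
    return (1 + strategy_returns.fillna(0)).cumprod()

buy_and_hold = (1 + daily_returns.fillna(0)).cumprod()
for lookback in (7, 30):
    print(lookback, momentum_equity_curve(lookback).iloc[-1], buy_and_hold.iloc[-1])
```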

Part of my suspicion had also been an 'efficient markets' sense; if my friend was learning in his freshman classes about patterns in the market, presumably Wall Street also knew about those patterns, and was getting rid of them? I believed in the dynamic form of efficient markets: you could get rich by finding mispricings, but mostly by putting in the calories, and I thought I had better places to put calories. But this made it clear to me that there were shifts in how the market worked; if you were more sophisticated than the market, you could make money, but then at some point the market would reach your level of sophistication, and the opportunity would disappear.

I learned how to invest about 15 years ago (and a few years before the above anecdote). At the time, I was a smart high-schooler; my parents had followed a lifelong strategy of "earn lots of money, save most of it, and buy and hold", and in particular had invested in a college fund for me; they told me (roughly) "this money is yours to do what you want with, and if you want to pay more for college, you need to take out loans." I, armed with a study that suggested colleges were mostly selection effect instead of treatment effect, chose the state school (with top programs in the things I was interested in) that offered me a full ride instead of the fancier school that would have charged me, and had high five figures to invest.

I did a mixture of active investing and buying index funds; overall, they performed about as well, and I came to believe more strongly that active investing was a mistake whereas opportunity investing wasn't. That is, looking at the market and trying to figure out which companies were most promising at the moment took more effort than I was going to put into it, whereas every five years or so a big opportunity would come along that was worth betting big on. I was more optimistic about Netflix than the other companies in my portfolio, but instead of saying "I will be long Netflix and long S&P and that's it", I said "I will be long these ten stocks and long S&P", and so Netflix's massive outperformance over that time period only left me slightly ahead of the S&P instead of doing much better than it.

It feels like the stock market is entering a new era, and I don't know what strategy is good for that era. There are a few components I'll try to separate:

First, I'm not actually sure I believe the medium-term forward trend for US stocks is generically good in the way it has been for much of the past. As another historical example, my boyfriend, who previously worked at Google, has a bunch of GOOG that he's never diversified out of, mostly out of laziness. About 2.5 years ago (when we were housemates but before we were dating), I offered to help him just go through the chore of diversification to make it happen. Since then GOOG has significantly outperformed the S&P 500, and I find myself glad we never got around to it. On the one hand, it didn't have to be that way, and variance seems bad--but on the other hand, I find myself much more optimistic about Alphabet than I am about the US as a whole.

[Similarly, there's some standard advice that tech workers should buy fewer tech stocks, since this correlates their income and assets in a way that's undesirable. But this feels sort of nuts to me--one of the reasons I think it makes sense to work in tech is because software is eating the world, and it wouldn't surprise me if in fact the markets are undervaluing the growth prospects of tech stocks.]

So this sense that tech is eating the world / is turning more markets into winner-takes-all situations means that I should be buying winners, because they'll keep on winning because of underlying structural factors that aren't priced into the stocks. This is the sense that if I would seriously consider working for a company, I should be buying their stock because my seriously considering working for them isn't fully priced in. [Similarly, this suggests real estate only in areas that I would seriously consider living in: as crazy as the SFBA prices are, it seems more likely to me that they will become more crazy rather than become more sane. Places like Atlanta, on the other hand, I should just ignore rather than trying to include in an index.]

Second, I think the amount of 'dumb money' has increased dramatically, and has become much more correlated through memes and other sorts of internet coordination. I've previously become more 'realist' about my ability to pick opportunities better than the market, but have avoided thinking about meme investments because of a general allergy to 'greater fool theory'. But this is making me wonder if I should be more of a realist about where I fall on the fool spectrum. [This one feels pretty poisonous to attention, because the opportunities are more time-sensitive. While I think I have a scheme for selling in ways that would be attention-free, I don't think I have a scheme for seeing new opportunities and buying in that's attention-free.]

[There's a related point here about passive investors, which I think is less important for how I should invest but is somewhat important for thinking about what's going on. A huge component of TSLA's recent jump is being part of the S&P 500, for example.]

Third, I think the world as a whole is going to get crazier before it gets saner, which sort of just adds variance to everything. A thing I realized at the start of the pandemic is that I didn't have a brokerage setup where I could sell my index fund shares and immediately turn them into options. And the more I think 'opportunity investing' is the way to go / there might be more opportunities with the world getting crazier, the less value I get out of "this will probably be worth 5% more next year", because the odds that I see a 2x or 5x time-sensitive opportunity don't have to be very high for it to be worthwhile to have the money in cash instead of locked into a 5% increase.

Comment by vaniver on Lessons I've Learned from Self-Teaching · 2021-01-25T19:29:23.320Z · LW · GW

Interestingly, this comment made me more excited about using Anki again (my one great success with it was memorizing student names, which it's well-suited for, and I found it pretty useless for other things), because this comment has a great idea with a citation that I probably won't be able to find again unless I remember some ancillary keywords (searching "blurry to sharp" on Google won't help very much). But if I have it in an Anki deck, not only will it be more likely to be remembered, but also I'll have the citation recorded somewhere easy to search.

Comment by vaniver on Alex Ray's Shortform · 2021-01-25T01:11:09.374Z · LW · GW

When choosing between policies that have the same actions, I prefer the policies that are simpler.

Could you elaborate on this? I feel like there's a tension between "which policy is computationally simpler for me to execute in the moment?" and "which policy is more easily predicted by the agents around me?", and it's not obvious which one you should be optimizing for. [Like, predictions about other diachronic people seem more durable / easier to make, and so are easier to calculate and plan around.] Or maybe the 'simple' approaches for one metric are generally simple on the other metric.

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-23T18:06:50.919Z · LW · GW

I'm responding to claims that SI can solve long standing philosophical puzzles such as the existence of God or the correct interpretation of quantum mechanics.

Ah, I see. I'm not sure I would describe SI as 'solving' those puzzles, rather than recasting them in a clearer light.

Like, a program which contains Zeus and Hera will give rather different observations than a program which doesn't. On the other hand, when we look at programs that give the same observations, one of which also simulates a causally disconnected God and the other of which doesn't, then it should be clear that those programs look the same from our stream of observations (by definition!) and so we can't learn anything about them through empirical investigation (like with p-zombies).

So in my mind, the interesting "theism vs. atheism?" question is the question of whether there are activist gods out there; if Ares actually exists, then you (probably) profit by not displeasing him. Beliefs should pay rent in anticipated experiences, which feels like a very SI position to have. 

Of course, it's possible to have a causally disconnected afterlife downstream of us, where things that we do now can affect it and nothing we do there can affect us now. [This relationship should already be familiar from the relationship between the past and the present.] SI doesn't rule that out--it can't until you get relevant observations!--but the underlying intuition notes that the causal disconnection makes it pretty hard to figure out which afterlife. [This is the response to Pascal's Wager where you say "well, but what about anti-God, who sends you to hell for being a Christian and sends you to heaven for not being one?", and then you get into how likely it is that you have an activist God that then steps back, and arguments between Christians as to whether or not miracles happen in the present day.]

But I think the actual productive path, once you're moderately confident Zeus isn't on Olympus, is not trying to figure out if invisi-Zeus is in causally-disconnected-Olympus, but looking at humans to figure out why they would have thought Zeus was intuitively likely in the first place; this is the dissolving the question approach.

With regard to QM, when I read through this post, it is relying pretty heavily on Occam's Razor, which (for Eliezer at least) I assume is backed by SI. But it's in the normal way where, if you want to postulate something other than the simplest hypothesis, you have to make additional choices, and each choice that could have been different loses you points in the game of Follow-The-Improbability. But a thing that I hadn't noticed before this conversation, which seems pretty interesting to me, is that whether you prefer MWI might depend on whether you use the simplicity prior or the speed prior, and then I think the real argument for MWI rests more on the arguments here than on Occam's Razor grounds (except for the way in which you think a physics that follows all the same principles is more likely because of Occam's Razor on principles, which might be people's justification for that?).

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-23T00:27:38.523Z · LW · GW

Given the assumptions that you have an infinite number of programmes, and that you need to come to a determinate result in finite time, then you need to favour shorter programmes.

If you need to come to a determinate result in a finite number of computational steps (my replacement for 'time'), then SI isn't the tool for you. It's the most general and data-efficient predictor possible, at the cost of totally exploding the computational budget.

I think if you are trying to evaluate a finite set of programs in finite time, it's not obvious that program length is the thing to sort them by; I think the speed prior makes more sense, and I think actual humans are doing something meaningfully different.

---

I currently don't see all that much value in responding to "You haven't shown / established" claims; like, SI is what it is, you seem to have strong opinions about how it should label particular things, and I don't think those opinions are about the part of SI that's interesting, or about why it's only useful as a hypothetical model (I think attacks from this angle are more compelling on that front). If you're getting value out of this exchange, I can give responding to your comments another go, but I'm not sure I have new things to say about the association between observations and underlying reality or aggregation of possibilities through the use of probabilities. (Maybe I have elaborations that would either more clearly convey my point, or expose the mistakes I'm making?)

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-22T20:08:46.554Z · LW · GW

You haven't shown that programmes are hypotheses, and what an SI is doing is assigning different non zero order probabilities, not a uniform one, and it is doing so based on programme length, although we don't know that reality is a programme, and so on.

SI only works for computable universes; otherwise you're out of luck. If you're in an uncomputable universe... I'm not sure what your options are, actually. [If you are in a computable universe, then there must be a program that corresponds to it, because otherwise it would be uncomputable!]

You can't assign a uniform probability to all the programs, because there are infinitely many, and while there is a mathematically well-defined "infinitely tall function" there isn't a mathematically well-defined "infinitely flat function."

[Like, I keep getting the sense you want SI to justify the assumption that God prefers shorter variable names, and not liking the boring answer that in a condition of ignorance, our guess has to be that way because there are more ways to have long variable names than short variable names, and that our guess isn't really related to the implementation details of the actual program God wrote.]

Do you think scientists are equally troubled?

I mean, I don't know how muscles work, outside of the barest details, and yet I can use mine just fine. If I had to build some without being able to order them from a catalog, I'd be in trouble. I think AI researchers trying to make artificial scientists are equally troubled, and that's the standard I'm trying to hold myself to.

You're saying realism is an illusion?

I don't know what you mean by realism or illusion? Like, there's a way in which "my hand" is an illusion, in that it's a concept that exists in my mind, only somewhat related to physical reality. When I probe the edges of the concept, I can see the seams and how it fails to correspond to the underlying reality. 

Like, the slogan that seems relevant here is "the map is not the territory." If realism means "there's a territory out there", I'm sold on there being a territory out there. If realism means "there is a map in the territory", I agree in the boring sense that a map exists in your head which exists in the territory, but think that makes some classes of arguments about the map confused.

For example, I could try to get worked up over whether, when I put hand sanitizer on my hands, the sanitizer becomes "part of my hand", but it seems wiser to swap out the coarse "hand" model with a finer "molecules and permeable tissue" model, where the "part of my hand" question no longer feels sensible, and the "where are these molecules?" question does feel sensible.

One of the ways in which SI seems conceptually useful is that it lets me ask questions like "would different programs say 'yes' and 'no' to the question of 'is the sanitizer part of my hand'?". If I can't ground out the question in some deep territorial way, then it feels like the question isn't really about the territory.

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-21T19:03:49.011Z · LW · GW

I agree with those issues.

I think the way you expressed issue 3 makes it too much of a clone of issue 1; if I tell you the bounds for the question in terms of programs, then I think there is a general way to apply SI to get a sensible bounded answer. If I tell you the bounds in terms of functions, then there would be a general way to incorporate that info into SI, if you knew how to move between functions and programs.

The way I think about those issues that (I think?) separates them more cleanly is that we both have to figure out the 'compression' problem of how to consider 'models' as families of programs (at some level of abstraction, at least) and the 'elaboration' problem of how to repopulate our stable of candidates when we rule out too many of the existing ones. SI bypasses the first and gives a trivial answer to the second, but a realistic intelligence will have interesting answers to both.

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-21T18:50:26.945Z · LW · GW

The issues are whether the quantity, which you have called a probability, actually is probability, and whether the thing you are treating as a model of reality is actually such a model, in the sense of scientific realism, or merely something that churns out predictions, in the sense of instrumentalism.

I'm not quite sure how to respond to this; like, I think you're right that SI is not solving the hard problem, but I think you're wrong that SI is not solving the easy problem. Quite possibly we're in violent agreement on both points, and disagree on whether or not the easy problem is worth solving?

For example, I think the quantity actually is a probability, in that it satisfies all of the desiderata that probability theory places on quantities. Do I think it's the probability that the actual source code of the universe is that particular implementation? Well, it sure seems shaky to have as an axiom that God prefers shorter variable names, but since probability is in the mind, I don't want to rule out any programs a priori, and since there are more programs with longer variable names than programs with shorter variable names, I don't see any other way to express what my state of ignorance would be given infinite cognitive resources.

Also, I'm not really sure what a model of reality in the sense of scientific realism is, whereas I'm pretty sure I know what python programs are. So SI doesn't make the problem of finding the scientific realism models any easier if you're confident that those models are not just programs.

But are you saying that the "deep structure" is the ontological content?

I think the answer to this is "yes." That is, I generated an example to try to highlight what I think SI can do and what its limitations are, and that those limitations are fundamental to not being able to observe all of the world, and then realized I had written the example in terms of models instead of in terms of programs. (For this toy example, switching between them was easy, but obviously that doesn't generalize.)

My current suspicion is that we're having this discussion, actually; it feels to me like if you were the hypercomputer running SI, you wouldn't see the point of the ontological content; you could just integrate across all the hypotheses and have perfectly expressed your uncertainty about the world. But if you're running an algorithm that uses caching or otherwise clumps things together, those intermediate variables feel like they're necessary objects that need to be explained and generated somehow.

[Like, it might be interesting to look at the list of outputs that a model in the sense of scientific realism could give you, and ask if SI could also give you those outputs with minimal adjustment.]

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-21T00:51:46.436Z · LW · GW

If the upshot is that a short programme is more likely to correspond to reality, then SI is indeed formalised epistemology.

I think there are two different things going on:

First, if you want to use probabilities, then you need your total probability to sum to one, and there's not a way to make that happen unless you assign higher probabilities to shorter programs than longer programs.

Second, programs that don't correspond to observations get their probability zeroed out.

All of SI's ability to 'locate the underlying program of reality' comes from the second point. The first point is basically an accounting convenience / necessity.
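(As a sketch of that accounting point, using the standard construction rather than anything specific to this thread: if programs are binary strings drawn from a prefix-free set, weighting each program by two to the minus its length gives a total that Kraft's inequality bounds by 1, which is what lets the weights behave as probabilities; a uniform weight over infinitely many programs can't be normalized this way.)

```latex
% Standard universal-prior weighting over a prefix-free set of programs \mathcal{P},
% with \ell(p) the length of program p in bits:
P(p) \;\propto\; 2^{-\ell(p)},
\qquad
\sum_{p \in \mathcal{P}} 2^{-\ell(p)} \;\le\; 1
\quad \text{(Kraft's inequality).}
```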

Incidentally, it seems worth pointing out that I think critical rationalists and bayesian rationalists are mostly talking past each other when it comes to Solomonoff Induction. There's an interesting question of how you manage your stable of hypotheses under consideration, and the critrats seem to think it's philosophically untenable, and so induction is suspect, whereas the bayesians seem to think that it's philosophically trivial, as you can just assume away the problem by making your stable of hypotheses infinitely large. From the perspective of psychology / computer science, the details seem very interesting! But from the perspective of philosophy, I think the bayesians mostly have it right, and in fact induction in the abstract is pretty simple. 

But why should an SI have the ability to correspond to reality, when the only thing it is designed to do is predict observations?

By assumption, your observations are generated by reality. What grounds that assumption out is... more complicated, but I'm guessing not the thing you're interested in?

Maybe it's a category error to say of programmes that they have some level of probability.

I mean, I roughly agree with this in my second paragraph, but I think 'category error' is too harsh. Like, there are lots of equivalent programs, right? [That is, if I do SI by considering all text strings interpreted as python source code by a hypercomputer, then for any python-computable mathematical function, there's an infinite class of text strings that implement that function.] And so actually what we care about is closer to "the integrated probability of an upcoming token across all programs", and if you looked at your surviving programs for a sufficiently complicated world, you would likely notice that they have some deep structural similarities that suggest they're implementing roughly the same function.

[I believe it's possible to basically ignore this, and only consider the simplest implementation of each function, but this gets you into difficulty with enumerating all possible functions which you could ignore when enumerating all possible text strings. The accounting convenience pays off, at the price of making the base unit not very interesting; you want the formula underneath physics, not an infinite family of text strings that all implement the formula underneath physics, perhaps in an opaque way.]

Comment by vaniver on How The West Was Won · 2021-01-21T00:34:21.684Z · LW · GW

In the second group... not at all. Rural British and American Rednecks aren't certainly seeing their resources appropriated by the powers behind the immigrants.

A common complaint about immigration is "they're taking our jobs." For a group whose primary asset is their ability to do labor, this seems pretty fair to characterize as "our resources are being appropriated," and it's easy to notice that many billionaires who are made better off by mass immigration support decreasing regulatory barriers to immigration.

[Of course, open borders seem like a good idea to economists, and billionaires are more likely to have economist-approved views on economic policy, so I don't think this is just a 'self-interest' story; I just think it's worth noticing that the same "disenfranchised group having their resources appropriated" story does in fact go through for those groups.]

Most of the places where universal culture is replacing their own were first torn apart to exploit the hell out of them.

I feel like this is missing the core point of the article, which is that the "colonizer / colonized" narrative misses the transition from the 'traditional cultures' of Britain and America to universal culture. Why did universalism win in Britain and America? If it was because those places were torn apart in order to exploit the hell out of them, then the flavor of this analysis changes significantly.

Comment by vaniver on Deutsch and Yudkowsky on scientific explanation · 2021-01-20T23:36:06.883Z · LW · GW

What does it mean to say that one programme is more probable than another? That it is short?

For a Solomonoff inductor, yes. [Basically you have "programs that have failed to predict the past" and "programs that have predicted the past", and all of the latter group that are of equal lengths are equally probable, and it must be the case that longer programs are less probable than shorter programs for the total probability to sum to 1, tho you could have a limited number of exceptions if you wanted.]

That said, in the SI paradigm, the probability of individual programs normally isn't very interesting; what's interesting is the probability of the next token, and the probability of 'equivalence classes' of programs. You might, for example, have programs that are pairs of system dynamics and random seeds (or boundary conditions or so on), and so care about "what's the aggregate remaining probability of dynamical systems of type A vs type B?", where perhaps 99% of the random seeds for type A have been ruled out, but 99.99% of the random seeds for type B have been ruled out, meaning that you think it's 100x more likely that type A describes the data source than type B.
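(For concreteness, the textbook form of that "probability of the next token" quantity, with U the universal machine and ℓ(p) the program length; this is just the standard definition, not anything beyond what's described above.)

```latex
M(x_{t+1} \mid x_{1:t})
  \;=\;
  \frac{\displaystyle\sum_{p \,:\, U(p) \text{ starts with } x_{1:t} x_{t+1}} 2^{-\ell(p)}}
       {\displaystyle\sum_{p \,:\, U(p) \text{ starts with } x_{1:t}} 2^{-\ell(p)}}
```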

[SI, in its full version, doesn't need to do any compression tricks like think about 'types of dynamical systems', instead just running all of them, but this is the sort of thing you might expect from a SI approximator.]

Comment by vaniver on What is going on in the world? · 2021-01-19T17:50:02.680Z · LW · GW

I think it would have been way less popular to say "Western Civilization is declining on the scale of half a century"; I think they were clearly much better off than 1920. I think they could have told stories about moral decline, or viewed the West as not rising to the challenge of the cold war or so on, but they would have been comparing themselves to the last 20-30 years instead of the last 60-100 years.

Comment by vaniver on The Schelling Choice is "Rabbit", not "Stag" · 2021-01-15T21:56:56.567Z · LW · GW

In particular, I felt the need to emphasize the idea that Stag Hunts frame coordination problems as going against incentive gradients and as being maximally fragile and punishing, by default. 

In my experience, the main thing that happens when people learn about Stag Hunts is that they realize that it's a better fit for a lot of situations than the Prisoner's Dilemma, and this is generally an improvement. (Like Duncan, I wish we had used this frame at the start of Dragon Army.)

Yes, not every coordination problem is a stag hunt, and it may be a bad baseline or push in the wrong direction. It isn't the right model for starting a meetup, where (as you say) one person showing up alone is not much worse than hunting rabbit, and organic growth can get you to better and better situations. I think it's an underappreciated move to take things that look like stag hunts and turn them into things that are more robust to absence or have a smoother growth curve.

All that said, it still seems worth pointing out that in the absence of communication, in many cases the right thing to assume is that you should hunt rabbit.

Comment by vaniver on The Great Karma Reckoning · 2021-01-15T18:02:50.372Z · LW · GW

I think this is a little sad (in years past, I definitely put more effort into posts because of that sweet sweet 10x). I remember thinking that this doesn't do all that much to change the relative ranking of users, and so it's not clear it's worth the code complexity, but if it were free I personally would like some smoother gradation (like 2x for posts, another 2x for frontpage, another 2x for curated).

Comment by vaniver on The Great Karma Reckoning · 2021-01-15T17:59:15.500Z · LW · GW

From Wiktionary:

reckoning (plural reckonings)

  1. The action of calculating or estimating something.

Sometimes words are meant literally :P

Comment by vaniver on Two explanations for variation in human abilities · 2021-01-15T00:14:00.259Z · LW · GW

Given equal background and motivation, there is a lot less inequality in the rates at which humans learn new tasks, compared to the inequality in how humans perform learned tasks.

Huh, my guess is the opposite. That is, all expert plumbers are similarly competent at performing tasks, and the thing that separates a bright plumber from a dull plumber is how quickly they become expert.

Quite possibly we're looking at different tasks? I'd be interested in examples of domains where this sort of thing has been quantified and you see the hypothesized relationship (where variation in learning speed is substantially smaller than variation in performance). Most of the examples I can think of that seem plausible are exercise-related, where you might imagine people learn proper technique with a tighter distribution than the underlying strength distribution, but this is cheating by using intellectual and physical faculties as separate sources of variation.

Comment by vaniver on Coherent decisions imply consistent utilities · 2021-01-14T23:51:45.729Z · LW · GW

Incidentally, a handful of things have crossed my path at the same time, such that I think I have a better explanation for the psychology underlying the Allais Paradox. [I'm not sure this will seem new, but something about the standard presentation seems to be not giving it the emphasis it deserves, or speaking generally instead of particularly.]

The traditional explanation is that you're paying for certainty, which has some value (typically hugely overestimated). But I think 'certainty' should really be read as something more like "not being blameworthy." That is, connect it to handicapping so that you have an excuse for poor performance. The person who picks 1B and loses knows that they missed out on a certain $1M, whereas the person who picks 1A can choose to focus their attention on the possibility of having lost the $1M they did get, instead of on the $4M they might have had and don't.

As Matt Levine puts it,

I admit that I occasionally envy the people who bought Bitcoin early for nothing and are now billionaires and retired. One thing that soothes this envy is reading about people who bought Bitcoin early for nothing and are now theoretical centimillionaires but lost their private keys and can’t access the money. I may have no Bitcoins, but at least I haven’t misplaced a fortune in Bitcoins.

"At least I haven't misplaced a fortune in Bitcoins"! Or, in other words, two different ways to "gain $0" with different Us.

[For what it's worth, I think this sort of "protecting yourself against updates" is mostly a mistake, and think it's better to hug reality as closely as possible, which means paying more attention to your mistakes instead of less, and being more open to making them instead of less. I think seeing the obstacles more clearly makes them easier to overcome.]

Comment by vaniver on Coherent decisions imply consistent utilities · 2021-01-14T22:52:36.623Z · LW · GW

I like this comment, but I feel sort of confused about it as a review instead of an elaboration. Yes, coherence theorems are very important, but did people get it from this post? To the extent that comments are evidence, they look like no, the post didn't quite make it clear to them what exactly is going on here.

Comment by vaniver on The Sense-Making Web · 2021-01-04T16:43:15.627Z · LW · GW

Especially when describing groups of people, I think it's better to start off with the extensional definition ('the Sequences fanclub') rather than the intensional definition ('people devoted to refining the art of human rationality'). If I haven't heard of the Sensemaking scene, but have heard of the Stoa, seeing them as an example early in the post makes it easier to contextualize the rest.

Comment by vaniver on Vaniver's Shortform · 2021-01-02T19:22:59.360Z · LW · GW

A year ago, I wrote about an analogy between Circling and Rationality, commenters pointed out holes in the explanation, and I was excited to write more and fill in the holes, and haven't yet. What gives?

First was the significant meta discussion about moderation, which diverted a lot of the attention I could spare for LW, and also changed my relationship to the post somewhat. Then the pandemic struck, in a way that killed a lot of my ongoing inspiration for Circling-like things. Part of this is my low interest in non-text online activities; while online Circling does work, and I did it a bit over the course of the pandemic, I Circled way less than I did in the previous year, and there was much less in the way of spontaneous opportunities to talk Circling with experts. I put some effort into deliberate conversations with experts (thanks Jordan!), and made some progress, but didn't have the same fire to push through and finish things.

A common dynamic at the start of a new project is that one is excited and dumb; finishing seems real, and the problems seem imaginary. As one thinks more about the problem, the more one realizes that the original goal was impossible, slowly losing excitement. If pushed quickly enough, the possible thing (that was adjacent to the impossible goal) gets made; if left to sit, the contrast is too strong for work on the project to be compelling. Something like this happened here, I think.

So what was the original hope? Eliezer wrote The Simple Truth, which explained in detail what it means for truth to be a correspondence between map and territory, what sort of systems lead to the construction and maintenance of that correspondence, and why you might want it. I think one sort of "authenticity" is a similar correspondence, between behavior and preferences, and another sort of "authenticity" is 'relational truth', or that correspondence in the context of a relationship.

But while we can easily talk about truth taking preferences for granted (you don't want your sheep eaten by wolves, and you don't want to waste time looking for sheep), talking about preferences while not taking them for granted puts us in murkier territory. An early idea I had here was a dialogue between the 'hippie' arguing for authenticity against a 'Confucian' arguing for adoption of a role-based persona, which involves suppressing one's selfish desires, but this ended up seeming unsatisfactory because it was an argument between two particular developmental levels. I later realized that I could step back, and just use the idea of "developmental levels" to compartmentalize a lot of the difficulty, but moving up a level of abstraction would sacrifice the examples, or force me to commit to a particular theory of developmental levels (by using it to supply the examples).

I also got more in touch with the difference between 'explanatory writing' and 'transformative writing'; consider the difference between stating a mathematical formula and writing a math textbook. The former emits a set of facts or a model, and the user can store it in memory but maybe not much else; the latter attempts to construct some skill or ability or perspective in the mind of the reader, but can only do so by presenting the reader with the opportunity to build it themselves. (It's like mailing someone IKEA furniture or a LEGO set.) Doing the latter right involves seeing how the audience might be confused, and figuring out how to help them fix their own confusion. My original goal had been relatively simple--just explain what is going on, without attempting to persuade or teach the thing--but I found myself more and more drawn towards the standard of transformative writing.

I might still write this, especially in bits and pieces, but I wanted to publicly note that I slipped the deadline I set for myself, and if I write more on the subject it will be because the spirit strikes me instead of because I have a set goal to. [If you were interested in what I had to say about this, maybe reach out and let's have a conversation about it, which then maybe might seed public posts.]

Comment by vaniver on 2021 New Year Optimization Puzzles · 2021-01-01T18:27:03.934Z · LW · GW

[I edited in spoiler tags to the above comment.]

 Well, I would clearly not last long among the pebble-sorters. I think your criticism is almost right.

Unless we can come up with some scheme whereby there's only one prime left for particular regions, which I think would require 2*337 + 3*337 + n*2*3 to be prime; with n = 2 that's 1697 = 337*5 + 12, which does look prime to me.

Comment by vaniver on 2021 New Year Optimization Puzzles · 2020-12-31T18:38:33.777Z · LW · GW

That's what I get for searching for 'factors' instead of 'prime factors'!

Comment by vaniver on 2021 New Year Optimization Puzzles · 2020-12-31T18:37:20.676Z · LW · GW

This procedure uses 8 dice.

Presumably you mean coins?

Comment by vaniver on 2021 New Year Optimization Puzzles · 2020-12-31T18:17:25.089Z · LW · GW

Puzzle 2 (I now think my approach is actually right):

[new]:
Roll a d1697 (which is prime, I double-checked). Either the result is in the first 674 (337*2), or the middle 1011 (337*3), or the last 12 (2*2*3).
If it's in the first 674, you now need to roll a d3 to figure out whether it's 1-674, 675-1348, or 1349-2022.
If it's in the middle 1011, you now need to roll a d2 to figure out whether it's 1-1011 or 1012-2022.
If it's in the last 12, take it mod 6 to figure out whether the winner is in 1-337, 338-674, etc., and then roll a d337 to pick the person within that block.
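(A quick sanity check I could run on the scheme above, summing each person's selection probability exactly; every person comes out to exactly 1/2022, so the two-roll procedure is uniform. The 0-based indexing conventions are mine.)

```python
# Exact check that the d1697-then-one-more-roll scheme above selects each of
# the 2022 people with probability exactly 1/2022.
from fractions import Fraction

N, D = 2022, 1697                      # 2022 = 2*3*337, 1697 = 2*337 + 3*337 + 12
prob = [Fraction(0)] * N

for r in range(1, D + 1):
    if r <= 674:                       # first 2*337 faces: a d3 picks the block of 674
        for block in range(3):
            prob[(r - 1) + 674 * block] += Fraction(1, D) / 3
    elif r <= 1685:                    # middle 3*337 faces: a d2 picks the block of 1011
        for block in range(2):
            prob[(r - 675) + 1011 * block] += Fraction(1, D) / 2
    else:                              # last 12 faces: mod 6 picks the block of 337,
        group = (r - 1686) % 6         # then a d337 picks the person within it
        for i in range(337):
            prob[group * 337 + i] += Fraction(1, D) / 337

assert all(p == Fraction(1, N) for p in prob)
print("uniform, each person:", prob[0])   # 1/2022
```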
---
[edit: shown wrong by kuhanj]
Roll a d1089. Either the result is in the first 1011, or the latter 78.
If in the 1011, roll a d2 to determine whether they're in 1-1011 or 1012-2022.
If in the 78, mod by 6 to get a number between 0 and 5, which determines whether the winner is in 1-337, 338-674, etc; now roll a d337 to determine which one in the group. Two rolls, guaranteed.
---
[original, shown wrong by Measure] Can't you just roll a d2 and a d1011, which identifies the relevant person in two rolls guaranteed? (i.e. the d2 tells you whether they're in 1-1011 or 1012-2022, and then the second roll tells you which person.)
I don't think you can do better than this, because there's no way to do it in 1 roll (you need a die of size 2022), and I don't think there's a way to do it in less than 2 rolls (because while you can use your first roll to switch what your second roll will be, you can't allocate probability on the first roll such that you don't need a second roll, because otherwise you won't be uniformly selecting from all of the people). 

Puzzle 3:

So here's a partial sketch pointing towards a solution, but it needs a lot of work and maybe more coins.

Pick p=1/2, and generate a binary string of length 11 with 11 flips. If it's in the first 2021 numbers, that identifies your person. If it's not, subtract 2021 to get a number between 0 and 26, and repeat. [This doesn't have an upper bound yet.]

Line of attack one: shift p such that you can make a very long string which does divide into 2021 parts evenly. I think this ends up being a giant pain because you need to carefully adjust for the different probabilities of all of the different table elements. [Edit: Unexpected_Values found this solution and had an elegant proof of it.]

Line of attack two: shift the number of flips such that the remainder ends up a multiple of 43 (and separately) 47. If you can get both, then you can do the first series of flips, stop if it identifies someone and save the remainder mod 43 if it doesn't, and then do the second series of flips, which either identifies someone directly or gives you a remainder mod 47, and then the two remainders identify someone, and you have an upper bound. [Some brief searching through numbers in python makes me somewhat pessimistic that this will work.]

Comment by vaniver on One Year of Pomodoros · 2020-12-31T18:07:36.480Z · LW · GW

If someone knows any time-series stats I could run on it, let me know.

There are a few different things you could be interested in here for 'time series segmentation', which slightly shift what sort of method you want to use.

  1. Identifying the structural breaks. Basically, you can view pom count as drawn from some distribution, but which distribution changes over time.
  2. Identifying local factors. For example, maybe Mondays are persistently different from Tuesdays, or days when you log more poms are followed by days when you work fewer poms.

Often, people use ARMA models for time series because they can easily capture lots of different local factors, and HMMs for structural breaks (when you have as many as you do). I'm not aware of standardized methods that are good for this problem, because often there are lots of tweaks inherent to your distribution; take a look at all the detail in this accepted answer, for example. But you also might be able to stick your data into seglearn and get something cool out of it.
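(A hypothetical sketch of both angles, assuming a CSV named 'poms.csv' with 'date' and 'poms' columns; the file name, the columns, and the choice of a 3-state Gaussian HMM are all my guesses, and since pom counts aren't really Gaussian the fitted regimes are suggestive rather than definitive.)

```python
import pandas as pd
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

df = pd.read_csv("poms.csv", parse_dates=["date"]).set_index("date")

# Local factors: e.g. a persistent day-of-week effect.
print(df["poms"].groupby(df.index.dayofweek).mean())

# Structural breaks: treat the daily count as emitted by one of a few hidden
# regimes and let an HMM segment the history into them.
X = df["poms"].to_numpy().reshape(-1, 1)
model = GaussianHMM(n_components=3, n_iter=200, random_state=0).fit(X)
df["regime"] = model.predict(X)
print(df["regime"].value_counts())
```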

Comment by vaniver on Against GDP as a metric for timelines and takeoff speeds · 2020-12-29T23:48:17.709Z · LW · GW

none capable of accelerating world GWP growth.

Or, at least, accelerating world GWP growth faster than they're already doing. (It's not like the various powers with nukes and bioweapons programs are not also trying to make the future richer than the present.)

Comment by vaniver on Morality as "Coordination", vs "Do-Gooding" · 2020-12-29T17:36:46.665Z · LW · GW

But... you just don't want to do that anymore, because of empathy, or because you've come to believe in principles that say to treat all humans with dignity.

On the one hand, I think the history of abolition in Britain and the US is inspiring and many of the people involved laudable, and many of the actions taken (like the West Africa Squadron) net good for the world and worth memorializing. On the other hand, when I look around the present, I see a lot of things that (cynically) look like a culture war between elites, where the posturing is more important than the positions, or the fact that it allows one to put down other elites is more important than the fact that it raises up the less fortunate. And so when I turn that cynical view on abolition, it makes me wonder how much the anti-slavery efforts were attempts by the rich and powerful of type A to knock down the rich and powerful of type B, as opposed to genuine concern (as a probable example of the latter, John Laurens, made famous by Hamilton, was an abolitionist from South Carolina and son of a prominent slave trader and plantation owner, so abolition was probably going to be bad for him personally).

Another example of this is the temperance movement; one can much more easily make the empathetic case for banning alcohol than allowing it, I think (especially in a much poorer time, when many more children were going hungry because their father chose to spend limited earnings on alcohol instead), and yet as far as I can tell the political victory of the temperance movement was largely due to the shifting tides of fortune for various large groups, some of which were more pro-alcohol than others, rather than a sense that "this is beneath us now."

Comment by vaniver on Great minds might not think alike · 2020-12-27T17:47:41.867Z · LW · GW

As it happens, I discovered this point in high school; I thought of myself as "the smartest kid at school," and yet the mental gymnastics required to justify that I was smarter than one of my friends were sufficiently outlandish that they stood out and I noticed the general pattern. "Sure, he knows more math and science than I do, and is a year younger than me, but I know more about fields X, Y, and Z!" [Looking back at it now, there's another student who also had a credible claim, but who was much easier to dismiss, and I wouldn't be surprised if he had dismissed me for symmetric reasons.]

Comment by vaniver on Steelmanning Divination · 2020-12-21T23:06:22.829Z · LW · GW

Rereading this post, I'm a bit struck by how much effort I put into explaining my history with the underlying ideas, and motivating that this specifically is cool. I think this made sense as a rhetorical move--I'm hoping that a skeptical audience will follow me into territory labeled 'woo' so that they can see the parts of it that are real--and also as a pedagogical move (proofs may be easy to verify, but all of the interesting content of how someone actually discovered that line of thought in concept space has been cleaned away; in this post, rather than hiding the sprues, they were part of the content, and perhaps even the main content). [Some part of me wants to signpost that a bit more clearly, tho perhaps it is obvious?]

There's something that itches about this post, where it feels like I never turn 'the idea' into a sentence. "If one regards it as proper form, one will have good fortune." Sure, but that leaves much of the work to the reader; this post is more like a log of me as a reader doing some more of the work, and leaving yet more work to my reader. It's not a clear condensation of the point, it doesn't address previous scholarship, it doesn't even clearly identify the relevant points that I had identified, and it doesn't transmit many of the tips and tricks I picked up. A sentence that feels like it would have fit (at least some of what I wanted to convey?) is this description of Tarot readings: "they are not about foretelling your inevitable future, but taking control of it through self knowledge and awareness." [But in reading that, there's something pleasing about the holistic vagueness of "proper form"; the point of having proper form is not just 'taking control'!]

For example, an important point that came up when reading AllAmericanBreakfast's exploration of using divination was the 'skill of discernment', and that looking at random perspectives and lenses helps train this as well. Once I got a Tarot reading that I'll paraphrase as "this person you're having an interpersonal conflict with is 100% in the wrong," and as soon as I saw the card I burst out laughing; it was clearly not resonant with my experience or the situation, and yet there was still something useful out of seeing myself react to that perspective in that way. Other times, it really is just noise. Somehow, it reminds me of baseball, where an important feature of the game is that the large majority of at-bats do not result in hits. The skill of discernment does show up in the original post--but only when I talk about sifting through the sections of Xunzi, giving the specific reasons why I dismissed some parts and thought that the quoted section was a hit that justified additional research, contemplation, and exploration. The ongoing importance of that skill to divination is basically left out.

I also hadn't realized until reading AllAmericanBreakfast's exploration how much it might help to convey that the little things mattered; my translation of the I Ching surely had a deep impact on my experience of doing divination with it, just as my Tarot deck impacted my experience of doing Tarot readings. I make a related point in passing (about which distribution of claims from a source is most useful), and SaidAchmiz's excellent comment explains it much more fully. When I think of where to go from here, matching the 'advice distribution' to some mixture of the reader and the world feels like a central point.

Expanding on that, I think the traditional style of Tarot reading mostly cares about which cards end up in which positions, drawing on the mythical associations of the card's name more than the features of the cards themselves. Whether or not cards are 'reversed' is significant, but as an orderly person and a longtime Magic: The Gathering player, I can't stand shuffling methods that randomize the orientation of the cards, so my readings leave reversals out. The tableau defines the relationships between the cards and constructs the overall perspective.

So the way I do Tarot readings is often quite simple: three cards, one for me, one for the other party, and the third for the relationship. [Another common three-card spread is 'past, present, and future'; this page that I found while writing this review suggests a few spreads and questions that I am excited to try. As is perhaps obvious, the questions seeding how to relate to the cards you draw will have a huge impact on the variety and usefulness of perspectives you will generate.] But this works in part because my deck is so beautiful and detailed; as an example, I'll do (and 'live-blog') a reading about my relationship to this post.

I am the "princet of ground": a figure in a cave, his staff planted firmly behind him, a glowing triangle (the symbol of ground) floating in his hands. The clear story is that I'm delving deep into the past, looking for treasure while maintaining my grounding; the subtler point is that I initially wrote "holding a glowing triangle" and then realized that the triangle in fact wasn't being touched by the princet, in a way that rhymes with my sense that I don't actually understand this fully, or haven't distilled it crisply enough.

The post is the 3 of wind: three swords piercing a heart in the middle of a storm. The meaning of this is, uh, obscure. And yet, perhaps that obscurity is the relevance? The post is not clear--it is three 'stabs at the heart of the matter' which, while they touch the point, have not cleared away the stormclouds or lightning or rain.

The relationship is "The Fountain" (or "Wheel of Fortune"): a fat figure eats and drinks at a party while their reflection is emaciated and surrounded by flame. Water pours from the fountain, causing ripples in the pool. I try out a few stories here, and none of them resonate strongly; am I the fortunate figure, reaping the rewards of having written a solid post? Is the reader the emaciated figure? Is the reflection the illusion of transparency, where there's a sense that they got it, but actually they missed it, or the post itself is insight porn? [As I say in a comment, "if someone is going to get something out of the I Ching, they're going to do it through practice, not through a summary".] The fountain resonates with gradual change, with water moving, in a way that seems hard to articulate but somehow rhymes with people reading this post, seeing things more clearly, or having more tools to see things more broadly and understand more perspectives. [One of my favorite comments about this post was posted to Facebook by a friend who had attended a Red Tent event that involved someone doing a Tarot reading; reading this post gave her a clear affordance for how to relate to it in a healthy and connective way, instead of being forced into a dilemma of whether to suppress her disbelief or cause a conflict with the other women there.]

The perspective that seems most resonant, after thinking about it for a few minutes, is that the relationship is a mixture of pride and shame. I like this post; I think it's good, I think it's an example of one of my comparative advantages as a rationalist, I am glad to have written it, and I am glad that people liked it. And also... I am ashamed that this is a 2019 post, instead of a 2015 one; that it is just an advertisement seeking to "open a door that was rightfully closed on the likes of fortune cookies and astrology to rescue things like the I Ching, that seem superficially similar but have a real depth to them," instead of having much depth of its own. And while I still pull out the deck or the I Ching for some major instances, the regular habit never quite stuck, in a way that made me suspect I wasn't being creative enough, or pushing enough towards my growth edge. ['If you are bored of reading Tarot cards,' that perspective says, 'you are not asking spicy enough questions.']

Comment by vaniver on New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast · 2020-12-21T19:56:11.621Z · LW · GW

RE: "did anyone notice?" https://www.reddit.com/r/HPMOR/comments/5fswdh/anyone_else_notice_that_hermione_mcgonagall_are/

Comment by vaniver on What is it good for? But actually? · 2020-12-17T17:15:10.303Z · LW · GW

One of the forces present in society is people striving to not just meet some moral standard, but be seen as more moral than others. This is often present both in demonstrating virtue and demonstrating the absence of vice. ["Let me show you how not racist I am!"] For much of human history, courage and strength have been important virtues, and cowardice and weakness important vices.

In England during World War I, as thousands were dying pointlessly in the trenches, pretty girls went around handing white feathers — a symbol of cowardice — to men who weren’t in uniform. [src]

Participation in and advocacy for war are often seen as evidence against personal cowardice and for personal bravery. (The slur "chickenhawk" deflates advocacy without participation, by separating out hollow signaling from substantiated signaling.) Like Kaj, my sense is that people have a baked-in sense of how good war is that's more tuned to our long evolutionary history than the recent, present, or future bits.

Comment by vaniver on The AI Timelines Scam · 2020-12-12T20:00:39.351Z · LW · GW

At the time, I argued pretty strongly against parts of this post, and I still think my points are valid and important. That said, I think in retrospect this post had a large impact; I think it kicked off several months of investigation of how language works and what discourse norms should be in the presence of consequences. I'm not sure it was the best of 2019, but it seems necessary to make sense of 2019, or properly trace the lineage of ideas?

Comment by vaniver on Relevance Norms; Or, Gricean Implicature Queers the Decoupling/Contextualizing Binary · 2020-12-12T19:55:21.621Z · LW · GW

There's a set of posts by Zack_M_Davis in a similar vein that came out in 2019; some examples are Maybe Lying Doesn't Exist, Firming Up Not-Lying Around Its Edge Cases Is Less Broadly Useful Than One Might Think, Free Speech and Triskaidekaphobic Calculators: A Reply to Hubinger on the Relevance of Public Online Discussion to Existential Risk, and this one. Overall, this was the one I liked the most at the time (as evidenced by strong-upvoting it, and only weak or not upvoting the others). It points clearly at a confusion underlying a common dichotomy, in a way that I think probably changed the discourse afterwards for the better.

Comment by vaniver on Total horse takeover · 2020-12-12T19:27:53.035Z · LW · GW

"Total horse takeover" is a phrase I've used several times since reading this post, and seems useful for building gears-level models of what transformative change might look like, and which steps involve magic that need to be examined further.

Comment by vaniver on Mental Mountains · 2020-12-12T19:24:22.952Z · LW · GW

I think this post is a useful companion to Kaj's posts; it feels like much of what 'feels settled now', but was fresh at the time, was this sort of conceptual work on what's going on with therapy and human psychology in general.

Comment by vaniver on Make more land · 2020-12-12T17:55:39.691Z · LW · GW

I think one of my favorite things about LW is its clear-eyed view of the future: things will be different, and we should pick which way to make them different. While I'm not sold on the theory of change underlying this specific proposal, I think having these sorts of proposals around, and being the sort of people who share these proposals instead of writing them off, is important; I think I've moved more in this direction over the intervening year, in part because of how positive my reaction was to this post.

Comment by vaniver on "The Bitter Lesson", an article about compute vs human knowledge in AI · 2020-12-12T16:34:21.219Z · LW · GW

I'm confused about whether or not it makes sense to put a linkpost in the review, but it seems worth considering for The Bitter Lesson. I think this might be the 'most important lesson of the 2010s', let alone 2019 specifically; I don't think the insight is unique to Sutton, but I think the combination of "obvious to unbiased onlookers" and "resisted by biased insiders" feels very familiar from the LW perspective, and it has often been useful as a handle for me when thinking about the future or my personal plans. It's the sort of thing that I expect I will look back on and think "yes, I was taking it seriously, but I should have been taking it even more seriously."

Comment by vaniver on Selection vs Control · 2020-12-12T16:19:50.934Z · LW · GW

In my ~~wayward youth~~ formal education, I studied numerical optimization, control systems, the science of decision-making, and related things, and so some part of me was always irked by the focus on utility functions and the issues with them; take this early comment of mine and the resulting thread as an example. So I was very pleased to see a post that touches on the difference between the approaches and the resulting intuitions, bringing it more into the thinking of the AIAF.

That said, I also think I've become more confused about what sorts of inferences we can draw from internal structure to external behavior, when there are Church-Turing-like reasons to think that a robot built with mental strategy X can emulate a robot built with mental strategy Y, and both psychology and practical machine learning systems look like complicated pyramids built out of simple nonlinearities that can approximate general functions (but with different simplicity priors, and thus efficiencies). This sort of distinction doesn't seem particularly useful to me from the perspective of constraining our expectations, while it does seem useful for expanding them. [That is, the range of future possibilities seems broader than one would expect if they only thought in terms of selection, or only thought in terms of control.]

Comment by vaniver on Being the (Pareto) Best in the World · 2020-12-12T15:58:49.458Z · LW · GW

Like Dr_Manhattan, I had already grokked this from Scott Adams' advice, but like Zack_M_Davis, I think the post conveys the insight well.

Comment by vaniver on Complex Behavior from Simple (Sub)Agents · 2020-12-12T15:48:54.364Z · LW · GW

With more modest preference for the company of other agents, and with partially-overlapping goals (Blue agent wants to spend time around the top and rightmost target, Red agent wants to spend time around the top and leftmost target) you get this other piece of art that I call "Healthy Friendship". It looks like they're having fun, doesn't it?

Surely both agents want to spend time around the rightmost target? Or is this in fact a rather uneven friendship?

Comment by vaniver on The Hard Work of Translation (Buddhism) · 2020-12-12T15:34:29.678Z · LW · GW

There are two things that I really like about this post; being somewhat self-aware about the type of work that it's trying to do, and also this specific attempt.

That is, contra nsheppard, I do see this as trying to do the hard work of translation, not in the sense of demonstrating that the original author meant what is rendered here in English (as, say, lsusr's translation of 'Sunzi's <<Methods of War>>' tries to do), but in the sense of attempting to regenerate the same underlying concept in a new environment. What dependencies can be used, and which can't? romeostevensit doesn't attempt to explain causation, locus of control, or cognitive behavior therapy, and just casually refers to them, as they can be assumed here, and he carefully spells out the phrase "unpleasant mental contents that don't seem to serve any purpose." Perhaps it should be called something like 'recompilation' instead of 'translation', but I think this sort of thing is an important method of rationality [like moses's comment here, I think one of the ways in which rationalists are better than skeptics is in having less of their allergy to woo].

My personal relationship to meditation is somewhat strange--lifelong sleep onset insomnia gave me something like an involuntary self-taught meditation practice. So not only am I unfamiliar with many of the traditional meditation terms, I'm also unfamiliar with many of the experiences and problems of the normal student or avoider of meditation. This makes it hard for me to tell whether or not this post helped anyone defragment their minds; for me, it mostly helped me frame why I hadn't gotten much out of my few attempts to deliberately meditate, by giving me a clearer conceptualization of what that meditation was supposed to do.

Comment by vaniver on Luna Lovegood and the Chamber of Secrets - Part 6 · 2020-12-11T19:49:28.434Z · LW · GW

I think Luna is tuned into the difference between Harry's epistemology and Hermione's, and so thinks the question is worth asking in the one case and not the other; alternatively, she is tuned into how they respond to questions of that sort, and Hermione invites them whereas Harry does not.

Comment by vaniver on Luna Lovegood and the Chamber of Secrets - Part 7 · 2020-12-11T19:44:26.148Z · LW · GW

I suppose what's missing is unnecessary information--each scene is stripped to its bare essentials.

Which, for what it's worth, I really like as a window to 'what life is like for Luna'.

Comment by vaniver on 5 Axioms of Decision Making · 2020-12-11T18:08:58.828Z · LW · GW

That is, where did 8/15 and 9/15 come from?

Let's step through the B case. I only need to track the probability of TS, because the probability of PP is 1 minus that. The RD branch contributes .9/3 = 9/30, the R branch .6/3 = 6/30, and the FS branch .3/3 = 3/30. Add those together and you get 9 + 6 + 3 = 18, and 18/30 simplifies to 9/15.

What about the A case? Here the underlying probabilities are 1, .6, and 0, so the branches contribute 10/30, 6/30, and 0/30; their sum, 16/30, simplifies to 8/15.
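In case the arithmetic is easier to follow as code, here's a minimal sketch of the same calculation. It assumes, as the fractions above imply, that the three branches (RD, R, FS) are mixed with equal weight 1/3 each; the function name and the use of Python's `fractions` module are mine for illustration, not from the original post.

```python
from fractions import Fraction

def p_ts(branch_probs):
    """Probability of TS for an equal-weight mixture of the given branches."""
    weight = Fraction(1, len(branch_probs))   # three branches, so 1/3 each here
    return sum(weight * p for p in branch_probs)

# Case B: RD, R, and FS give TS with probability .9, .6, and .3.
print(p_ts([Fraction(9, 10), Fraction(6, 10), Fraction(3, 10)]))  # 3/5, i.e. 9/15

# Case A: the underlying probabilities are 1, .6, and 0.
print(p_ts([Fraction(1), Fraction(6, 10), Fraction(0)]))          # 8/15
```

Running it prints 3/5 (which is 9/15) and 8/15, matching the hand computation above.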

Comment by vaniver on Quick Thoughts on Immoral Mazes · 2020-12-09T21:23:39.555Z · LW · GW

However, in a vertical system of advancement, where management hires are made from within, the middle ranks will at least have a memory of working on the object-level concerns of the firm.

I think this is part of why the startup nature of tech firms is important, even if all of the important firms are no longer in childhood; it means some fraction of the executive class are people who were around from the beginning, or who built the systems themselves, and so do have the relevant memory.

That said, my sense is that FAANGM are somewhere between 50% and 80% as mazy as corporations as a whole; enough less to be noticeable, but it's still probably the best lens through which to view the situation.

Comment by vaniver on Quick Thoughts on Immoral Mazes · 2020-12-09T21:19:52.399Z · LW · GW

Dictator's Handbook sees everyone as just following their incentives. Moloch is very strong in organizations closer to the dictatorship end of the spectrum, and relatively weak in organizations close to the democracy end of the spectrum.

I think this is either using a version of 'Moloch' that's too specific, or is ignoring what's bad about democracies. Moloch in democracies looks like politicians who give voters what they think they want, instead of what they 'actually' want, or who drive turnout mostly by demonizing the opposition rather than credibly promising to improve things.

Comment by vaniver on Cultural accumulation · 2020-12-09T00:17:59.802Z · LW · GW

A big way this might fail is if 2020 society knows everything between them needed to use 2020 artifacts to get more 2020 artifacts, but don’t know how to use 1200 artifacts to get 2020 artifacts.

I think this is missing another important way in which this fails: even if they knew how to use 1200 artifacts to get 2020 artifacts, by moving them back in time (without also moving their stuff) you've made them massively poorer. If, say, you transported Elon Musk and all the relevant experts back to 1200, you might find that suddenly a much higher fraction of them are working in construction and agriculture than were previously.