Posts

2020 Review Article 2022-01-14T04:58:02.456Z
The Debtor's Revolt 2021-12-26T19:32:32.980Z
2020 Review: The Discussion Phase 2021-12-15T01:12:44.746Z
[Lecture Club] Awakening from the Meaning Crisis 2021-03-08T15:22:22.626Z
Alex Irpan: "My AI Timelines Have Sped Up" 2020-08-19T16:23:25.348Z
Property as Coordination Minimization 2020-08-04T19:24:15.759Z
Rereading Atlas Shrugged 2020-07-28T18:54:45.272Z
A reply to Agnes Callard 2020-06-28T03:25:27.378Z
Public Positions and Private Guts 2020-06-26T23:00:52.838Z
How alienated should you be? 2020-06-14T15:55:24.043Z
Outperforming the human Atari benchmark 2020-03-31T19:33:46.355Z
Mod Notice about Election Discussion 2020-01-29T01:35:53.947Z
Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z
Self and No-Self 2019-12-29T06:15:50.192Z
T-Shaped Organizations 2019-12-16T23:48:13.101Z
ialdabaoth is banned 2019-12-13T06:34:41.756Z
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z
Vaniver's Shortform 2019-10-06T19:34:49.931Z
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z
Steelmanning Divination 2019-06-05T22:53:54.615Z
Public Positions and Private Guts 2018-10-11T19:38:25.567Z
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z
Compact vs. Wide Models 2018-07-16T04:09:10.075Z
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z
Turning 30 2018-05-08T05:37:45.001Z
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z
LW Migration Announcement 2018-03-22T02:18:19.892Z
LW Migration Announcement 2018-03-22T02:17:13.927Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z
Articles in Main 2016-11-29T21:35:17.618Z
Linkposts now live! 2016-09-28T15:13:19.542Z
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z
Posting to Main currently disabled 2016-02-19T03:55:08.370Z
Upcoming LW Changes 2016-02-03T05:34:34.472Z
LessWrong 2.0 2015-12-09T18:59:37.232Z

Comments

Comment by Vaniver on Is AI Alignment a pseudoscience? · 2022-01-23T22:08:18.253Z · LW · GW

Yeah, but also this is the sort of response that goes better with citations.

Like, people used to make a somewhat hand-wavy argument that AIs trained on goal X might become consequentialists which pursued goal Y, and gave the analogy of the time when humans 'woke up' inside of evolution, and now are optimizing for goals different from evolution's goals, despite having 'perfect training' in some sense (and the ability to notice the existence of evolution, and its goals). Then eventually someone wrote Risks from Learned Optimization in Advanced Machine Learning Systems in a way that I think involves substantially less hand-waving and substantially more specification in detail.

Of course there are still parts that remain to be specified in detail--either because no one has written it up yet (Risks from Learned Optimization came from, in part, someone relatively new to the field saying "I don't think this hand-wavy argument checks out", looking into it a bunch, being convinced, and then writing it up in detail), or because we don't know what we're looking for yet. (We have a somewhat formal definition of 'corrigibility', but is it the thing that we actually want in our AI designs? It's not yet clear.)

Comment by Vaniver on Hyperpalatable Food Hypothesis: A LessWrong Study? · 2022-01-23T21:47:16.494Z · LW · GW

I enjoy my first MealSquare. My fourth (I eat one meal a day) is generally "fine." Whether or not I eat a fifth (or sixth) depends on how hungry I am in a manner much more pronounced than it is for other foods.

Comment by Vaniver on Hyperpalatable Food Hypothesis: A LessWrong Study? · 2022-01-23T16:53:03.533Z · LW · GW

Well, the point isn't meant to be that the food is inherently unsatisfying. The point is meant to be that the food is within the normal range of palatability we are adapted for.

IMO you either want to go with the 'French women' approach as described in another comment, or you want to select a food that is 'bland'. The specific property I mean is a psychological reaction, and so it's going to fire for different foods for different people, but basically: when you're starting a meal you want to eat the food, and then when you've eaten enough of the food, you look at more on your plate and go "I'm not finishing that." [This is different from the "I'm too full" reaction; there have been many times that I have put MealSquares back in the fridge when I would have eaten more bread.]

One thing that I've tried, but not for long enough to get shareable data, is having the 'second half' of my day's calories be bland food. (That is, cook / order 1000 calories of tasty food, and then eat as many MealSquares as I want afterwards.) This is less convenient than a "cheat day" style of diet, but my guess is it's more psychologically easy.

Comment by Vaniver on 2020 Review: The Discussion Phase · 2022-01-18T22:24:42.072Z · LW · GW

Fourth set of prizes (which is larger than previous sets to reflect the longer time period since the third set of prizes, apologies for the delay):

$200 to:

$100 each to:

Honorable mention to Bucky on Nuclear war is unlikely to cause human extinction, which I liked but isn't eligible as Bucky was separately hired to write reviews.  

Comment by Vaniver on 2020 Review Article · 2022-01-18T22:09:20.833Z · LW · GW

Fixed.

Comment by Vaniver on A non-mystical explanation of "no-self" (three characteristics series) · 2022-01-14T18:11:03.806Z · LW · GW

Another thing that I don't quite like about that definition is that it looks like it's saying "not and" which is not quite the thing? Like I can look at that and go "oh, okay, my separate independent acausal autonomous self can be in reality, because it's impermanent." Instead I want it to be something like "the self is temporary instead of permanent, embedded instead of separate, dependent instead of independent, causal instead of acausal, <> instead of autonomous" (where I'm not quite sure what Ingram is hoping to contrast autonomous with).

Also, since I'm thinking about this, one of the things that I like about "observation" / think is a big part of Buddhist thinking that is useful to clearly explain to people, is that this is (as I understand it) not an axiom that you use to build your model of the world, but a hypothesis that you are encouraged to check for yourself (in the same way that we might have physics students measure the amount of time it takes for objects to drop, and (ideally) not really expect them to believe our numbers without checking them themselves). "You think your self isn't made of parts? Maybe you should pay attention to X, and see if you still think that afterwards."

Comment by Vaniver on Circling as Cousin to Rationality · 2022-01-13T18:20:03.719Z · LW · GW

This post is hard for me to review, because I both 1) really like this post and 2) really failed to deliver on the IOUs. As is, I think the post deserves highly upvoted comments that are critical / have clarifying questions; I give some responses, but not enough that I feel like this is 'complete', even considering the long threads in the comments.

[This is somewhat especially disappointing, because I deliberately had "December 31st" as a deadline so that this would get into the 2019 review instead of the 2020 review, and had hoped this would be the first post in a sequence that would be remembered fondly instead of something closer to 'a shout into the void'; also apparently I was tricked by the difference between server time and local time or something, and so it's being reviewed now instead of last year, one of the oldest posts instead of one of the newest.]

And so it's hard to see the post without the holes; it's hard to see the holes without guilt, or at least a lingering yearning.

The main thing that changed after this post is that some Circlers reached out to me; overall, I think the reception of this post in the Circling world was positive. I don't know if the rationalist world thought much differently about Circling; I think the pandemic killed most of the natural momentum it had, and there wasn't any concerted push (that I saw) to use Circle Anywhere, which might have kept the momentum going (or spread it).

Comment by Vaniver on The "Outside the Box" Box · 2022-01-13T16:38:09.323Z · LW · GW

I think it's not the case that "neural networks" as discussed in this post made AlphaGo. That is, almost all of the difficulty in making AlphaGo happen was picking which neural network architecture would solve the problem / buying fast enough computers to train it in a reasonable amount of time. A more recent example might be something like "model-based reinforcement learning"; for many years 'everyone knew' that this was the next place to go, while no one could write down an algorithm that actually performed well.

I think the underlying point--if you want to think of new things, you need to think original thoughts instead of signalling "I am not a traditionalist"--is broadly correct even if the example fails.

That said, I agree with you that the example seems unfortunately timed. In 2007, some CNNs had performed well on a handful of tasks; the big wins were still ~4-5 years in the future. If the cached wisdom had been "we need faster computers," I think the cached wisdom would have looked pretty good.

Comment by Vaniver on A non-mystical explanation of "no-self" (three characteristics series) · 2022-01-13T00:47:24.158Z · LW · GW

I like what this post is trying to do more than I like this post. (I still gave it a +4.)

That is, I think that LW has been flirting with meditation and similar practices for years, and this sort of 'non-mystical explanation' is essential to make sure that we know what we're talking about, instead of just vibing. I'm glad to see more of it.

I think that no-self is a useful concept, and had written a (shorter, not attempting to be fully non-mystical) post on the subject several months before. I find myself sort of frustrated that there isn't a clear sentence that I can point to, which identifies what no-self is, like "no-self is the observation that the 'self' can be reduced to constituent parts instead of being ontologically basic."

But when I imagine Kaj reading the previous paragraph, well, can't he point out that there's actually a class of insights here, rather than just a single concept? For example, I didn't include in that sentence that you can introspect into the process by which your mind generates your perception of self, or the way in which a sense of self is critical to the planning apparatus, or so on. I'm making the mistake he describes in the second paragraph, of pointing to something and saying "this is enlightenment" instead of thinking about the different enlightenments.

Even after that (imagined) response, I still have some sense that something is backwards. The section heading ("Early insights into no-self") seems appropriate, but the post title ("a non-mystical explanation") seems like overreach. The explanation is there, in bits and pieces, but it reads somewhat more like an apology for not having a real explanation.

[For example, the 'many insights' framing makes more sense to me if we have a map or a list of those insights, which I think we don't have (or, even if some Buddhist experts have it, it's not at all clear we'd trust their ontology or epistemology). To be fair, I think we haven't built that map/list for rationality either, but doing that seems like an important task for the field as a whole.]

Comment by Vaniver on Brain Efficiency: Much More than You Wanted to Know · 2022-01-07T19:24:49.466Z · LW · GW

But if the brain is already near said practical physical limits, then merely achieving brain parity in AGI at all will already require using up most of the optimizational slack, leaving not much left for a hard takeoff - thus a slower takeoff.

While you do talk about stuff related to this in the post / I'm not sure you disagree about facts, I think I want to argue about interpretation / frame.

That is, efficiency is a numerator over a denominator; I grant that we're looking at the right numerator, but even if human brains are maximally efficient by denominator 1, they might be highly inefficient by denominator 2, and the core value of AI may be being able to switch from denominator 1 to denominator 2 (rather than being a 'straightforward upgrade').

The analogy between birds and planes is probably useful here; birds are (as you would expect!) very efficient at miles flown per calorie, but if it's way easier to get 'calories' through chemical engineering on petroleum, then a less efficient plane that consumes jet fuel can end up cheaper. And if what's economically relevant is "top speed" or "time it takes to go from New York to London", then planes can solidly beat birds. I think we were living in the 'fast takeoff' world for planes (in a technical instead of economic sense), even tho this sort of reasoning would have suggested there would be slow takeoff as we struggled to reach bird efficiency.

The easiest disanalogy between humans and computers is probably "ease of adding more watts"; my brain is running at ~10W because it was 'designed' in an era when calories were super-scarce and cooling was difficult. But electricity is super cheap, and putting 200W through my GPU and then dumping it into my room costs basically nothing. (Once you have 'datacenter' levels of compute, electricity and cooling costs are significant; but again substantially cheaper than the costs of feeding similar numbers of humans.)

A second important disanalogy is something like "ease of adding more compute in parallel"; if I want to add a second GPU to my computer, this is a mild hassle and only takes some tweaks to work; if I want to add a second brain to my body, this is basically impossible. [This is maybe underselling humans, who make organizations to 'add brains' in this way, but I think this is still probably quite important for timeline-related concerns.]

Comment by Vaniver on What are sane reasons that Covid data is treated as reliable? · 2022-01-01T23:04:05.923Z · LW · GW

Primary source material (CDC data tracker) is better than secondary source interpretation (CNN COVID newsfeed).

One of the points of the OP seems to be that aggregations like the CDC data tracker are not themselves primary source material. Like, the chain goes "person provides sample" -> "sample gets processed" -> "result gets recorded locally" -> "result gets aggregated nationally", and each of those steps feels like it has some possibility for error or bias or whatever. That CNN is even further from the ground seems useful to know, but doesn't tell us how connected the CDC is.

Comment by Vaniver on 2020 Review: The Discussion Phase · 2022-01-01T18:51:16.436Z · LW · GW

I continue to be impressed by the reviews that are coming out; keep it up! :D

Third set of prizes:

Honorable mention (since he works for Lightcone, and so is ineligible for prizes) to Ben Pace's Controversial Picks for the 2020 Review.

$200 to:

$100 each to:

Comment by Vaniver on Studies On Slack · 2022-01-01T18:23:05.761Z · LW · GW

Looking at the paper, I think I wasn't tracking an important difference.

I still think that genes that have reached fixation among a population aren't selected for, because you don't have enough variance to support natural selection. The important thing that's happening in the paper is that, because they have groups that colonize new groups, traits can reach fixation within a group (by 'accident') and then form the material for selection between groups. The important quote from the paper:

The total variance in adult numbers for a generation can be partitioned on the basis of the parents in the previous generation into two components: a within-populations component of variance and a between-populations component of variance. The within-populations component is evaluated by calculating the variance among D populations descended from the same parent in the immediately preceding generation. The between-populations component is evaluated by calculating the variance among groups of D populations descended from different parents. The process of random extinctions with recolonization (D) was observed to convert a large portion of the total variance into the between-populations component of the variance (Fig. 2b), the component necessary for group selection. 

So even tho low fecundity is punished within every group (because your groupmates who have more children will be a larger part of the ancestor distribution), some groups will, by founder effects, have low fecundity and be inbred enough that there's not enough fecundity variance to differentiate between members of that group (even if fecundity varies among all beetles, since they're not a shared breeding population).
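
[In symbols (my notation, not the paper's): if $F$ is an individual's fecundity and $g$ indexes which subpopulation it belongs to, the decomposition the paper uses is the law of total variance,

$$\operatorname{Var}(F) = \underbrace{\mathbb{E}\big[\operatorname{Var}(F \mid g)\big]}_{\text{within-populations}} + \underbrace{\operatorname{Var}\big(\mathbb{E}[F \mid g]\big)}_{\text{between-populations}},$$

and extinction-with-recolonization matters because it moves variance into the second term, the only component group selection can act on.]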

 

[EDIT] That is, I still think it's correct that foxes sharing 'the fox genome' can't fix boom-bust cycles for all foxes, but that you can locally avoid catastrophe in an unstable way.

For example, in some species there's a gene that causes fathers to only have sons. This is fascinating because it 1) is reproductively successful in the early stage (you have twice as many chances to be a father in the next generation as someone without the copy of the gene, and all children need to have a father) and it 2) leads to extinction in the later stage (because as you grow to be a larger and larger fraction of the population, the total number of descendants in the next generation shrinks, with there eventually being a last generation of only men). The reason this isn't common everywhere is group selection; any subpopulations where this gene appeared died out, and failed to take other subpopulations down with them because of the difficulty of traveling between subpopulations. But this is 'luck' and 'survivor recolonization', which are pretty different mechanisms than individual selection.
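
[A toy simulation of that dynamic, with assumptions I'm inventing purely for illustration (random mating, two offspring per female, the gene carried only by males and passed to all of a carrier's sons), shows the gene's frequency among males rising every generation even as the number of females, and eventually the whole subpopulation, collapses:]

```python
import random

def simulate(n_females=500, n_males=500, n_carrier_males=5, generations=40):
    """Toy model of a 'sons only' gene: carrier males sire only sons, and every
    son of a carrier inherits the gene; non-carrier males sire offspring with a
    50/50 sex ratio. Each female mates with a random male and has two offspring.
    (These assumptions are mine, chosen for illustration, not taken from any paper.)"""
    for gen in range(generations):
        frac_carrier = n_carrier_males / n_males if n_males else 0.0
        print(f"gen {gen:2d}: females={n_females:4d} males={n_males:4d} carrier_frac={frac_carrier:.2f}")
        if n_females == 0 or n_males == 0:
            print("last generation reached; subpopulation goes extinct")
            break
        new_females = new_males = new_carriers = 0
        for _ in range(n_females):
            father_is_carrier = random.random() < frac_carrier
            for _ in range(2):  # two offspring per mother
                if father_is_carrier:
                    new_males += 1
                    new_carriers += 1   # carrier fathers have only sons, all carriers
                elif random.random() < 0.5:
                    new_males += 1      # non-carrier son
                else:
                    new_females += 1    # non-carrier daughter
        n_females, n_males, n_carrier_males = new_females, new_males, new_carriers

simulate()
```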

Comment by Vaniver on Studies On Slack · 2022-01-01T05:26:18.995Z · LW · GW

An explanation here is that the inbred beetles of the study are becoming progressively more inbred with each generation, meaning that genetic-controlled fecundity-limiting changes will tend to be shared and passed down. Individual differences will be progressively erased generation by generation, meaning that as time goes by, group selection may increasingly dominate individual competition as a driver of selection.

I don't think this adds up. Yes, species share many of their genes--but then those can't be the genes that natural selection is working on! And so we have to explain why the less fecund individuals survived more than the more fecund individuals. If that's true, then this is just an adaptive trait going to fixation, as is common (and isn't really a group selection thing).

Comment by Vaniver on Why did Europe conquer the world? · 2021-12-29T06:30:48.241Z · LW · GW

Think about the economic pressures that promote mechanization. Optimal conditions combine tremendous wealth with a labor shortage. In ancient China, technology was used to harness excess human labor. Paul Polak built a poverty-alleviation program out of harnessing cheap labor in modern India. You're not going to invest in primitive steam engines when human labor is cheaper than coal.

I guess I don't see why I would expect mechanization to be important, given this argument. If labor is expensive, I get why it makes sense to invest more in substitutes for labor. But... shouldn't that just lower the cost of labor to the level of places where labor is cheap? If labor is cheaper than coal, why didn't the other places make the things with labor that Britain made with coal?

I think there's an argument that the ceiling for mechanization is much higher, because you can plug machines into other machines more easily than you can plug human laborers into other human laborers, and there's transfer between applications for different machines, or something like this. But I somehow think this is the interesting story, and the 'but they had cheap labor so they didn't need machinists' isn't the interesting story. Like, I almost have an easier time buying "Britain, as a colder country, had higher demand for domestic use of coal than the Ottomans / China / India, and so invested more heavily in coal mining tech, which then turned out to be useful for industrialism more generally." Or, "Britain, as a country with more useful water power, had an easier time making powered machines and had more of a maritime culture than those three countries."

Comment by Vaniver on Why did Europe conquer the world? · 2021-12-29T06:18:15.569Z · LW · GW

I have not read Gregory Clark. What kind of "genetic changes" and "middle-class values" does Gregory Clark write about?

This is my memory of reading it years ago, and perhaps I'm wrong in details. That said, the book roughly argues:

England has very good records for wills, which tell you both 1) how rich someone was at death and 2) how many surviving children they had. Also, England had primogeniture, where the bulk of parental wealth passes to the oldest child, instead of being split (as is more common in China). So he's able to figure out the relationship between wealth and fertility, and roughly finds that there's significant downward social mobility in Britain over this time period, as richer people have more surviving children, and later children are more likely to become members of the lower social strata (the third son of a wealthy landholder themselves becoming a smallholder, as they don't inherit any of the major estate, for example). As well, he has evidence that things like the death penalty for murder were pursued somewhat more effectively in Britain than in other places, further having an effect on the distribution of ancestors.

The punchline is that the "nation of shopkeepers" quote (from Napoleon) is sort of genetically accurate, in that today's farmers were more likely to be descended from people one social stratum higher than farmers, and so on.

I think the weakest part of the book is his analysis of China; some commentary I've seen is that we should expect the situation in China to be even more this way than the situation in Britain.

Comment by Vaniver on microCOVID.org: A tool to estimate COVID risk from common activities · 2021-12-23T16:17:12.091Z · LW · GW

I think microCOVID was a hugely useful tool, and probably the most visibly useful thing that rationalists did related to the pandemic in 2020.

In graduate school, I came across micromorts, and so was already familiar with the basic idea; the main innovation for me in microCOVID was that they had collected what data was available about the infectiousness of activities and paired it with an updating database on case counts.

While the main use I got out of it was group house harmony (as now, rather than having to carefully evaluate and argue over particular activities, people could just settle on a microCOVID budget and trust each other to do calculations), I think this is an example of a generally useful tool of 'moving decision-relevant information closer to decision-making,' a particularly practical sort of fighting against ignorance. If someone only has a vague sense of what things carry what risks, they will probably not make as good choices as someone who sees the price tag on all of those activities. 
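
[For concreteness, a sketch of the kind of arithmetic such a tool does; the parameters below are placeholders I made up, not microCOVID's actual numbers or model. The core move is multiplying an activity's per-person-hour transmission risk by the chance that a given contact is currently infectious, which is where the case-count database comes in:]

```python
def microcovids(person_hours, transmission_per_hour, prevalence, underreporting=1.0):
    """Rough microCOVID-style estimate (placeholder parameters, not the site's model):
    probability of infection from an activity, expressed in millionths (microCOVIDs)."""
    p_contact_infectious = prevalence * underreporting   # chance a given contact is currently infectious
    p_infection = person_hours * transmission_per_hour * p_contact_infectious
    return p_infection * 1_000_000

# e.g. two hours indoors with three people, in a region with 0.5% reported active cases,
# assuming (purely for illustration) 6% transmission risk per person-hour and 2x underreporting:
print(microcovids(person_hours=2 * 3, transmission_per_hour=0.06, prevalence=0.005, underreporting=2.0))
# -> 3600.0 microCOVIDs, i.e. ~0.36% chance of infection under these made-up numbers
```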

Comment by Vaniver on What Money Cannot Buy · 2021-12-23T16:02:53.423Z · LW · GW

I think this post labels an important facet of the world, and skillfully paints it with examples without growing overlong. I liked it, and think it would make a good addition to the book.

There's a thing I find sort of fascinating about it from an evaluative perspective, which is that... it really doesn't stand on its own, and can't, as it's grounded in the external world, in webs of deference and trust. Paul Graham makes a claim about taste; do you trust Paul Graham's taste enough to believe it? It's a post about expertise that warns about snake oil salesmen, while possibly being snake oil itself. How can you check? "there is no full substitute for being an expert yourself."

And so in a way it seems like the whole rationalist culture, rendered in miniature: money is less powerful than science, and the true science is found in carefully considered personal experience and the whispers of truth around the internet, more than the halls of academia.

Comment by Vaniver on Get Set, Also Go · 2021-12-23T15:40:40.177Z · LW · GW

OpenAI is giving their AI access to the internet in a known-to-be-exploitable-way during training. If you thought we were going to get killed by an AGI but at least maybe we would die with dignity, this is the exact opposite of dignity. I know many of my readers, especially new readers, aren’t that up on or invested in the question of AI Safety, but even a completely average person should be able to understand why rule number one is ‘for the love of God at a bare minimum you don’t give your AI access to the internet,’ seriously, what the hell. Could we at least pretend to try to take some precautions?

While I agree that giving your AGI-in-training access to the internet is quite possibly a "you lose" style of mistake, I... feel like there has to be some line, and OpenAI explicitly mentioned that they thought they were on the "it's fine" side of the line, and that treating the situation like they aren't pretending to try to take some precautions is a mistake.

I think there's a deeper argument that you might be trying to 'imply by italics', or something, which is that there's winner's curse reasons to think that dangerous research will be done by the people least able to assess the danger of the research. Also, specialists in a field might not see a reason to do society-wide cost-benefit analyses, instead of local cost-benefit analyses (which will probably diminish the scale of costs more than the scale of gains). See coronavirus research happening in a BSL-2 lab, for example. 

But as written this paragraph sounds like "as soon as you start thinking about AI, you should just unplug your computer from the internet, regardless of what program you're running." Which... I can sort of see the case for, but requires more explained inferential steps than you're laying out here to seem reasonable.

Comment by Vaniver on Worst-case thinking in AI alignment · 2021-12-23T15:17:55.511Z · LW · GW

When you’re considering between a project that gives us a boost in worlds where P(doom) was 50% and projects that help out in worlds where P(doom) was 1% or 99%, you should probably pick the first project, because the derivative of P(doom) with respect to alignment progress is maximized at 50%.

Many prominent alignment researchers estimate P(doom) as substantially less than 50%. Those people often focus on scenarios which are surprisingly bad from their perspective basically for this reason.

And conversely, people who think P(doom) > 50% should aim their efforts at worlds that are better than they expected.

This section seems reversed to me, unless I'm misunderstanding it. If "things as I expect" are P(doom) 99%, and "I'm pleasantly wrong about the usefulness of natural abstractions" is P(doom) 50%, the first paragraph suggests I should aim my efforts at the "better than expected" / "surprisingly good" world, because the marginal impact of effort is higher in that world.

[Another way to think about it: being surprised in the direction you already expect is extremizing, but logistic success has its highest derivative in the middle, i.e. it is a moderating force.]
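
[To spell out the logistic point: if we model P(success) as a logistic function of alignment progress, $\sigma(x) = 1/(1 + e^{-x})$, then

$$\frac{d\sigma}{dx} = \sigma(x)\bigl(1 - \sigma(x)\bigr),$$

which is maximized when $\sigma(x) = 1/2$; marginal progress buys the most probability in worlds where success is roughly a coin flip, not in worlds where it is nearly certain or nearly hopeless.]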

Comment by Vaniver on 2020 Review: The Discussion Phase · 2021-12-23T03:11:33.220Z · LW · GW

Great reviews so far! :D

Second set of prizes:

$200 to:

$100 each to:

[edited to increase Vanessa's prize amount; Ray convinced me that rather than going back at the end to give out larger prizes to a wider pool, more signal as we go is more useful.]

Comment by Vaniver on “A Harmful Idea” · 2021-12-22T18:36:02.201Z · LW · GW

I would strongly encourage all readers to contact, e.g., a professor whose work focuses on white supremacy and ask them about Beckstead’s passage. Indeed, give them the whole chapter to read. That’s what I did. That’s actually how I came to this conclusion.

I think 'white supremacy' is, unfortunately, a pretty loaded term in a culture war, which will almost necessarily lead to people talking past each other. ["I'm not the KKK!" -> "I wasn't saying you're like the KKK, I'm saying that you're perpetuating a system of injustice."]

I think that often when this accusation is levied, it's done by someone who is trying to be less selfish against someone who is probably being more selfish. For example, if I were to talk about immigration restrictions as being white supremacist because they structurally benefit (more white) citizens at the expense of (less white) non-citizens, you could see how the label might fit (even tho it might not, at all, be the frame chosen by the immigration restrictionist side, especially in a place like France which has done quite a lot to detach citizenship and race), and also how someone interested in fairness might immediately lean towards the side paying for the transfer instead of receiving it.

I think this is probably not the case here, where I think Bostrom and Beckstead and others have identified moral patients who we could help if we chose to, and people interested in social justice have identified moral patients who we could help if we chose to, and so both sides are pushing against selfishness and towards a more fair, more equal future; the question is how to compare the two, and I think terms of abuse probably won't help.

Comment by Vaniver on 2020 Review: The Discussion Phase · 2021-12-21T20:09:59.854Z · LW · GW

2020 was a weird year due to COVID, with lots of posts that were important and urgent but not timeless, which led some people to ask: how should that impact the review?

I say: it's useful to reflect on things even if they weren't timeless, if they were important, which I think many early writings on COVID were. What impact did the post have? What could have been written instead? I'd focus less on "this is what I would have written if I knew then what I know now", but instead "how well were we reasoning under uncertainty?".

As well, the review is connected to making a book but is not just about the book. As lsusr points out, we're not going to put Harry Potter fanfiction in a book sold for revenue, but it's still worth thinking about that post. We're experimenting with donation buttons this year, and I suspect that those will see some use for things that might not be useful to read again but which were very useful to read the first time.

Comment by Vaniver on Confusion about Sequences and Review Sequences · 2021-12-21T18:26:33.296Z · LW · GW

It looks like the sequences from the 2018 book (like Curiosity) are both 1) not visible in the Library and 2) made last month. [The Sequence The Less Wrong Review is of announcements, not the posts from the review.]

Comment by Vaniver on We'll Always Have Crazy · 2021-12-15T06:07:50.945Z · LW · GW

Five hundred years ago, Christianity specifically felt free to claim that

Also that they alone could appoint legitimate monarchs; that they could free people from their oaths of loyalty; that they could execute people at whim; that it would be improper for any clergyman to be tried in regular criminal courts; and so on.

Comment by Vaniver on We'll Always Have Crazy · 2021-12-15T06:00:22.115Z · LW · GW

See also Emile Durkheim, and the "suitable level of crime." That is, as behavior gets better, standards increase, so there is always the same amount of "crime" happening; "moral panics" often occurred when the crime rate was unusually low (by previous standards), and so society could now pay attention to a new class of infractions (that previously had merely been annoyances).

Comment by Vaniver on 2020 Review: The Discussion Phase · 2021-12-15T01:50:07.633Z · LW · GW

Also, preregistering a comment (to be posted after this phase) with the sha256 of 0f586288e793603c6275fc11dd11960efd236fb106bf1ec91f780f6a88306fba 
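
[For anyone unfamiliar with the trick, a minimal sketch of how such a hash commitment works; the text below is a stand-in, not the actual preregistered comment:]

```python
import hashlib

committed_text = "the comment I intend to post later"  # stand-in, not the real preimage
digest = hashlib.sha256(committed_text.encode("utf-8")).hexdigest()
print(digest)  # post the hash now; reveal committed_text later so anyone can verify it matches
```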

Comment by Vaniver on 2020 Review: The Discussion Phase · 2021-12-15T01:37:23.107Z · LW · GW

First set of prizes:

$100 each to:

I was going to give Ruby a prize for his review of How uniform is the neocortex?, but Raemon informs me that Lightcone Infrastructure Team members are ineligible, so retracting that prize.

Comment by Vaniver on 2020 Review: The Discussion Phase · 2021-12-15T01:31:35.531Z · LW · GW

Review prizes will be posted in a response to this comment.

Some logistical notes: I'm going to be announcing prizes roughly every week during the discussion phase, focusing on reviews made since the last prizes were announced. By default, we'll wait until the end of the period to actually award the prizes (so that we only need to send people one larger transfer if they win multiple prizes), but if you'd like it sooner just send a private message to me or Raemon with your paypal address. 

Comment by Vaniver on Rereading Atlas Shrugged · 2021-12-13T18:34:45.279Z · LW · GW

Self-review: I still like this post a lot; I went through and changed some punctuation typos, but besides that I think it's pretty good.

There are a few things I thought this post did.

First, be an example of 'rereading great books', tho it feels a little controversial to call Atlas Shrugged a great book. The main thing I mean by that is that it captures some real shard of reality, in a way that looks different as your perspective changes and so is worth returning to, rather than some other feature related to aesthetics or skill or morality/politics.

Second, point at an ongoing struggle and yearn for a synthesis of two contradictory responses to it. This is where I most hoped to see followup, and don't think I saw (or made) much.

Third, maybe make Rand more accessible to people turned off by various features of her work (or the reaction to it).

As most of the post is about 1) my reaction to 2) a fictional work, I'm not sure there's much that can be tested. Most of the comments were from people who were already Objectivists, either drawn to LW by this post or lurkers who popped up to comment; I'd be curious to see how much of what I saw in the book other people would see, if they read through it now.

Comment by Vaniver on Double Crux — A Strategy for Mutual Understanding · 2021-12-13T00:48:42.159Z · LW · GW

I think you mean lifelonglearner.

Comment by Vaniver on There is essentially one best-validated theory of cognition. · 2021-12-10T21:52:36.267Z · LW · GW

Why do they separate out the auditory world and the environment?

Comment by Vaniver on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-03T17:48:34.070Z · LW · GW

FWIW I agree that bit also rang hollow to me--my sense was also that neurons are basically as energy-efficient as you can get--but by "computational efficiency" one means something like "amount of energy expended to achieve a computational result."

For example, imagine multiplying two four-digit numbers in your head vs. in a calculator. Each transistor operation in the calculator will be much more expensive than each neuron spike; however, the calculator needs many fewer transistor operations than the brain needs neuron spikes, because the calculator is optimized to efficiently compute those sorts of multiplications whereas the brain needs to expensively emulate the calculator. Overall the calculator will spend fewer joules than the brain will.

Comment by Vaniver on Frame Control · 2021-12-01T02:22:56.222Z · LW · GW

The thing is, if “frame” is just another way of saying [insert list of various ways of saying “people sometimes think about a thing in one way and sometimes in another way”], then the concept is so diffuse, general, and banal as to not be worth elevating to any special status.

Huh, I find this surprising, mostly because I'm not sure about the "special status" claim.

It seems to me like there's something of a dilemma here--either the concept is obvious (at which point being diffuse or general is not much of a drawback), and so the problem with the post is that it is 'reinventing the wheel', or the concept is nonobvious (and thus we can't be sure we're pointing at the same thing, and being diffuse now makes this communication much more difficult). Up until this point, I had gotten the second impression from you (stuff like "Without knowing what you mean by the word, I cannot answer your question."), and not something like "wait, is this just rediscovering 'maps' from the map-territory distinction?".

Also, I think that while this sort of "noticing maps" is basic rationality, it empirically does not seem obvious to everyone, and I think people finding it non-obvious or difficult to talk about or so on is interesting. That is, I don't see this post as trying to make "frame" any more special a word than "perspective" or "standpoint" or so on; I see this post as trying to make more people both 1) see frame differences and 2) see frame manipulation, especially the sort of frame manipulation that tries to not be seen as frame manipulation.

[To be clear, I share some of your sense that 'someone who had traumatic experiences around frame manipulation' is probably not an unbiased source of information/frames about frames, and is likely more allergic / less likely to see that the same knife can be used constructively and destructively. I nevertheless put frames in the "general, basic, and useful concept" category, whereas you seem pretty sure they're a bad frame.]

Comment by Vaniver on Frame Control · 2021-11-30T19:05:38.562Z · LW · GW

Are these “frames”, or “frame shifts”, etc.? If not: why not? If so: why did you not recognize them as such?

Sorry, I think my previous sentence was unclear. I think 2-Place and 1-Place Words uses, without formalizing it, the thing I am trying to point at with "frames", and so when I imagine that article without any pointers to frames, I don't think it's convincing (and I'm not sure how Eliezer would have thought of it in the first place without something like frames).

For example, in the paragraph you quote he uses the word "standpoint." When I interpret that as "the position and orientation of the metaphorical camera through which the situation is observed", i.e. a stand-in for frames, the sentence compiles and the paragraph makes sense. When I delete that meaning, the paragraph now seems confused.

[Put another way, if I don't come into that article with the sense that different observers can assign sexiness differently, the article doesn't generate that sense. It uses that sense to explain something about language. This would maybe be more obvious if we swapped out 'sexiness' for something like 'justice', and imagine the article being read by a moral realist who is convinced that there is one true Justice.]

The fact is that “frames” comes with all sorts of conceptual baggage, which, it seems to me, is clearly inapplicable in the case of the linked post (and many—perhaps most?—other cases).

This seems interesting to me. Let's consider the alternative post Aella could have written which talks about "perspective control"; I suspect it hits many of the same points and has many of the same conclusions. [If it seems more or less valid to you, that seems like it would be good to hear!]

In particular, imagine an architect trying to get their building design to win a competition, but they think their building is pretty from the south and ugly from the east; they might make lots of moves that by themselves are innocuous and yet add up to controlling the judges so that they have an overly positive view of the design. If we wanted to talk about what that architect is doing wrong, I think 'perspective control' might be a solid label.

I think what happens when we use 'frame' instead of 'perspective' is that we're generalizing. Our architect controlled which part of the design the judges saw, but they could also try to control something like "how the judges think about design"; saying something about how minimalism is futuristic might cause the judges to not dock points for the lack of embellishments because they don't want to be seen as stuck in the past. The strategic aim is roughly the same as the architect trying to not have the judges see the east face of the building, but the tactical methodology is quite different and operating on a different level of cognition. [One could still talk about "minimalism as futuristic" as being part of one's perspective or standpoint or so on, but this is now clearly in a metaphorical rather than literal sense.]

Possibly this is where the conceptual baggage comes in? Now, rather than just having a simple physical analogy for visual cognition, we have to analogize across the whole cognitive and interpersonal stack. It might be better to keep different layers and regions separate, tho this is genuinely harder because not everyone will have arranged their cognitive and interpersonal stacks in the same way, and organisms live end-to-end in a way that makes the systems less truly modular than the human reverse-engineer would hope.

Comment by Vaniver on Omicron Variant Post #2 · 2021-11-30T18:23:58.754Z · LW · GW

Kathy Hochul has no idea how exponential growth works. Clearing hospital capacity now and stopping elective surgery now is exactly the opposite of any reasonable procedure. Even in the worst case scenarios, Omicron won’t have much impact on case numbers for several weeks. After that, however long we have, we will need all the capacity we can get. The time to get other stuff out of the way is now.

I think probably the most important "other stuff" is health care workers taking vacations (so that they don't quit / kill themselves / etc. in the next few months), and some sort of coordinated reduction in workload to enable that seems like a good idea.

Comment by Vaniver on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-30T18:07:34.965Z · LW · GW

So it looks like the R-7 (which launched Sputnik) was the first ICBM, and its range is way longer than that of the V-2s of ~15 years earlier, but I'm not easily finding a graph of range over those intervening years. (And the R-7 range is only about double the range of a WW2-era bomber, which further smooths the overall graph.)

[And, implicitly, the reason we care about ICBMs is because the US and the USSR were on different continents; if the distance between their major centers was comparable to England and France's distance instead, then the same strategic considerations would have been hit much sooner.]

Comment by Vaniver on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-11-30T17:55:42.522Z · LW · GW

People often refer to this idea as a "lonely engineer", tho I see only some discussion of it on LW (like here).

Comment by Vaniver on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-11-30T17:54:18.903Z · LW · GW

what confuses me though is that "is general reasoner" and "can support cultural evolution" properties seemed to emerge pretty much simultaneously in humans -- a coincidence that requires its own explanation (or dissolution).

David Deutsch (in The Beginning of Infinity) argues, as I recall, that they're basically the same faculty. In order to copy someone else / "carry on a tradition", you need to model what they're doing (so that you can copy it), and similarly for originators to tell whether students are correctly carrying on the tradition. The main thing that's interesting about his explanation is how he explains the development of general reasoning capacity, which we now think of as a tradition-breaking faculty, in the midst of tradition-promoting selection.

If you buy that story, it ends up being another example of treacherous turn from human history (where individual thinkers, operating faster than cultural evolution, started pursuing their own values).

Comment by Vaniver on Christiano, Cotra, and Yudkowsky on AI progress · 2021-11-30T17:44:11.264Z · LW · GW

presumably we saw a discontinuous jump in flight range when Sputnik entered orbit.

While I think orbit is the right sort of discontinuity for this, I think you need to specify 'flight range' in a way that clearly favors orbits for this to be correct, mostly because about a month earlier there was the manhole cover that got launched/vaporized with a nuke.

[But in terms of something like "altitude achieved", I think Sputnik is probably part of a continuous graph, and probably not the most extreme member of the graph?]

Comment by Vaniver on Frame Control · 2021-11-30T17:29:13.366Z · LW · GW

The epistemics are super hard, because the thing being pointed to is subtle and there isn’t really a recipe for identifying it. Different people are affected by things in different ways, so one person may feel/be “frame controlled” in a certain context while someone else doesn’t/isn’t. And, being able to identify frame-control-type dynamics doesn’t by itself say anything about what can or should be done about them. (When I encounter someone who I realize affects me in this way, I tend to avoid ~completely.)

I think another difficulty in the epistemics is "where to place the focus" is potentially a political question. For example, choosing between Aella's father "was an abuser" and their relationship "was an abusive dynamic" seems like it could have consequences (both for what happens, how your relationships shift, and how you understand the situation). [The situation wherein both statements are clearly associated with perspectives, instead of reified truths, seems like it's most conducive to understanding.]

As you point out, different people will be affected differently by the 'same thing', but an otherwise-laudable commitment to avoid victim-blaming can move focus away from those differences and obscure part of what's happening. [But also perhaps we are well-served by an allergy to attempts to move focus, as suggested by the example of the student pointing out the teacher's error and the teacher redirecting attention.]

Comment by Vaniver on Frame Control · 2021-11-30T17:01:49.276Z · LW · GW

Such a phenomenon might be bad; it might be good, and desirable; it might be neutral; it would depend on the particulars. I can imagine two different people coming up with a characterization of something that they each called “frame control”, but with the two descriptions being of two totally different (perhaps largely unrelated!) phenomena—and one of these might be clearly good, one clearly bad, etc.

My guess is the 'natural' version of frame control is neutral, and is mostly about interpersonal dependency. (That is, what Alice thinks about X is downstream of what Bob thinks about X, and we can look at the mechanisms by which the influence flows.) There's then another natural distinction into the various sorts of influence relationships, some of which are mutualistic ("leadership") and some of which are predatory or exploitative or simply destructive, and in order to differentiate between those you need a large and complicated theory of ethics and interpersonal relationships, and these things will be interdependent. (Whether or not something counts as an 'attack' might depend on the relationship between two of the parties, but you might want to figure out their relationship by counting up the number of attacks.)

You can probably imagine an employer-employee relationship that's good for both parties, and then smoothly vary features until you get a relationship that's only good for one party, and continue varying features until you get a relationship that's good for neither party. There will be some areas where you're uncertain in between the areas where you're certain, and probably substantial disagreement between observers on where those boundaries actually are.

Comment by Vaniver on Frame Control · 2021-11-30T16:50:02.646Z · LW · GW

Certainly it seems to me that if I think and talk about the world without mentioning “frames” (nor any stand-in concept), I will not have any large holes in my portrayal.

I'm curious how you would argue something like 2-Place and 1-Place Words without using frames or a stand-in. [According to me (and another), the word 'perspective' is a stand-in.]

When I go through and try to figure out where Eliezer does it, I'm not sure he does, but also I don't think it really counts as an argument. He simply asserts Fred's error in treating sexiness as a function of two arguments instead of a function of one argument, or in identifying Fred::Sexiness as the one true Sexiness. But if Fred responds "I'm not making an error, I am using the one true Sexiness", I think pointing out the failure of imagination Fred is making will go much faster if we can talk about 'perspective'.

Comment by Vaniver on Frame Control · 2021-11-30T16:42:34.514Z · LW · GW

My reply was getting long, so I'm going to break it into a few different comments. (woo threading)

The problem is: sure, maybe you could think of things in that way. But so what? You could also just as easily not think of things in that way.

Yeah; suppose I said "you can think of an elephant as a very large person with a single tentacle for a hand." This will capture something real about elephants, imply some things that are false about elephants, and point at many possibilities that are not realized on Earth. Without some actual elephants (and non-elephants) to look at, you'll end up like the medieval bestiary artist.

What’s so special about this one? Does it allow you to make unusually accurate predictions? Does it allow you to compress / transmit information unusually efficiently / accurately? Or does it, perhaps, instead provoke you into false analogies, mistaken conclusions, salience distortion errors, or flawed reasoning of other sorts?

IMO having frames as a model helps counteract a naive bias in language, which is pointed at with 2-Place and 1-Place Words. If Fred describes a woman as sexy, I see that as a fact both about Fred's frame and about the woman's projection into Fred's frame (in the geometrical / mathematical sense). General semantics makes a big deal out of this sort of 'consciousness of projection', and they recommend including markers of it in speech (as seems helpful when one isn't operating in a context where the listeners would insert that by default). A bit from People in Quandaries:

Semantically, there is a great difference, for example, between saying "Poetry is silly" and "Poetry is silly--to me." The latter leaves poetry a leg to stand on, as it were. It reminds both the speaker and the listener that the speaker is necessarily talking about himself as well as about poetry.

I think the majority of the value comes not from simple communication tricks, but the inferences upstream and downstream of communication; "what frame could cause Fred to emit that sort of sentence?", "what can I say that will land in Fred's frame?", "how can I direct Fred's attention to his own frame?", "what's going on with my frame around this?", or so on.

It does not do to forget that this is just one perspective, and not at all a uniquely compelling one.

Yeah, I do think there's something pretty ironic about taking a device that's designed to ward against projective universality and project that it's universal. 

That said, I think there is a limited sort of universality. Suppose we're talking about point objects in a 3d space, all of them will have position coordinates, but not everything will have position coordinates (because not everything is a point object in 3d space).

I feel pretty good about statements like "humans sense the world (the 'territory') through their sensorium and infer mental constructs (the 'map') from those sensations in a multi-layered way" and see how frames fit into that picture (roughly, the whole strategy of sensation -> mental constructs, tho often we'll be interested in the consciously accessible bit at the end that goes from percepts to concepts, or how concepts relate to each other, or how our memories relate to concepts). 

That picture has some flexibility to it that makes it not very constraining. For example, the "sensorium" is defined by what it does rather than what it is, so when you show me a new sense organ the picture adapts instead of breaks, which means it's not asserting I've found all the sense organs. 

Comment by Vaniver on Frame Control · 2021-11-30T06:45:35.367Z · LW · GW

So, according to me frames are a part of how people think about the world, and so it's sort of hard to ground in words, mostly because of cognitive diversity. The concept is the mental generalization of frame of reference in physics and camera position and orientation in computer graphics (or real-world photography) to human perspectives.

So often people will have a 'frame' when they're navigating the world; some things are salient, some things are ignored, there's generally a dimension of value and relevance. This is particularly important for communication, because I'll have some perception or conception in my frame, attempt to encode it into words, and then the reader will attempt to decode those words back into percepts and concepts. Sentences only make sense in context. The previous sentence was in English, for example, and someone trying to decode it using another language will be confused, but other, subtler contexts are also important. If I say something harsh to someone, this might be evidence that we're enemies, or evidence that we're close, and figuring out my meaning requires that additional detail.

Of course, with cameras we can talk about things like position and orientation and field of view and so on, and there are only a handful of variables. For human frames, there are many more variables that we understand in a less formal way, and so it becomes much harder to discuss.


IMO if you don't think frames are real, you're probably not going to think frame control is real. I think frames are a useful model, and so I think frame control (wherein one participant in a conversation is attempting to take control of the other participant's frames) is also a useful model. [It is not obvious to me that frames are "the most obvious" model, or clearly carve reality at the joints, but I don't have a better model yet.] 

That said, I think there are lots of 'design details' that are hard to be clear on. Most communication, for example, involves trying to add details to the other person's mind, and adding details is a way to 'take control', and so being totally against frame control basically means being totally against communication. One might try some simple rules like "things that attempt to delete details from the other person's mind, or prevent them from changing their camera position, are frame control", but I think this ends up proving too much, in that it militates against policing contradictions (and dissuading biases more generally). [Incidentally, I think this is where some 'woke' pushback against 'rationality' and 'logic' comes from, as it rhymes with frame control / is used to counter "lived experience".]

Comment by Vaniver on Frame Control · 2021-11-29T23:04:44.330Z · LW · GW

It describes a made-up concept that, as far as I can tell, does not actually map to any real phenomenon (mostly this is because Aella, perplexingly, lumps together obviously outright abusive behaviors with normal, unproblematic things that normal people do every day, and then declares this heterogeneous lump to be A Bad Thing)

Hmm, do you think frames are real phenomena / natural concepts? (As all concepts are made up, I assume you mean something like "natural" as the opposite.)

Comment by Vaniver on Frame Control · 2021-11-29T23:01:32.222Z · LW · GW

The comments here seem less charitable than I would have expected, especially given the post’s score.

I think one of the important sources of pushback is this:

And this is why my general philosophy for people who frame control is “burn it with fire.” ... In this, I am a conflict theorist; this is not a mistake, this is war.

If someone wants to declare war, it seems good for people to double-check the casus belli, and point out the gaps instead of silently filling them in. ("Frame control is a thing to watch out for" and "we should exile the frame controllers" are pretty different claims.)

Comment by Vaniver on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-11-29T16:11:46.291Z · LW · GW

Good to know! [I was mostly working off of remembering kulak as a term of abuse, rather than a detailed knowledge of how the Revolution went down.]

Comment by Vaniver on The bonds of family and community: Poverty and cruelty among Russian peasants in the late 19th century · 2021-11-28T18:51:50.319Z · LW · GW

If Russian peasants were more cruel than average, are they far worse than average?

This is also interesting to me because most of the historical evidence of "how Communism actually turned out" we have is from Russia and China. One could imagine that communism won in Russia (as opposed to Germany or Britain or various other countries) because there was an unusually high level of this sort of peasant envy, and that this then made it more horrific than it otherwise would have been. [The main info I'd look for here is if we have this sort of ethnography about peasants in China before 1930 or so.]

[My guess is that Russian peasants were not outside of distribution, tho they might have been on the cruel end of the spectrum. In particular, Malthusian logic of the sort that applies to agrarian peasant societies suggests that there should always be marginal people on the edge of subsistence, and the question is just how densely packed they are.]

Comment by Vaniver on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-25T05:24:10.568Z · LW · GW

your point is simply that it's hard to predict when that will happen when you just look at the Penn Treebank trend.

This is a big part of my point; a smaller elaboration is that it can be easy to trick yourself into thinking that, because you understand what will happen with PTB, you'll understand what will happen with economics/security/etc., when in fact you don't have much understanding of the connection between those, and there might be significant discontinuities. [To be clear, I don't have much understanding of this either; I wish I did!]

For example, I imagine that, by thirty years from now, we'll have language/code models that can do significant security analysis of the code that was available in 2020, and that this would have been highly relevant/valuable to people in 2020 interested in computer security. But when will this happen in the 2020-2050 range that seems likely to me? I'm pretty uncertain, and I expect this to look a lot like 'flicking a switch' in retrospect, even tho the leadup to flicking that switch will probably look like smoothly increasing capabilities on 'toy' problems.

[My current guess is that Paul / people in "Paul's camp" would mostly agree with the previous paragraph, except for thinking that it's sort of weird to focus on specifically AI computer security productivity, rather than the overall productivity of the computer security ecosystem, and this misplaced focus will generate the 'flipping the switch' impression. I think most of the disagreements are about 'where to place the focus', and this is one of the reasons it's hard to find bets; it seems to me like Eliezer doesn't care much about the lines Paul is drawing, and Paul doesn't care much about the lines Eliezer is drawing.]

However, I suspect that the revenue curve will look pretty continuous, now that it's gone from zero to one. Do you disagree?

I think I agree in a narrow sense and disagree in a broad sense. For this particular example, I expect OpenAI's revenues from GPT-3 to look roughly continuous now that they're selling/licensing it at all (until another major change happens; like, the introduction of a competitor would likely cause the revenue trend to change).

More generally, suppose we looked at something like "the total economic value of horses over the course of human history". I think we would see mostly smooth trends plus some implied starting and stopping points for those trends. (Like, "first domestication of a horse" probably starts a positive trend, "invention of stirrups" probably starts another positive trend, "introduction of horses to America" starts another positive trend, "invention of the automobile" probably starts a negative trend that ends with "last horse that gets replaced by a tractor/car".)

In my view, 'understanding the world' looks like having a causal model that you can imagine variations on (and have those imaginations be meaningfully grounded in reality), and I think the bits that are most useful for building that causal model are the starts and stops of the trends, rather than the smooth adoption curves or mostly steady equilibria in between. So it seems sort of backwards to me to say that for most of the time, most of the changes in the graph are smooth, because what I want out of the graph is to figure out the underlying generator, where the non-smooth bits are the most informative. The graph itself only seems useful as a means to that end, rather than an end in itself.