Posts

Alex Irpan: "My AI Timelines Have Sped Up" 2020-08-19T16:23:25.348Z · score: 44 (14 votes)
Property as Coordination Minimization 2020-08-04T19:24:15.759Z · score: 39 (17 votes)
Rereading Atlas Shrugged 2020-07-28T18:54:45.272Z · score: 118 (55 votes)
A reply to Agnes Callard 2020-06-28T03:25:27.378Z · score: 91 (25 votes)
Public Positions and Private Guts 2020-06-26T23:00:52.838Z · score: 21 (6 votes)
How alienated should you be? 2020-06-14T15:55:24.043Z · score: 35 (18 votes)
Outperforming the human Atari benchmark 2020-03-31T19:33:46.355Z · score: 59 (23 votes)
Mod Notice about Election Discussion 2020-01-29T01:35:53.947Z · score: 63 (22 votes)
Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z · score: 72 (35 votes)
Self and No-Self 2019-12-29T06:15:50.192Z · score: 47 (17 votes)
T-Shaped Organizations 2019-12-16T23:48:13.101Z · score: 51 (14 votes)
ialdabaoth is banned 2019-12-13T06:34:41.756Z · score: 31 (18 votes)
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z · score: 66 (20 votes)
Vaniver's Shortform 2019-10-06T19:34:49.931Z · score: 10 (1 votes)
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z · score: 41 (9 votes)
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z · score: 43 (11 votes)
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z · score: 65 (15 votes)
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z · score: 49 (15 votes)
Steelmanning Divination 2019-06-05T22:53:54.615Z · score: 156 (64 votes)
Public Positions and Private Guts 2018-10-11T19:38:25.567Z · score: 95 (30 votes)
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z · score: 54 (22 votes)
Compact vs. Wide Models 2018-07-16T04:09:10.075Z · score: 32 (13 votes)
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z · score: 88 (21 votes)
Turning 30 2018-05-08T05:37:45.001Z · score: 75 (24 votes)
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z · score: 90 (22 votes)
LW Migration Announcement 2018-03-22T02:18:19.892Z · score: 139 (37 votes)
LW Migration Announcement 2018-03-22T02:17:13.927Z · score: 2 (2 votes)
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z · score: 6 (6 votes)
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z · score: 139 (42 votes)
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z · score: 23 (23 votes)
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z · score: 24 (24 votes)
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z · score: 8 (8 votes)
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z · score: 11 (11 votes)
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z · score: 6 (7 votes)
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z · score: 9 (9 votes)
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z · score: 12 (13 votes)
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z · score: 4 (5 votes)
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z · score: 17 (18 votes)
Articles in Main 2016-11-29T21:35:17.618Z · score: 3 (4 votes)
Linkposts now live! 2016-09-28T15:13:19.542Z · score: 27 (30 votes)
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z · score: 4 (5 votes)
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z · score: 1 (2 votes)
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z · score: 14 (15 votes)
Posting to Main currently disabled 2016-02-19T03:55:08.370Z · score: 22 (25 votes)
Upcoming LW Changes 2016-02-03T05:34:34.472Z · score: 46 (47 votes)
LessWrong 2.0 2015-12-09T18:59:37.232Z · score: 92 (96 votes)
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z · score: 1 (2 votes)
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z · score: 10 (11 votes)
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z · score: 9 (10 votes)
Meetup : Austin, TX - Schelling Day 2015-04-13T14:19:21.680Z · score: 1 (2 votes)

Comments

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-20T01:46:06.935Z · score: 11 (5 votes) · LW · GW

For what it's worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn't check out this page, and it didn't factor into my sense that curi shouldn't be on LW.

I also find it somewhat worrying that, as I understand it, the page was a combination of "quit", "evaded", and "lied", of which 'quit' is not worrying (I consider someone giving up on a conversation with curi understandable rather than shameful); that it gets wrapped up in the "&c." instead of being treated as the central example seems like it defines away my main crux.

Comment by vaniver on Draft report on AI timelines · 2020-09-19T17:11:19.497Z · score: 4 (2 votes) · LW · GW

Part 1 page 15 talks about "spending on computation", and assumes spending saturates at 1% of the GDP of the largest country. This seems potentially odd to me; quite possibly the spending will be done by multinational corporations that view themselves as more "global" than "American" or "British" or whatever, and whose fortunes are more tied to the global economy than to any national economy. At most this buys you 2-3 doublings, but that's still 4-6 years on a 2-year doubling time.
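As a back-of-the-envelope check on that arithmetic, here's a minimal sketch with illustrative 2020-ish GDP figures (my own rough numbers, not anything from the report):

```python
import math

us_gdp = 21.0     # largest-country GDP, trillions of USD (rough 2020 figure)
world_gdp = 85.0  # global GDP, trillions of USD (rough 2020 figure)

# Moving the spending cap from 1% of the largest country's GDP to 1% of world GDP
# multiplies the available spending by roughly this factor:
factor = world_gdp / us_gdp
doublings = math.log2(factor)
years = 2 * doublings  # assuming a 2-year spending doubling time

print(f"~{factor:.1f}x higher cap, ~{doublings:.1f} doublings, ~{years:.1f} extra years")
```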

Overall I'm not sure how much to believe this hypothesis; my mainline prediction is that corporations grow in power and rootlessness compared to nation-states, but it also seems likely that bits of the global economy will fracture / there will be a push to decentralization over centralization, where (say) Alphabet is more like "global excluding China, where Baidu is supreme" than it is "global." In that world, I think you still see approximately a 4x increase.

I also don't have a great sense of how we should expect the 'ability to fund large projects' to compare between the governments of the past and the megacorps of the future. It seems quite plausible to me that Alphabet, without pressure to do welfare spending / fund the military / etc., could put a much larger fraction of its resources towards building TAI, but presumably this also means Alphabet has many fewer resources than the economy as a whole (because there will still be welfare spending and military funding and so on), and on net this probably works out to roughly 1% of total GDP available for megaprojects.

Comment by vaniver on Draft report on AI timelines · 2020-09-19T16:48:40.295Z · score: 8 (4 votes) · LW · GW

Thanks for sharing this draft! I'm going to try to make lots of different comments as I go along, rather than one huge comment.

[edit: page 10 calls this the "most important thread of further research"; the downside of writing as I go! For posterity's sake, I'll leave the comment.]

Pages 8 and 9 of part 1 talk about "effective horizon length", and make the claim:

Prima facie, I would expect that if we modify an ML problem so that effective horizon length is doubled (i.e, it takes twice as much data on average to reach a certain level of confidence about whether a perturbation to the model improved performance), the total training data required to train a model would also double. That is, I would expect training data requirements to scale linearly with effective horizon length as I have defined it.

I'm curious where 'linearly' came from; my sense is that "effective horizon length" is the equivalent of "optimal batch size", which I would have expected to be a weirder function of training data size than 'linear'. I don't have a great handle on the ML theory here, tho, and it might be substantially different between classification (where I can make back-of-the-envelope estimates for this sort of thing) and RL (where it feels like it's a component of a much trickier system with harder-to-predict connections).

Quite possibly you talked with some ML experts and their sense was "linearly", and it makes sense to roll with that; it also seems quite possible that the thing to do here is to have uncertainty over functional forms. That is, maybe training data requirements scale linearly with the effective horizon, or maybe exponentially, or logarithmically, or as the inverse square root, or whatever. This would help double-check that the assumption of linearity isn't doing significant work, and if it is, point to a potentially promising avenue of theoretical ML research.
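As a toy illustration of how much the functional form can matter, here's a minimal sketch (my own illustration with made-up numbers and forms, not anything from the draft) of the implied training-data multiplier under a few candidate scalings:

```python
import math

def data_multiplier(horizon, form):
    """Toy training-data multiplier as a function of effective horizon length,
    relative to a horizon of 1, under an assumed functional form."""
    if form == "log":
        return 1 + math.log(horizon)
    if form == "sqrt":
        return math.sqrt(horizon)
    if form == "linear":
        return horizon
    if form == "quadratic":  # stand-in for super-linear scaling
        return horizon ** 2
    raise ValueError(form)

for horizon in [10, 1_000, 1_000_000]:
    print(horizon, {form: f"{data_multiplier(horizon, form):.3g}"
                    for form in ["log", "sqrt", "linear", "quadratic"]})
```

The spread between forms grows enormously with the horizon, which is one way to see how a single functional-form assumption could end up dominating the timeline estimate.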

[As a broader point, I think this 'functional form uncertainty' is a big deal for my timelines estimates. A lot of people (rightfully!) dismissed the standard RL algorithms of 5 years ago as a route to AGI because of exponential training data requirements, but my sense is that further algorithmic improvement is mostly not "it's 10% faster" but "the base of the exponent is smaller" or "it's no longer exponential," which might change whether or not it makes sense to dismiss it.]

Comment by vaniver on Draft report on AI timelines · 2020-09-19T16:11:24.839Z · score: 7 (4 votes) · LW · GW

A simple, well-funded example is autonomous vehicles, which have spent considerably more than the training budget of AlphaStar, and are not there yet.

I am aware of other examples that do seem to be happening, but I'm not sure what the cutoff for 'massive' should be. For example, a 'call center bot' is moderately valuable (while not nearly as transformative as autonomous vehicles), and I believe there are many different companies attempting to do something like that, altho I don't know how their total ML expenditure compared to AlphaStar's. (The company I'm most familiar with in this space, Apprente, got acquired by McDonalds last year, who I presume is mostly interested in the ability to automate drive-thru orders.)

Another example that seems relevant to me is robotic hands (plus image classification) at a sufficient level that warehouse pickers could be replaced by robots.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-19T15:51:34.605Z · score: 12 (4 votes) · LW · GW

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-18T16:46:03.989Z · score: 4 (2 votes) · LW · GW

So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

This is sort of a rehash of sibling comments, but I think there are two factors to consider here.

The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between "correct for <country> in <year>" and "correct everywhere and for all time."

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are what civilization figured out in order to have narrow shared goals (like "keep the peace") rather than expansive shared goals (like "as many get to Catholic Heaven as possible"). It is unclear to me whether we're better off with moral uncertainty as a generator for "narrow shared goals", and whether narrow shared goals are what we should be going for.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-18T16:39:59.309Z · score: 6 (3 votes) · LW · GW

Sometimes people are warned, and sometimes they aren't, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren't warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list.]

Comment by vaniver on Vaniver's Shortform · 2020-09-12T17:57:59.721Z · score: 22 (8 votes) · LW · GW

My boyfriend: "I want a version of the Dune fear mantra but as applied to ugh fields instead"

Me:

I must not flinch.
Flinch is the goal-killer.
Flinch is the little death that brings total unproductivity.
I will face my flinch.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the flinch has gone there will be nothing. Only I will remain.

Tho they later shortened it, and I think that one was better:

I will not flinch.
Flinch is the goal-killer.
I will face my flinch.
I will let it pass through me.
When the flinch has gone,
there shall be nothing.
Only I will remain.

Him: Nice, that feels like flinch towards

Comment by vaniver on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-02T21:59:30.308Z · score: 2 (1 votes) · LW · GW

Currently this is fixed manually for each crosspost by converting it to draft-js and then deleting some extra stuff. I'm not sure how high a priority it is to make that automatic.

Comment by vaniver on Prediction = Compression [Transcript] · 2020-09-02T18:44:00.149Z · score: 9 (3 votes) · LW · GW

This talk was announced on LW; check the upcoming events tab for more.

Comment by vaniver on Why is Bayesianism important for rationality? · 2020-09-01T18:20:04.061Z · score: 13 (3 votes) · LW · GW

I think "probabilistic reasoning" doesn't quite point at the thing; it's about what type signature knowledge should have, and what functions you can call on it. (This is a short version of Viliam's reply, I think.)

To elaborate, it's different to say "sometimes you should do X" and "this is the ideal". Like, sometimes I do proofs by contradiction, but not every proof is a proof by contradiction, and so it's just a methodology; but the idea of 'doing proofs' is foundational to mathematics / could be seen as one definition of 'what mathematical knowledge is.'

Comment by vaniver on Why is Bayesianism important for rationality? · 2020-09-01T04:50:58.554Z · score: 21 (10 votes) · LW · GW

See Eliezer's post Beautiful Probability, and Yvain on 'probabilism'; there's a core disagreement about what sort of knowledge is possible, and unless you're thinking about things in Bayesian terms, you will get hopelessly confused.

Comment by vaniver on Covid 8/27: The Fall of the CDC · 2020-08-27T21:29:37.225Z · score: 6 (4 votes) · LW · GW

I'm sure there's something stopping us, but I'm having trouble pinpointing what it is.

Presumably much of the usefulness of the CDC comes from data collection and reacting to that data; I wouldn't expect the Taiwanese CDC to be collecting data on American COVID cases.

Comment by vaniver on How More Knowledge is Making Us Dumber · 2020-08-27T20:54:25.409Z · score: 4 (2 votes) · LW · GW

See Why The Tails Come Apart, which I think is a more compelling take than "if you have too much of a good thing, you get trapped."

Comment by vaniver on Rereading Atlas Shrugged · 2020-08-27T03:55:46.062Z · score: 2 (1 votes) · LW · GW

In reality, the strike would never work, because the actual leaders of industrial society aren't all implicit Objectivists, and can't be convinced even in one of John Galt's three-hour conversations.

This doesn't seem like an obstacle to me; in the story, there are plenty of 'leaders of industrial society' who stick around until the bitter end.

And worse, if it did work, I think it would be an utter disaster—society would collapse, and it would not be easy for the strikers to come back and pick up the pieces.

I do think Rand is pretty clear about this also, although I think she still undersells it. One of the basic arguments from Adam Smith is that specialization of labor is a huge productivity booster, and the size of the market determines how much specialization it can support. If you reduced the 'market size' of the Earth from roughly one billion participants to roughly one million participants, you should expect things to get way worse, and even more so if the market size shrinks to roughly one thousand participants. (Given the number of people who are mentioned working for the various named strikers, I think a thousand is a better estimate for the number of people in Galt's Gulch than ten or a hundred, but she might have had a hundred in mind.) You can sort of get around this with imported capital, but then it's a long and lonely road back up.

Time has also been very unkind to this; you're not going to have a semiconductor industry with a thousand people, and I'm not sure about a million, either.

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-21T21:30:36.196Z · score: 4 (2 votes) · LW · GW

in part since I didn't see much disagreement.

FWIW, I appreciated that your curation notice explicitly includes the desire for more commentary on the results, and that curating it seems to have been a contributor to there being more commentary. 

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-21T21:28:51.863Z · score: 9 (5 votes) · LW · GW

I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect.

Note that there are desirable chilling effects too. I think it's broadly important to push back on inaccurate claims, or ones that have the wrong level of confidence. (Like, my comment elsewhere is intended to have a chilling effect.)

Comment by vaniver on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T03:52:52.351Z · score: 5 (3 votes) · LW · GW

A realistic example of this is that many onsen ban tattoos as an implicit ban on yakuza, which also ends up hitting foreign tourists with tattoos.

It feels to me like there's a plausible deniability point that's important here ("oh, it's not that we have anything against yakuza, we just think tattoos are inappropriate for mysterious reasons") and a simplicity point (rather than a subjective judgment of whether or not a tattoo is a yakuza tattoo, there's the objective judgment of whether or not a tattoo is present).

I can see it going both ways, where sometimes the more complex rule doesn't pay for itself, and sometimes it does, but I think it's important to take into account the costs of rule complexity.

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-20T22:39:28.908Z · score: 8 (4 votes) · LW · GW

And the update should be fairly strong, given that this was (prior to my comment) the highest-upvoted post ever by AF karma.

Given karma inflation (as users gain more karma, their votes are worth more, but this doesn't propagate backwards to earlier votes they cast, and more people become AF voters than lose AF voter status), I think the karma differences between this post and these other four 50+ karma posts [1 2 3 4] are basically noise. So I think the actual question is "is this post really in that tier?", to which "probably not" seems like a fair answer.

[I am thinking more about other points you've made, but it seemed worth writing a short reply on that point.]

Comment by vaniver on Alex Irpan: "My AI Timelines Have Sped Up" · 2020-08-20T00:06:27.806Z · score: 4 (2 votes) · LW · GW

Changed.

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-19T22:06:55.158Z · score: 26 (8 votes) · LW · GW

This is extremely basic RL theory.

I note that this doesn't feel like a problem to me, mostly because of reasons related to Explainers Shoot High. Aim Low! Even among ML experts, many haven't touched much RL, because they're focused on another field. Why expect them to know basic RL theory, or to have connected it to all the other things that they know?

More broadly, I don't understand what people are talking about when they speak of the "likelihood" of mesa optimization.

I don't think I have a fully crisp view of this, but here's my frame on it so far:

One view is that we design algorithms to do things, and those algorithms have properties that we can reason about. Another is that we design loss functions, and then search through random options for things that perform well on those loss functions. In the second view, often which options we search through doesn't matter very much, because there's something like the "optimal solution" that all things we actually find will be trying to approximate in one way or another.

Mesa-optimization is something like, "when we search through the options, will we find something that itself searches through a different set of options?". Some of those searches are probably benign--the bandit algorithm updating its internal value function in response to evidence, for example--and some of those searches are probably malign (or, at least, dangerous). In particular, we might think we have restrictions on the behavior of the base-level optimizer that turn out to not apply to any subprocesses it manages to generate, and so those properties don't actually hold overall.
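For a concrete picture of the benign end of that spectrum, here's a minimal sketch of a bandit whose 'inner' process is just incremental value estimation rather than anything search-like (a toy example of my own for illustration, not something from the post):

```python
import random

class EpsilonGreedyBandit:
    """Toy multi-armed bandit: the only 'inner' process is an incremental
    update of per-arm value estimates in response to observed rewards."""

    def __init__(self, n_arms, epsilon=0.1):
        self.values = [0.0] * n_arms  # estimated value of each arm
        self.counts = [0] * n_arms
        self.epsilon = epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])  # exploit

    def update(self, arm, reward):
        # Incremental mean: nudge the arm's value estimate toward the observed reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

The update is a plain incremental mean; whatever 'agenty' behavior there is comes entirely from the outer argmax over those estimates.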

But it seems to me like overall we're somewhat confused about this. For example, the way I normally use the word "search", it doesn't apply to the bandit algorithm updating its internal value function. But does Abram's distinction between mesa-search and mesa-control actually mean much? There are lots of problems that you can solve exactly with calculus and approximately with well-tuned simple linear estimators, and thus saying "oh, it can't do calculus, it can only do linear estimates" won't rule out it having a really good solution; presumably a similar thing could be true with "search" vs. "control," where in fact you might be able to build a pretty good search-approximator out of elements that only do control.

So, what would it mean to talk about the "likelihood" of mesa optimization? Well, I remember a few years back when there was a lot of buzz about hierarchical RL. That is, you would have something like a policy for which 'tactic' (or 'sub-policy' or whatever you want to call it) to deploy, and then each 'tactic' is itself a policy for what action to take. In 2015, it would have been sensible to talk about the 'likelihood' of RL models in 2020 being organized that way. (Even now, we can talk about the likelihood that models in 2025 will be organized that way!) But, empirically, this seems to have mostly not helped (at least as we've tried it so far).

As we imagine deploying more complicated models, it feels like there are two broad classes of things that can happen during runtime:

  1. 'Task location', where they know what to do in a wide range of environments, and all they're learning is which environment they're in. The multi-armed bandit is definitely in this case; GPT-3 seems like it's mostly doing this.
  2. 'Task learning', where they are running some sort of online learning process that gives them 'new capabilities' as they encounter new bits of the world.

The two blur into each other; you can imagine training a model to deal with a range of situations, and yet it also performs well on situations not seen in training (that are interpolations between situations it has seen, or where the old abstractions apply correctly, and thus aren't "entirely new" situations). Just like some people argue that anything we know how to do isn't "artificial intelligence", you might get into a situation where anything we know how to do is task 'location' instead of task 'learning.'

But to the extent that our safety guarantees rely on the lack of capability in an AI system, any ability for the AI system to do learning instead of location means that it may gain capabilities we didn't expect it to have. That said, merely restricting it to 'location' may not help us very much, because if we misunderstand the abstractions that govern the system's generalizability, we may underestimate what capabilities it will or won't have.

There's clearly been a lot of engagement with this post, and yet this seemingly obvious point hasn't been said.

I think people often underestimate the degree to which, if they want to see their opinions in a public forum, they will have to be the one to post them. This is both because some points are less widely understood than you might think, and because even if someone understands the point, that doesn't mean it connects to their interests in a way that would make them say anything about it.

Comment by vaniver on Mesa-Search vs Mesa-Control · 2020-08-19T19:15:50.171Z · score: 8 (2 votes) · LW · GW
The inner RL algorithm adjusts its learning rate to improve performance.

I have come across a lot of learning rate adjustment schemes in my time, and none of them have been 'obviously good', altho I think some have been conceptually simple and relatively easy to find. If this is what's actually going on and can be backed out, it would be interesting to see what it's doing here (and whether that works well on its own).
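For reference, here's a minimal sketch of the kind of 'conceptually simple and relatively easy to find' scheme I have in mind (the classic bold-driver heuristic; an illustration of mine, not a claim about what the inner RL algorithm is actually doing):

```python
def bold_driver_step(lr, prev_loss, curr_loss, up=1.05, down=0.5):
    """Bold-driver heuristic: grow the learning rate slowly while the loss
    keeps improving, and cut it sharply as soon as the loss gets worse."""
    if curr_loss < prev_loss:
        return lr * up    # still improving: cautiously speed up
    return lr * down      # got worse: back off hard
```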

This is more concerning than a thermostat-like bag of heuristics, because an RL algorithm is a pretty agentic thing, which can adapt to new situations and produce novel, clever behavior.

Most RL training algorithms that we have look to me like putting a thermostat on top of a model; I think you're underestimating deep thermostats.

Comment by vaniver on Alignment By Default · 2020-08-16T04:58:49.412Z · score: 12 (3 votes) · LW · GW
Currently, my first-pass check for "is this probably a natural abstraction?" is "can humans usually figure out what I'm talking about from a few examples, without a formal definition?". For human values, the answer seems like an obvious "yes". For evolutionary fitness... nonobvious. Humans usually get it wrong without the formal definition.

Hmm, presumably you're not including something like "internal consistency" in the definition of 'natural abstraction'. That is, humans who aren't thinking carefully about something will think there's an imaginable object even if any attempts to actually construct that object will definitely lead to failure. (For example, Arrow's Impossibility Theorem comes to mind; a voting rule that satisfies all of those desiderata feels like a 'natural abstraction' in the relevant sense, even though there aren't actually any members of that abstraction.)

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-13T21:36:26.574Z · score: 17 (6 votes) · LW · GW
All natural selection does is gradient descent (hill climbing technically), with no capacity for lookahead.

I think if you're interested in the analysis and classification of optimization techniques, there are enough differences between what natural selection is doing and what deep learning is doing that it isn't a very natural analogy. (Like, one is a population-based method and the other isn't, the update rules are different, etc.)

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-13T01:38:40.786Z · score: 6 (3 votes) · LW · GW
thanks to the capped returns

Out of the various mechanisms, I think the capped returns rank relatively low; the top of my list is probably the nonprofit board having control over decision-making (and, implicitly, the nonprofit board's membership not being determined by investors, as would happen in a normal company).

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-12T17:21:53.393Z · score: 12 (3 votes) · LW · GW

I agree that adding economic incentives is dangerous by default, but think their safeguards are basically adequate to overcome that incentive pressure. At the time I spent an hour trying to come up with improvements to the structure, and ended up not thinking of anything. Also remember that this sort of change, even if it isn't a direct improvement, can be an indirect improvement by cutting off unpleasant possibilities; for example, before the move to the LP, there was some risk OpenAI would become a regular for-profit, and the LP move dramatically lowered that risk.

I also think for most of the things I'm concerned about, psychological pressure to think the thing isn't dangerous is more important; like, I don't think we're in the cigarette case where it's mostly other people who get cancer while the company profits; I think we're in the case where either the bomb ignites the atmosphere or it doesn't, and even in wartime the evidence was that people would abandon plans that posed a serious chance of destroying humanity.

Note also that economic incentives quite possibly push away from AGI towards providing narrow services (see Drexler's various arguments that AGI isn't economically useful, and so people won't make it by default). If you are more worried about companies that want to build AGIs and then ask them what to do than you are about companies that want to build AIs to accomplish specific tasks, increased short-term profit motive makes OpenAI more likely to move in the second direction. [I think this consideration is pretty weak but worth thinking about.]

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-12T17:03:47.923Z · score: 7 (2 votes) · LW · GW

Also apparently Megaman is less popular than I thought so I added links to the names.

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-12T17:00:38.329Z · score: 2 (1 votes) · LW · GW
This might result in a different stance toward OpenAI

But part of the problem here is that the question "what's the impact of our stance on OpenAI on existential risks?" is potentially very different from "is OpenAI's current direction increasing or decreasing existential risks?", and as people outside of OpenAI have much more control over their stance than they do over OpenAI's current direction, the first question is much more actionable. And so we run into the standard question substitution problems, where we might be pretending to talk about a probabilistic assessment of an org's impact while actually targeting the question of "how do I think people should relate to OpenAI?".

[That said, I see the desire to have clear discussion of the current direction, and that's why I wrote as much as I did, but I think it has prerequisites that aren't quite achieved yet.]

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-12T00:40:35.626Z · score: 39 (17 votes) · LW · GW

[Speaking solely for myself in this comment; I know some people at OpenAI, but don't have much in the way of special info. I also previously worked at MIRI, but am not currently.]

I think "increasing" requires some baseline, and I don't think it's obvious what baseline to pick here.

For example, consider instead the question "is MIRI decreasing the existential risks related to AI?". Well, are we comparing to the world where everyone currently employed at MIRI vanishes? Or are we comparing to the world where MIRI as an organization implodes, but the employees are still around, and find jobs somewhere else? Or are we comparing to the world where MIRI as an organization gets absorbed by some other entity? Or are we comparing to the world where MIRI still exists, the same employees still work there, but the mission is somehow changed to be the null mission?

Or perhaps we're interested in the effects on the margins--if MIRI had more dollars to spend, or fewer, how would the existential risks change? Even the answers to those last two questions could easily be quite different--perhaps firing any current MIRI employee would make things worse, but there are no additional people that could be hired by MIRI to make things better. [Prove me wrong!]

---

With that preamble out of the way, I think there are three main obstacles to discussing this in public, a la Benquo's earlier post.

The main one is something like "appeals to consequences." Talking in public has two main functions: coordinating and information-processing, and it's quite difficult to separate the two functions. [See this post and the related posts at the bottom.] Suppose I think OpenAI makes humanity less safe, and I want humanity to be more safe; I might try to figure out which strategy will be most persuasive (while still correcting me if I'm the mistaken one!) and pursue that strategy, instead of employing a strategy that more quickly 'settles the question' at the cost of making it harder to shift OpenAI's beliefs. More generally, the people with the most information will be people closest to OpenAI, which probably makes them more careful about what they will or won't say. There also seem to be significant asymmetries here, as it might be very easy to say "here are three OpenAI researchers I think are making existential risk lower" but very difficult to say "here are three OpenAI researchers I think are making existential risk higher." [Setting aside the social costs, there's their personal safety to consider.]

The second one is something like "prediction is hard." One of my favorite math stories is the history of the Markov chain; in the version I heard, Markov's rival said a thing, Markov thought to himself "that's not true!" and then formalized the counterexample in a way that dramatically improved that field. Suppose Benquo's story of how OpenAI came about is true, that OpenAI will succeed at making beneficial AI, and that (counterfactually) DeepMind wouldn't have succeeded. In this hypothetical world, the direct effect of DeepMind on existential AI risk would have been negative, but the indirect effect would be positive (as otherwise OpenAI, which succeeded, wouldn't have existed). While we often think we have a good sense of the direct effect of things, in complicated systems it becomes very non-obvious what the total effects are.

The third one is something like "heterogeneity." Rather than passing a judgment on the org as a whole, it would make more sense to make my judgments more narrow; "widespread access to AI seems like it makes things worse instead of better," for example, which OpenAI seems to have already shifted their views on, now focusing on widespread benefits rather than widespread access.

---

With those obstacles out of the way, here's some limited thoughts:

I think OpenAI has changed for the better in several important ways over time; for example, the 'Open' part of the name is not really appropriate anymore, but this seems good instead of bad on my models of how to avoid existential risks from AI. I think their fraction of technical staff devoted to reasoning about and mitigating risks is higher than DeepMind's, although lower than MIRI's (tho MIRI's fraction is a very high bar); I don't have a good sense whether that fraction is high enough.

I think the main effects of OpenAI are the impacts they have on the people they hire (and the impacts they don't have on the people they don't hire). There are three main effects to consider here: resources, direction-shifting, and osmosis.

On resources, imagine that there's Dr. Light, whose research interests point in a positive direction, and Dr. Wily, whose research interests point in a negative direction, and the more money you give to Dr. Light the better things get, and the more money you give to Dr. Wily, the worse things get. [But actually what we care about is counterfactuals; if you don't give Dr. Wily access to any of your compute, he might go elsewhere and get similar amounts of compute, or possibly even more.]

On direction-shifting, imagine someone has a good idea for how to make machine learning better, and they don't really care what the underlying problem is. You might be able to dramatically change their impact by pointing them at cancer-detection instead of missile guidance, for example. Similarly, they might have a default preference for releasing models, but not actually care much if management says the release should be delayed.

On osmosis, imagine there are lots of machine learning researchers who are mostly focused on technical problems, and mostly get their 'political' opinions for social reasons instead of philosophical reasons. Then the main determinant of whether they think that, say, the benefits of AI should be dispersed or concentrated might be whether they hang out at lunch with people who think the former or the latter.

I don't have a great sense of how those factors aggregate into an overall sense of "OpenAI: increasing or decreasing risks?", but I think people who take safety seriously should consider working at OpenAI, especially on teams clearly related to decreasing existential risks. [I think people who don't take safety seriously should consider taking safety seriously.]

Comment by vaniver on Will OpenAI's work unintentionally increase existential risks related to AI? · 2020-08-11T23:41:44.914Z · score: 29 (12 votes) · LW · GW
They recently spun-out a capped-profit company, which seems like the end goal is monetizing some of their recent advancements. The page linked in the previous sentence also has some stuff about safety and about how none of their day-to-day work is changing, but it doesn't seem that encouraging.

I found this moderately encouraging instead of discouraging. So far I think OpenAI is 2 for 2 on managing organizational transitions in ways that seem likely to not compromise safety very much (or even improve safety) while expanding their access to resources; if you think the story of building AGI looks more like assembling a coalition that's able to deploy massive resources to solve the problem than a flash of insight in a basement, then the ability to manage those transitions becomes a core part of the overall safety story.

Comment by vaniver on Fantasy-Forbidding Expert Opinion · 2020-08-10T18:25:18.812Z · score: 15 (5 votes) · LW · GW
If X had been proven good for your health with little room for doubt, it would have reached the ears of me or my peers, because I can't imagine something being definitely good for everyone and not being adopted into standard health practices.

You might be interested in Inadequate Equilibria, which is a book-length treatment of this sort of reasoning and when it is or isn't effective. While I think it does rule out, say, that Cheerios grant you immortality, I don't think it rules out things like "SAD can be effectively treated by more light" or "many people can get an extra hour a day by increasing their sleep quality with melatonin" or "you should sleep with a window open to have lower CO2 exposure."

I imagine that there are things that doctors know about, but don't actually try; The Last Psychiatrist's experience with Ramachandran's Mirror seems relevant here. The Epley Maneuver is taught in medical schools and the Reddit Tinnitus Cure isn't (as far as I know), and so even presented with a patient with tinnitus after having seen that video a doctor might not think of suggesting it. [Of course, people also comment that the Reddit Tinnitus Cure doesn't work for them, so who knows what the actual effect size is, or what other parts of the technique are essential and not adequately explained; one comment claims that you need your palms to actually have a tight seal with your ears, for example.]

I imagine there are even more things that doctors haven't heard of yet, either because they haven't been discovered or haven't been applied to that problem yet. (Like, even of currently approved drugs, do we have all the sensible off-label uses mapped out?)

Comment by vaniver on Property as Coordination Minimization · 2020-08-08T00:38:09.960Z · score: 3 (2 votes) · LW · GW
What's your source for this claim?

I first heard it... in a talk, I think? Which is where I picked up the narrower claim of "restaurants" over the broader claim that you can find in the entry on Sybaris from A Classical Dictionary, which states:

On the other hand, great encouragement was held out to all who should discover any new refinement in luxury, the profits arising from which were secured to the inventor by patent for the space of a year.

That source I found from the Wikipedia page on patents, which is why I trusted my memory of the talk enough to include it in the OP. The other source Wikipedia cites is more direct:

And if any confectioner or cook invented any peculiar and excellent dish, no other artist was allowed to make this for a year; but he alone who invented it was entitled to all the profit to be derived from the manufacture of it for that time; in order that others might be induced to labour at excelling in such pursuits.
Comment by vaniver on The Quantitative Reasoning Deficit · 2020-08-07T21:44:05.090Z · score: 1 (2 votes) · LW · GW

Here's a table sorted for math. The US is 37th out of 78 on the list, below Spain and above Israel; the tiers are "rich Asian city-state", "small country in Asia or Europe", and then "large country in Europe or less impressive small country," and the US is low-ranked in that third tier. (The difference between Japan and the US is smaller than the difference between the US and Mexico.)

Comment by vaniver on The Quantitative Reasoning Deficit · 2020-08-07T05:36:22.915Z · score: 3 (3 votes) · LW · GW
This is a large part of why COVID-19 has hit the United States so hard.

Is the United States significantly less numerate than other countries? I agree quantitative reasoning is good and we want more of it, but I'm not sure it's the major contributor to the thing you're trying to explain here.

Comment by vaniver on Property as Coordination Minimization · 2020-08-06T21:19:59.425Z · score: 4 (2 votes) · LW · GW
Abandon all hope for a better past! I un-apologetically prioritize the future half of my light-cone.

Uh, does this also involve 2-boxing in Transparent Newcomb's Problem?

Comment by vaniver on Property as Coordination Minimization · 2020-08-06T19:19:03.309Z · score: 15 (8 votes) · LW · GW
I think prioritizing wishes of the dead over the those of the living is egregiously wrong.

The narrowing circle in action!

Comment by vaniver on Property as Coordination Minimization · 2020-08-06T19:06:36.412Z · score: 2 (3 votes) · LW · GW
you can pretty easily come to the conclusion that the optimal distribution of property rights is for Jeff Bezos and Elon Musk to own everything. But I don't think that Jeff Bezos's life changes one whit if his wealth changed from $185b to $300b, or $100b, or $10b, or even $1b.

So, let's set aside the contractual question, where society lets Jeff Bezos and Elon Musk keep their stuff because society agreed to do it, and holding up society's end of the bargain is important. Instead let's ask the question: what does society get out of Musk owning something instead of someone else owning something?

I argue society really doesn't get much benefit from Musk eating additional capital; like, if he buys really fancy steaks or really fancy yachts or whatever, this is mostly benefiting him instead of us (the indirect benefits, like being able to vicariously consume it on Twitter or Youtube or whatever, are probably pretty small and entertainers probably have a comparative advantage here).

I do think society gets a significant benefit from Musk owning additional capital, because he turns it into businesses that are plausibly beneficial, many of which seem like the sort of visionary longshots that otherwise wouldn't happen. Similarly for Bezos, altho his focus is pointed at a particular company. The world where Amazon has $100B of assets and MediocreCorp has $100B of assets will be poorer than the world where Amazon has $175B of assets and MediocreCorp has $25B of assets.


I think a similar story goes through for many historical titans. On the for-profit side, it's easy to see the creation of massive fortunes through increased efficiency; like, Ikea became massive and Kamprad hugely wealthy because they had a better way of doing things than the competition. The more money Kamprad was 'allowed' to direct, the more things he improved. On the non-profit side, it's also easy to see examples where they applied the same efficiency and long-sightedness where selecting programs was more difficult than 'just writing checks'. Carnegie's decision to fund libraries seems like a significant example here, but probably more central is Rockefeller, himself a fan of homeopathic medicine, setting up a medical research institute that would actually figure out the truth instead of just pushing his favorite. Both of these were pioneering projects in a way that seems easy to 'fade into the background', in the same way that Ikea might seem like "just how furniture is" to someone young enough. [Both of them, of course, also have efficiency stories behind how they made their wealth, but they're distant enough in the past that they might be hard to empathize with.]

Comment by vaniver on Property as Coordination Minimization · 2020-08-06T18:38:52.731Z · score: 2 (3 votes) · LW · GW
The point is that 2 and 3 aren't that different in terms of "corruption".

My impression is that this is mostly because of external competitive pressures; when the Housing Bureau is the primary source of housing, it is mysteriously the case that better-connected people get better housing. When you can buy your own house or enter the affordable housing lottery, most of the rich choose to buy their own house. (It might still be the case that the politically well-connected poor end up with disproportionately many affordable housing slots compared to the unconnected poor, but that's less of a corrupting force because the stakes are smaller overall.)

Like, a system where people are free to coordinate at whatever level makes local sense seems like it's obviously superior, and there are ways in which having corporations allows you to hit better points in the 'aggregated individual benefit minus coordination cost' space.

Consider if there is in fact a bunch of negative externalities that together outweigh the benefits of building another floor. Without this meeting how would all those affected people realistically coordinate (supposing none of them individually has enough incentive) to stop you?

The basic question here for me is something like "rule of law" vs. "rule of men"; for example, Washington DC has the Height Act that prohibits buildings above a certain height (actually related to the street width, but in general it's about 11 stories tall). This gives DC its particular character, and ensures the major government buildings remain impressive compared to their surroundings. When embarking on a construction project in DC, there's no question about how high the government will let you build; it simply won't be above the height cap.

Similarly, a rule that banned backyard cottages in general, or third floors in general, might make sense, as would a law that made property taxes proportional to demand on public services (like traffic and sewer and trash) or periodically reassessed (so that improvements in the property lead to increased taxes) instead of simply reassessed at sale. It could likewise make sense to tax ongoing construction in proportion to its duration. That way the externalities would be priced in, either with a clear policy restriction or a tax based on the estimated cost.

Instead, there's a system which has increased uncertainty and coordination cost. Does it make sense to canvass your neighbors before making a change, in order to reduce their opposition? Well, what if you're looking to buy a property in order to improve it? Now pricing a lot becomes much more uncertain, as it also involves estimating the development-friendliness of all neighborhoods in question. This also makes the rules apply unevenly between people; quite possibly more attractive people have an easier time convincing the committee and their neighbors to let them build than uglier people, for example.

Comment by vaniver on Property as Coordination Minimization · 2020-08-06T00:56:38.921Z · score: 3 (2 votes) · LW · GW

So it's a little hard to say, because most of the historical evidence we have of them has them in situations where they're the only stable property rights, and so likely they were over-utilized. It also seems inflexible in important ways, and so I'm more of a fan of the modern American system of trusts.

But the central premise--that rather than willing your property to people, you could will it to a purpose--seems pretty great, once you've incorporated the lesson of lost purposes, and so can write a will that will fail gracefully with time.

Comment by vaniver on Property as Coordination Minimization · 2020-08-05T20:52:17.166Z · score: 2 (1 votes) · LW · GW

Tho it should be noted that given the way union / strike law is in the US, isn't it also the case that workers can close a factory unilaterally? [Like, even if the owners could find other workers, they're often prevented from being able to use those other workers instead.] And it is also the case that local governments can close a factory unilaterally (as happened recently for health reasons in many places).

So it's not obvious to me that the owners are uniquely privileged in this regard; for any deal that requires the continued consent of all parties, any of them could back out even though it affects others greatly.

Comment by vaniver on Property as Coordination Minimization · 2020-08-05T20:46:37.005Z · score: 2 (1 votes) · LW · GW
I have a much harder time supporting "yay dynasties".

I don't know, waqfs seem pretty great to me.

Comment by vaniver on Property as Coordination Minimization · 2020-08-05T20:44:12.267Z · score: 3 (2 votes) · LW · GW
Above, you seem to be arguing that property can encompass all sorts of rights to determine things.

Right; it's a social agreement, and so it could be altered if the relevant parties decide to alter it.

I think you're right to object to me calling it "too little property"; like, if the thing is a two-dimensional object, rather than saying "too little area" I should be more precise and say it's "too short". That is, a vetocracy is what you get when you have too much distributed ownership and too little concentrated ownership.

You can't get more total rights.

Seems right, although importantly you can maximize the sum of individual benefit minus coordination costs; I think my overall sense is that's how you determine what the correct level of rights is, but that's a longer argument (where I would mostly be leaning on Hayek, I suspect). Of course this gets you into the problems inherent in aggregating benefit across people, and other thorny territory; I'm not saying it's easy, just that there's a target.

The fundamental point of the socialist critique of private property is that it assigns more rights to relatively disinterested absentee capital-owners at the expense of the rights that could otherwise be assigned to the person who is most affected.

Eh, I'm not very sympathetic to this. I've rented something like a dozen apartments in a dozen years; to the best of my knowledge, all of those places are still standing and rented out to someone else now. It seems odd to claim that when I was there for a year, my interest in the place outweighed the landlord's interest, because it requires forgetting about the interests of every other renter that the landlord contracted with.

Comment by vaniver on Property as Coordination Minimization · 2020-08-05T18:52:50.843Z · score: 2 (1 votes) · LW · GW

As the end of the paragraph suggests, property can also be immaterial, but I agree that sentence should be tightened up a bit.

As far as I understand, property comes with varying degrees of excludability and are sometimes not excludable at all (e.g. public property, common property).

A lens on public property is that it's where the public uses its right to exclude others from taking ownership of the thing. As Sherlock Holmes is in the public domain, I can't say "Sherlock Holmes is my IP!", whereas I could say that about characters I invent that aren't in the public domain. And the public domain doesn't just extend to things that are currently known; there are whole swaths of intellectual effort where society has decided discoveries cannot be patented.

Comment by vaniver on Three mental images from thinking about AGI debate & corrigibility · 2020-08-05T02:12:46.096Z · score: 9 (5 votes) · LW · GW
The claims "If X drifts away from corrigibility along dimension {N}, it will get pulled back" are clearly structurally similar, and the broad basin of corrigibility argument is meant to be an argument that argues for all of them.

To be clear, I think there are two very different arguments here:

1) If we have an AGI that is corrigible, it will not randomly drift to be not corrigible, because it will proactively notice and correct potential errors or loss of corrigibility.

2) If we have an AGI that is partly corrigible, it will help us 'finish up' the definition of corrigibility / edit itself to be more corrigible, because we want it to be more corrigible and it's trying to do what we want.

The first is "corrigibility is a stable attractor", and I think there's structural similarity between arguments that different deviations will be corrected. The second is the "broad basin of corrigibility", where for any barely acceptable initial definition of "do what we want", it will figure out that "help us find the right definition of corrigibility and implement it" will score highly on its initial metric of "do what we want."

Like, it's not the argument that corrigibility is a stable attractor; it's an argument that corrigibility is a stable attractor with no nearby attractors. (At least in the dimensions that it's 'broad' in.)

I find it less plausible that missing pieces in our definition of "do what we want" will be fixed in structurally similar ways, and I think there are probably a lot of traps where a plausible sketch definition doesn't automatically repair itself. One can lean here on "barely acceptable", but I don't find that very satisfying. [In particular, it would be nice if we had a definition of corrigibility where we could look at it and say "yep, that's the real deal or grows up to be the real deal," tho that likely requires knowing what the "real deal" is; the "broad basin" argument seems to me to be meaningful only in that it claims "something that grows into the real deal is easy to find instead of hard to find," and when I reword that claim as "there aren't any dead ends near the real deal" it seems less plausible.]

1. Why aren't the dimensions symmetric?

In physical space, things are generally symmetric under swapping the dimensions around; in algorithm-space, that isn't true. (Like, permute the weights in a layer and you get different functional behavior.) Thus while it's sort of wacky in a physical environment to say "oh yeah, df/dx, df/dy, and df/dz are all independently sampled from a distribution", it's less wacky to say that of neural network weights (or the appropriate medium-sized analog).
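As a concrete toy illustration of that parenthetical (a sketch of my own, assuming a tiny two-layer ReLU network, not anything from the original comment): permuting one layer's weights without the matching permutation elsewhere generically changes what the network computes.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # hidden-layer weights of a tiny 3-4-2 MLP
W2 = rng.normal(size=(2, 4))  # output-layer weights
x = rng.normal(size=3)

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)  # ReLU hidden layer

perm = rng.permutation(4)
print(forward(W1, W2, x))        # original network
print(forward(W1[perm], W2, x))  # rows of W1 permuted, W2 unchanged: different output
```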

Comment by vaniver on Three mental images from thinking about AGI debate & corrigibility · 2020-08-04T19:10:24.802Z · score: 6 (3 votes) · LW · GW
And by the way, how do these AGIs come up with the best argument for their side anyway? Don't they need to be doing good deliberation internally? If so, can't we just have one of them deliberate on the top-level question directly? Or if not, do the debaters spawn sub-debaters recursively, or something?

This is an arbitrary implementation detail, and one of the merits of debate is that it lets the computer figure this out instead of requiring that the AGI designer figure this out.

Comment by vaniver on Three mental images from thinking about AGI debate & corrigibility · 2020-08-04T19:07:24.654Z · score: 11 (4 votes) · LW · GW
e.g. I could argue against "1 + 1 = 2" by saying that it's an infinite conjunction of "1 + 1 != 3" AND "1 + 1 != 4" AND ... and so it can't possibly be true.

Uh, when I learned addition (in the foundations-of-mathematics sense), the fact that 2 was the only possible result of 1 + 1 was a big part of what made it addition / made addition useful.

There's a huge structural similarity between the proofs that '1 + 1 != 3' and that '1 + 1 != 4'; like, both are instances of the generic claim '1 + 1 != n for all n != 2'. We can increase the number of numbers without decreasing the plausibility of this claim (like, consider it in Z/4, then Z/8, then Z/16, then ...).
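A trivial sketch of that structural similarity (illustrative only; the moduli Z/4, Z/8, ... are just the ones named above): one generic check covers every candidate n at once, and adding more numbers doesn't weaken it.

```python
# One generic check covers every n and every modulus: enlarging the set of
# numbers (Z/4, Z/8, Z/16, ...) doesn't make "1 + 1 != n for all n != 2" less plausible.
for k in (4, 8, 16, 32):
    counterexamples = [n for n in range(k) if n != 2 and (1 + 1) % k == n]
    assert (1 + 1) % k == 2 and not counterexamples
print("in Z/4 through Z/32, 1 + 1 is 2 and nothing else")
```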

But if instead I make a claim of the form "I am the only person who uses the name 'Vaniver'", we don't have the same sort of structural similarity, and we do have to check the names of everyone else, and the more people there are, the less plausible the claim becomes.

Similarly, if we think the dimensions aren't all symmetric, an argument that something is an attractor in N-dimensional space does actually grow less plausible the more dimensions there are, since there are more ways for the thing to have a derivative that points away from the 'attractor.' (If there's only gravity, for example, we seem in a better position to end up with attractors than if there's a random force field, even in 4d, 8d, 16d, etc.; similarly if there's a random potential function whose derivative is used to compute the forces.)
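Here's a rough numerical sketch of the "random force field" half of that parenthetical (my own illustration, with independent Gaussian entries standing in for asymmetric dimensions): a fixed point of a random vector field is attracting only if every eigenvalue of its Jacobian has negative real part, and the estimated chance of that drops off quickly with dimension.

```python
# Illustrative sketch: with a random Gaussian Jacobian standing in for a
# "random force field", estimate how often a fixed point is attracting,
# i.e. all eigenvalues have negative real part. The fraction shrinks fast
# as the number of dimensions grows.
import numpy as np

rng = np.random.default_rng(0)

def frac_attracting(dim, trials=10000):
    hits = 0
    for _ in range(trials):
        J = rng.normal(size=(dim, dim))
        if np.all(np.linalg.eigvals(J).real < 0):
            hits += 1
    return hits / trials

for d in (1, 2, 4, 8):
    print(d, frac_attracting(d))
```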

Comment by vaniver on Three mental images from thinking about AGI debate & corrigibility · 2020-08-04T18:50:41.819Z · score: 4 (2 votes) · LW · GW
I think the argument might be misleading in that local stability isn't that rare in practice

Surely this depends on the number of dimensions, with local stability being rarer the more dimensions you have. [Hence the argument that, in the infinite-dimensional limit, everything that would have been a "local minimum" is instead a saddle point.]
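A small sketch of the bracketed claim (my own, under the standard but here-assumed model of a random symmetric Hessian at a critical point): the estimated fraction of critical points that are genuine local minima, rather than saddles, collapses as the dimension grows.

```python
# Illustrative sketch: model the Hessian at a critical point as a random
# symmetric matrix and estimate how often it is positive definite (a local
# minimum) rather than indefinite (a saddle point).
import numpy as np

rng = np.random.default_rng(0)

def frac_local_minima(dim, trials=10000):
    hits = 0
    for _ in range(trials):
        A = rng.normal(size=(dim, dim))
        H = (A + A.T) / 2                      # random symmetric "Hessian"
        if np.all(np.linalg.eigvalsh(H) > 0):  # positive definite => local minimum
            hits += 1
    return hits / trials

for d in (1, 2, 4, 8, 16):
    print(d, frac_local_minima(d))
```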

Comment by vaniver on Three mental images from thinking about AGI debate & corrigibility · 2020-08-04T18:48:52.840Z · score: 6 (3 votes) · LW · GW
To say "corrigibility is a broad basin of attraction", you need ALL of the following to be true:

At some point, either in an in-person conversation or a post, Paul clarified that obviously it will be 'narrow' in some dimensions and 'broad' in others. I think it's not obvious how the geometric intuition goes here, and this question mostly hinges on "if you have some parts of corrigibility, do you get the other parts?", to which I think "no" and Paul seems to think "yes." [He might think some limited version of that, like "it makes it easier to get the other parts," which I still don't buy yet.]

Comment by vaniver on Unifying the Simulacra Definitions · 2020-08-04T18:35:51.901Z · score: 7 (3 votes) · LW · GW
Can somebody enlighten me as to how the "we live in the Matrix, and it's inescapable" perspective might be reasonable, just on an everyday lived-experience level?

A lot of this depends on "what matters" to you; how much is lunch about the food, versus who you're eating it with, versus what you're talking about? It seems quite easy for someone to be in a realm where everything that matters to them relates to social reality instead of physical reality. (This is mostly because the aspects of physical reality that do matter to them have had their variance reduced significantly, or have been declared not to matter.)

Much of the pandemic response, for example, makes sense when you think of people as acting in social reality, and little sense when you think of them as acting in physical reality.

And, particularly if you're ambitious, you might care a lot about what your era rewards. Military success? New discoveries? Building new systems? Careful and patient accumulation of capital? Timely decisions? Influence accumulation and deployment? Clever argumentation?

On that front, I feel confused about whether our era is good or bad. It seems like we're still in one of the better eras for ambition being achieved through new discoveries / building new systems, but the complaints also seem valid that startups are too much about marketing and getting money from investors instead of building a great product, and more broadly that we only have progress in bits instead of atoms.

Comment by vaniver on Rereading Atlas Shrugged · 2020-08-03T19:30:16.402Z · score: 2 (1 votes) · LW · GW
Ayn Rand wrote a ton of material on concept-formation: some of it is in ITOE, and some of it is scattered amongst essays on other topics. For example, her essay "The Art of Smearing" opens by examining the use of the flawed concept "extremism" by certain political groups to attack their opponents, and then opens out into a discussion of the formation of "anti-concepts" in general, and their effects on cognition. She has several essays of a similar nature.

My prediction, having read a few of these, is that I will agree with them more than I disagree with them; when she points to someone making an error, at least 90% of the time they'll actually be making an error. I think the phrase 'anti-pattern' is more common on LW than 'anti-concept', but they seem to be pointing at much the same thing and to have similar usages. (37 Ways That Words Can Be Wrong feels like a good parallel example from LW.)

That said, there's a somewhat complicated point here that was hammered home for me by thinking about causal reasoning. Specifically, humans are pretty good at intuitive causal reasoning, and so philosophers discussing the differences between causal decision theory and evidential decision theory found it easy to compute 'what CDT would say' for a particular situation, by checking what their intuitive causal sense of the situation was.

But some situations are very complicated; see figure 1 of this paper, for example. In order to do causal reasoning in an environment like that, it helps if it's 'math so simple a computer could do it,' which involves really getting to the heart of the thing and finding the simple core.

From what I can tell, the Objectivists are going after the right sort of thing (the point of concepts is to help with reasoning to achieve practical ends in the real world, i.e. rationalists should win and beliefs should pay rent in anticipated experience), and so I'm unlikely to actually uncover any fundamental disagreements in goals. [Even on the probabilistic front, you could go from Peikoff's "knowledge is contextual" to a set-theoretic definition of probability and end up Bayesian-ish.]

But it feels to me like... it should be easy to summarize, in some way? Or, like, the 'LW view' has a lot of "things it's against" (the whole focus on heuristics and biases seems important here), and "things it's for" ('beliefs should pay rent' feels like potentially a decent summary here), and it feels like it has a clear view of both of them. I feel like the Objectivist epistemology is less clear about the "things it's for", potentially obscured by being motivated mostly by the "things it's against." Like, I think LW gets a lot of its value by trying to get down to the level of "we could program a computer to do it," in a way that requires peering inside some cognitive modules and algorithms that Rand could assume her audience had.

Compare with examples in biology, where there is no confusion over whether, say, a duck-billed platypus is a bird or a mammal.

Tho I do think the history of taxonomy in biology includes many examples of confused concepts and uncertain boundaries. Even the question of "what things are alive?" runs into perverse corner cases.