Posts

Alex Irpan: "My AI Timelines Have Sped Up" 2020-08-19T16:23:25.348Z
Property as Coordination Minimization 2020-08-04T19:24:15.759Z
Rereading Atlas Shrugged 2020-07-28T18:54:45.272Z
A reply to Agnes Callard 2020-06-28T03:25:27.378Z
Public Positions and Private Guts 2020-06-26T23:00:52.838Z
How alienated should you be? 2020-06-14T15:55:24.043Z
Outperforming the human Atari benchmark 2020-03-31T19:33:46.355Z
Mod Notice about Election Discussion 2020-01-29T01:35:53.947Z
Circling as Cousin to Rationality 2020-01-01T01:16:42.727Z
Self and No-Self 2019-12-29T06:15:50.192Z
T-Shaped Organizations 2019-12-16T23:48:13.101Z
ialdabaoth is banned 2019-12-13T06:34:41.756Z
The Bus Ticket Theory of Genius 2019-11-23T22:12:17.966Z
Vaniver's Shortform 2019-10-06T19:34:49.931Z
Vaniver's View on Factored Cognition 2019-08-23T02:54:00.915Z
Conversation on forecasting with Vaniver and Ozzie Gooen 2019-07-30T11:16:58.633Z
Commentary On "The Abolition of Man" 2019-07-15T18:56:27.295Z
Is there a guide to 'Problems that are too fast to Google'? 2019-06-17T05:04:39.613Z
Steelmanning Divination 2019-06-05T22:53:54.615Z
Public Positions and Private Guts 2018-10-11T19:38:25.567Z
Maps of Meaning: Abridged and Translated 2018-10-11T00:27:20.974Z
Compact vs. Wide Models 2018-07-16T04:09:10.075Z
Thoughts on AI Safety via Debate 2018-05-09T19:46:00.417Z
Turning 30 2018-05-08T05:37:45.001Z
My confusions with Paul's Agenda 2018-04-20T17:24:13.466Z
LW Migration Announcement 2018-03-22T02:18:19.892Z
LW Migration Announcement 2018-03-22T02:17:13.927Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T23:40:26.663Z
Leaving beta: Voting on moving to LessWrong.com 2018-03-11T22:53:17.721Z
LW 2.0 Open Beta Live 2017-09-21T01:15:53.341Z
LW 2.0 Open Beta starts 9/20 2017-09-15T02:57:10.729Z
Pair Debug to Understand, not Fix 2017-06-21T23:25:40.480Z
Don't Shoot the Messenger 2017-04-19T22:14:45.585Z
The Quaker and the Parselmouth 2017-01-20T21:24:12.010Z
Announcement: Intelligence in Literature Prize 2017-01-04T20:07:50.745Z
Community needs, individual needs, and a model of adult development 2016-12-17T00:18:17.718Z
Contra Robinson on Schooling 2016-12-02T19:05:13.922Z
Downvotes temporarily disabled 2016-12-01T17:31:41.763Z
Articles in Main 2016-11-29T21:35:17.618Z
Linkposts now live! 2016-09-28T15:13:19.542Z
Yudkowsky's Guide to Writing Intelligent Characters 2016-09-28T14:36:48.583Z
Meetup : Welcome Scott Aaronson to Texas 2016-07-25T01:27:43.908Z
Happy Notice Your Surprise Day! 2016-04-01T13:02:33.530Z
Posting to Main currently disabled 2016-02-19T03:55:08.370Z
Upcoming LW Changes 2016-02-03T05:34:34.472Z
LessWrong 2.0 2015-12-09T18:59:37.232Z
Meetup : Austin, TX - Petrov Day Celebration 2015-09-15T00:36:13.593Z
Conceptual Specialization of Labor Enables Precision 2015-06-08T02:11:20.991Z
Rationality Quotes Thread May 2015 2015-05-01T14:31:04.391Z
Meetup : Austin, TX - Schelling Day 2015-04-13T14:19:21.680Z

Comments

Comment by vaniver on Sunzi's《Methods of War》- Introduction · 2020-11-18T23:39:19.741Z · LW · GW

天 refers to material things which affect you but which you yourself lack the power to significantly influence.

I really like this phrasing!

Are there terms that split apart the level of abstraction on which the material thing exists? Like, I am affected by whether or not it's raining right now, without being able to do much in return; I'm also affected by whether or not my currency is undergoing inflation, able to do even less in return; and I'm also affected by whether or not 13 is prime, able to do nothing in return. [My guess is the distinction between these didn't really become crisp until the last century or so, and so probably there aren't specific terms in classical Chinese.]

Practically, I'm interested in getting a sense of how much Sunzi's distinction between Heaven and Earth is metaphorical vs. literal. In the first case, 'heaven' is about paying attention to things on a higher level of abstraction than the things 'earth' is directing you to pay attention to; in the second case, both of them are about the physical environment, but different parts of it (you need to make plans based on whether it rains or shines, and also you need to make plans based on whether there's a hill or there isn't).

Comment by vaniver on Should we postpone AGI until we reach safety? · 2020-11-18T18:54:13.871Z · LW · GW

I think it's obviously a bad idea to deploy AGI that has an unacceptably high chance of causing irreparable harm. I think the questions of "what chance is unacceptably high?" and "what is the chance of causing irreparable harm for this proposed AGI?" are both complicated technical questions that I am not optimistic will be answered well by policy-makers or government bodies. I currently expect it'll take serious effort to have answers at all when we need them, let alone answers that could persuade Congress. 

This makes me especially worried about attempts to shift policy that aren't in touch with the growing science of AI Alignment, but then there's something of a double bind: if the policy efforts are close to the safety research efforts, then you're giving the best available advice to the policymakers, but you pay the price of backlash from AI researchers if they think regulation-by-policy is a mistake. If the two are distant, then the safety researchers can say their hands are clean, but now the regulation is even more likely to be a mistake. 

Comment by vaniver on Sunzi's《Methods of War》- Introduction · 2020-11-18T18:49:35.381Z · LW · GW

Heaven (climate)

I'm curious about the parenthetical; are there multiple words for Heaven, and this is the one that's meant? Or is there a generic word for Heaven that means lots of things, and here you think Sunzi is specifically referring to the climate?

Comment by vaniver on How Roodman's GWP model translates to TAI timelines · 2020-11-16T18:47:23.954Z · LW · GW

Presumably there are two different contour sources: the model fit on the historical data initialized at the beginning of the historical data, and the model fit on the historical data initialized at the end of the historical data. The 'background' lets you see how the actual history compared to what the model predicts, and the 'foreground' lets you see what the model predicts for the future.

And so the black line that zooms off to infinity somewhere around 1950 is the "singularity that got cancelled", or the left line on this simplistic graph.

Comment by vaniver on When Hindsight Isn't 20/20: Incentive Design With Imperfect Credit Allocation · 2020-11-08T20:28:05.069Z · LW · GW

Tho the deadweight loss of a "both pay" solution is probably minimized somewhere between "split evenly" and "both pay fully". For example, in the pirate case, I think there are schemes you can use that make honesty the optimal policy and yet only destroy some of the gold in the case of accidents (or dishonesty), tho this may be sensitive to how many pirates there are and what the wealth distribution is like.

Comment by vaniver on Multiple Worlds, One Universal Wave Function · 2020-11-05T19:51:23.298Z · LW · GW

The ontology doesn't feel muddled to me, although it does feel... not very quantum? Like, a thing that seems to be happening with collapse postulates is that they take seriously the "everything should be quantized" approach, and so insist on ending up with one world (or a discrete number of worlds). MWI instead seems to think that wavefunctions, while having quantized bases, are themselves complex-valued objects, and so there doesn't need to be a discrete and transitive sense of whether two things are 'in the same branch'; instead it seems fine to have a continuous level of coherence between things (which, at the macro-scale, ends up looking like being in a 'definite branch').

[I don't think I've ever seen collapse described as "motivated by everything being quantum" instead of "motivated by thinking that only what you can see exists", and so quite plausibly this will fall apart or I'll end up thinking it's silly or it's already been dismissed for whatever reason. But somehow this does seem like a lens where collapse is doing the right sort of extrapolation of principles, while MWI is just blindly doing what made sense elsewhere. On net, I still think wavefunctions are continuous, and so it makes sense for worlds to be continuous too.]

Like, I think it makes more sense to think of MWI as "first many, then even more many," at which point questions of "when does the split happen?" feel less interesting, because the original state is no longer as special. When I think of the MWI story of radioactive decay, for example, at every timestep you get two worlds, one where the particle decayed at that moment and one where it held together; and as far as we can tell, if time is quantized it must have very short steps, and so this is very quickly a very large number of worlds. If time isn't quantized, then this has to be spread across a continuum, and so thinking of there being a countable number of worlds is right out.

Comment by vaniver on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T22:29:58.647Z · LW · GW

do you have a story for why the public sector remained okay for ~200 years (if it did)?

I less have this sense for the last 200 years than for the preceding 2000 years, but I think for most of human history 'white collar' work has been heavily affiliated with the public sector (which, for most of human history, I think should be counted as including the church). Quite possibly the thing we're seeing is a long-term realignment where more and more administrative and intellectual ability is being deployed by the private sector instead of the public sector, both because the private sector is more able to compete on compensation and because non-financial compensation has degraded in relative value? [For example, ambitious people are less interested in the steady stability of a career track now than I think they were 100 years ago, and more and more public sector work is done in the 'steady career track' way. The ability to provide for a family mattered much more for finding a spouse before the default was a two-income family. In a smaller world, having a 'good enough' salary mattered more than having a shot at a stellar salary.]

Another thing I note is that there's variation in cultural push for various sorts of service; disproportionately many military recruits come from the South and rural areas, for example. Part of this is economic, but I think even more of it is cultural / social (in the sense of knowing and respecting more people who were in the military, coming from a culture that values martial virtues over pacifism, and so on). Hamming's book on doing scientific research, which was adapted from classes he taught at the Naval Postgraduate School, focuses on doing science for social good instead of private benefit, in a way that feels very different from modern Silicon Valley startup culture (and even from earlier Silicon Valley startup culture, which felt much more connected to the national defense system).

It wouldn't surprise me if there were simply more children who grew up wanting to be public servants in the past, because public service was viewed more favorably then. It also wouldn't surprise me if more bits of society are detaching from each other, where it's less and less likely that there are (say) police officers or members of the military in any particular social group, except for social groups that have very heavy representation of those groups. (Of the rationalists I know socially, I think they're at least ten times as likely to publicly state "ACAB" as to have ever considered being a police officer themselves, and I predict this will be even more skewed in the next generation of rationalists.) I know a lot of people who wanted to be teachers or professors because those were the primary adults that they spent time around; perhaps the non-academia public sector is also losing that recruitment battle (relative to the private sector, at least)?

My sense is that the detachment between public and private sector salaries is relatively recent, is concentrated in the higher ranks of the organization, and is driven in large part by greater economic integration and expansion; executive salary roughly tracks the logarithm of organization size, and private sector organizations have gotten much larger than they were 200 years ago. Public sector organizations have also gotten much larger, but haven't been able to increase compensation accordingly.

Comment by vaniver on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T21:50:43.761Z · LW · GW

In this case the interesting thing is tracking how many cultures we form, and what factors control this rate.

The "old web" vs. "new web" seems interesting along this dimension; quite possibly the thing that seemed different about the phpBB days compared to reddit/twitter/Facebook is that an independent forum felt like more its own culture than, say, a Facebook group or a subreddit. I have the vague impression that Discord servers are more "culture-like" than other modern options, but are considerably less durable and discoverable, which seems sad.

Comment by vaniver on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-04T21:43:13.421Z · LW · GW

When it comes to funding science, there’s quite a bit of scrutiny that goes into determining which projects to fund. Labs coordinate to solve problems. Researchers do set their sights on goals. Governments organize field-wide roadmaps to chart the next ten years of research.

Tho if you take analyses like Braden's seriously, quite possibly these filtering efforts have negative value, in that they are more likely to favor projects supported by insiders and senior people, who have historically been bad at predicting where the next good things will come from. "Science advances one funeral at a time," in a way that seems detectable from analyzing the literature.

This isn't to say that planning is worthless, and that no one can see the future. It's to say that you can't buy the ability to buy the right things; you have to develop that sort of judgment on your own, and all the hard evidence comes too late to be useful.

Comment by vaniver on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T16:50:22.240Z · LW · GW

Vinay Gupta, in Cutting Through Spiritual Colonialism, and Venkatesh Rao, in The Gervais Principle, paint a picture where the routine operation and maintenance of life and organizations generates some sort of pollution (focusing mostly on the intrapersonal and interpersonal varieties), and an important function of institutions is basically doing the 'plumbing work' of routing the pollution away from where it does noticeable damage to where it doesn't do noticeable damage. I don't think I fully endorse this lens, but it seems like it resonates moderately well, and combines with trends in a few unsettling ways.

In centuries past, it was common to have communities that cared very strongly about whether or not insiders were treated fairly, but perceived the rest of the world as "fair game" to be fleeced as much as you could get away with; now the 'expanding moral circle' seems more common (while obviously not universal), in a way that makes the 'plumbing work' harder to do. [If life requires aggression, and you have fewer legal targets, this increases the friction life has to work against.] 

It seems like our credit-allocation mechanisms have become weirdly unbalanced, where it's both easier to evade responsibility / delete your identity and start over / impact many people who will never know it was you who impacted them, and simultaneously it's easier to discover crimes, put things on permanent records, and rally the attention of thousands and millions to direct at wrongdoers. The new way that they operate seems to have empowered Social Desirability Bias; once we might have imagined the Very Serious People leading the crowd, and now it seems the crowds are leading the Very Serious People.

This is also one of the ways that I think about the 'crisis in confidence'; see Revolt of the Public for more details, but my basic take is that experts have always been uncertain and incorrect and yet portrayed themselves as certain and correct as part of their role's bargain with broader society. Overconfidence helps experts serve their function of reassuring and coordinating the public, and part of the 'plumbing work' is marginalizing dissent and keeping it constrained to private congregations of experts. But with expanded flow of information, we both have more expertise as a society, and more memory of expert mistakes, and more virulent memes spreading distrust in experts. This feels like the sort of thing where we get lots of short-term benefits in correcting the expert opinion, but also long-term costs in that we lose the ability to coordinate through expertise.

[Feynman in an autobiography describes his father, who sold uniforms, pointing out that uniforms are manufactured and that Feynman shouldn't reflexively trust people because of their uniforms. This seems like great advice for Feynman, but not great advice for everyone in society; the social technology of respecting uniforms does actually do a lot of useful work!]

Comment by vaniver on Where do (did?) stable, cooperative institutions come from? · 2020-11-04T15:57:37.278Z · LW · GW

Great people aren't just motivated by money. They're also motivated by things like great coworkers, interesting work, and prestige. In the private sector, you see companies like Yahoo go into death spirals: Once good people start to leave, the quality of the coworkers goes down, the prestige of being a Yahoo employee goes down

Another fascinating thing that I hadn't realized until it was pointed out to me is that this also means Yahoo has to pay more, as a consequence of being able to offer less non-financial compensation. Because great people like working together, this essentially means you can get a 'bulk discount' on them, because part of their compensation is working with each other.

Comment by vaniver on When Money Is Abundant, Knowledge Is The Real Wealth · 2020-11-04T15:43:51.762Z · LW · GW

If you were the President or as rich as Jeff Bezos, you could use your power or money to just throw a lot more darts at the dartboard. 

From the OP:

Beyond that level, more money mostly just means more ability to spray and pray for solutions - which is not a promising strategy in our high-dimensional world.

Comment by vaniver on Confucianism in AI Alignment · 2020-11-03T17:29:52.663Z · LW · GW

Even if BigCo senior management were virtuous and benevolent, and their workers were loyal and did not game the rules, the poor rules would still cause problems.

If BigCo senior management were virtuous and benevolent, would they have poor rules?

That is to say, when I put my Confucian hat on, the whole system of selecting managers based on a proxy measure that's gameable feels too Legalist. [The actual answer to my question is "getting rid of poor rules would be a low priority, because the poor rules wouldn't impede righteous conduct, but they still would try to get rid of them."]

Like, if I had to point at the difference between the two, the difference is where they put the locus of value. The Confucian ruler is primarily focused on making the state good, and surrounding himself with people who are primarily focused on making the state good. The Legalist ruler is primarily focused on surviving and thriving, and so tries to set up systems that cause people who are primarily focused on surviving and thriving to do the right thing. The Confucian imagines that you can have a large shared value; the Legalist imagines that you will necessarily have many disconnected and contradictory values.

The difference between hiring for regular companies and EA orgs seems relevant. Often, applicants for regular companies want the job, and standard practice is to attempt to trick the company into hiring them, regardless of qualification. Often, applicants for EA orgs want the job if and only if they're the right person for it; if I'm trying to prevent asteroids from hitting the Earth (or w/e) and someone else could do a better job of it than I could, I very much want to get out of their way and have them do it instead of me. As you mention in the post, this just means you get rid of the part of interviews where gaming is intentional; significant difficulty remains. [Like, people will be honest about their weaknesses and try to be honest about their strengths, but accurately measuring those, and fit with the existing team, remains quite difficult.]

Now, where they're trying to put the locus of value doesn't mean their policy prescriptions are helpful. As I understand the Confucian focus on virtue in the leader, the main value is that it's really hard to have subordinates who are motivated by the common good if you yourself are selfish (both because they won't have your example and because the people who are motivated by the common good will find it difficult to be motivated by working for you).

But I find myself feeling some despair at the prospect of a purely Legalist approach to AI Alignment, because it feels like it is fighting against the AI at every step, instead of being able to recruit it to do some of the work for you, and without that last bit I'm not sure how you get extrapolation instead of interpolation. Like, you can trust the Confucian to do the right thing in novel territory, insofar as you gave them the right underlying principles, and the Confucian is operating at a philosophical level where you can give them concepts like corrigibility (where they not only want to accept correction from you, but also want to preserve their ability to accept correction from you, and preserve their preservation of that ability, and so on) and the map-territory distinction (where they want their sensors to be honest, because in order to have lots of strawberries they need their strawberry-counter to be accurate instead of inaccurate). In Legalism, the hope is that the overseer can stay a step ahead of their subordinate; in Confucianism, the hope is that everyone can be their own overseer.

[Of course, defense in depth is useful; it's good to both have trust in the philosophical competence of the system and have lots of unit tests and restrictions in case you or it are confused.]

Comment by vaniver on Location Discussion Takeaways · 2020-11-03T17:04:27.538Z · LW · GW

Habryka suspects that the rationalist community, writ large, basically wouldn’t exist without the Berkeley hub.

In the counterfactual where MIRI had been in Oxford this whole time, or wherever, I think there still would have been Bay Area, NYC, Boston, and Seattle communities. Maybe the Bay Area community would have been centered in SF instead, or something. Basically, I agree with your footnote, but some additional details:

It's less likely that we would have had the sense that there was one 'in-person primary hub', but I'm less convinced that the counterfactual on mission impact is huge here. Like, perhaps each of those communities would be better-developed and healthier without the significant brain drain? Maybe people would have still switched jobs between orgs, but also switched cities, so many more people would have lived in Boston and Seattle and the Bay Area, instead of lots of people who have lived in one other city and the Bay Area?

I think the rationality community is in a weird place where we get significant returns to concentration when considering getting work done (i.e. it'd be great to have every EA/rationalist org in the same spot) and also significant returns to dispersion when considering recruitment (having a healthy EA/rationalist community near every major university and in every tech hub seems like it would be great, and having the orgs communicating in the open with durable artifacts seems like it would be great for bringing people up to speed).

Habryka points out that it's very unlikely that he would have started LessWrong 2.0 if he hadn't known Vaniver socially. Both Habryka and Vaniver(?) initially moved to the Bay because that's where MIRI/CFAR was.

I laid the groundwork for reviving LW while living in Austin. [I suspect this was a core component of me still caring strongly about LW at the time, instead of going "yeah it's sad, but what are you going to do?" while going to in-person events.] In-person interactions were still an important part of it, but they happened at CFAR alumni reunions and Solstices, which I flew to.

Oli deciding to work on the project solidified at an... 80k New Year's house party, I think? I am pretty sure that was end of 2016-start of 2017, and I had just moved to the Bay Area in October 2016 to work for MIRI (and wouldn't have moved to the Bay except to work at an org like MIRI). That was made much easier by both of us living in the same city, and especially the same city as lots of other stakeholders (which made it low-cost to meet with them), but I'm not sure about the counterfactual where those meetings were video calls. [Like, my sense is it's harder to schedule meetings / stay on people's radar without incidental contact, but our meetings with Eliezer were scheduled mostly without incidental contact, I think.]

Sorry for the US-centric viewpoint; I know it's hard to immigrate here, it's just that so many of us are already here.

This also feels like a hard-to-evaluate counterfactual. I have a sense that "most of the good people are already in the US or could get here", but I don't have a great sense of how much that's filtered by only having a good read of the people who are already in the US. If you take the LW core team as an example, I think 2 out of 6 are Americans, and the rest are from various parts of the world here on visas; I don't know whether to say "this is evidence that it's not that hard to get people into the US" or whether to say "in the world where MIRI had always been in Oxford, it would have been even easier for everyone in that reference class to congregate in Oxford, and we would have even more promising people to work on LW."


I think, all things considered, I have over 50% probability that MIRI's decision will be to stay in the Bay, and that more than three quarters of 'in-person Bay Area community members' will stay in or return to the Bay. [I don't work for MIRI at the moment, doing nondisclosed-by-default research made possible by the concentration of people in the Bay, and so even if MIRI moves it's not obvious that I'd follow in less than a year or two.]

The thing I'm most worried about, tho, is something like "the Bay Area growing crazier and more hostile month over month, and us having squandered our obvious chance to do a coordinated move, in ways that make it harder to coordinate future moves." If the 'in-person community' has survived COVID quarantines and people moving away temporarily (which I think it mostly has, but maybe other people's sense of this is very different?), it seems likely to me that it would also survive MIRI moving to Boston (or wherever) and lots of other orgs staying in the Bay. [Like, OpenAI isn't going to move to Boston, OpenPhil probably won't, and so on.]

Maybe I shouldn't be very worried about this? 80k left the Bay, after all, and seems to be doing well, and landed in another EA/rationalist hub. If the temperature in the Bay gradually ramps up, maybe at some future point AI Impacts leaves, and then the LW team leaves, and then MIRI leaves, or whatever, and the existence of secondary hubs means this is a more gradual transition than it might seem.

Comment by vaniver on Why indoor lighting is hard to get right and how to fix it · 2020-11-01T17:36:46.030Z · LW · GW

Although getting some UV might be good for people with a vitamin D deficiency, doing this safely and without other consequences (like degrading plastics or bleaching colors in your office or bedroom) seems hard, and I don't recommend trying to do this.

I have been thinking about this recently (prompted again by vitamin D being a significant factor in protecting against COVID, but also by it gradually making its way to the top of the "well-supported health things I ought to try" list), and am curious for details here. That is, suppose I get a bunch of RayVio 293 nm LEDs, and I want to figure out how to use them in a way that's safe and does minimal other harm. What tools should I be buying to measure things, what sorts of damage should I be looking for, what sorts of setups might make sense?

Comment by vaniver on A tale from Communist China · 2020-10-21T17:51:10.270Z · LW · GW

In general, I think people try to time markets much more than they have skill for. Suppose you think there's a stock bubble; the temptation is to buy, hold as it rises, and then sell at the top of the bubble before everything comes crashing down. But enough people are trying to do that that you need some special skill in order to be on the leading edge, able to sell when there's still a buyer. It's much less risky to just sell the stocks as soon as you think there's a bubble, which foregoes any additional gains but means you avoid the crash entirely (by taking it on voluntarily, sort of).

A similar thing happened at the start of the pandemic, where my plan was basically "look, I don't think I'm going to be especially good at the risk assessment, I want to just lock down now instead of staying open to get the marginal few weeks of meetings or hangouts or whatever," and various other friends said "look, I believe in math, I think it's just paranoia to lock down immediately instead of doing so based on a cost-benefit analysis." None of us knew it at the time, but the official tests were faulty and delayed and other testing was being suppressed, and so the numbers being used for the cost-benefit analyses were significantly underestimating the true amount of viral transmission.

Also relevant: at the start of the pandemic, there were various border closures and regional quarantines that by design had as little warning as possible. Suppose Disease City has too many cases, and also people in Disease City would want to leave if they think they're going to get locked in (because regardless of whether or not they have the disease, it's better for them to be out than in), and the regional / national government wants to lock them in; then the government wants to impose the lockdown first and announce it afterwards, since that minimizes the number of people who can flee.

Notably, if there's a revolution in some country, it's much better for the new government to murder all of the people who would want to leave than to let them leave. Contrast "Who, after all, speaks today of the annihilation of the Armenians?" with anti-Cuban-government politics in the US being driven by Cuban expatriates who felt wronged by the government they fled, or with various potential leaders kept by other powers as puppets ready to be installed when the time is right. If more educated / capitalist Chinese had been allowed to leave China and ended up in the US, it seems likely they could have formed a voting bloc much like the Cuban one, and impacted US-Chinese relations accordingly.

[As it happens, most of the people that I know who speak of the annihilation of the Armenians are themselves part of the Armenian diaspora. Out of the various genocides and purges I'm familiar with, people mostly seem interested in ones to people similar to them, and curiously uninterested in ones that are of their political enemies, or even of the political enemies of people they would like to be allied with.]

In summary, there's no fire alarm for when to leave the country, in part because the situation is adversarial, and this isn't the sort of thing you should expect people to be well-calibrated on, since they don't have many examples to learn from.

---

I think there's also a factor where 'the best time to plant a tree is 20 years ago, and the second best time is today.' If you expect to want to be in Canada instead of the US in 2030, say, then you might expect to prefer the timeline where you move to Canada in 2020 and live there for ten years to the timeline where you live in the US for nine and a half years and then move to Canada the night before the civil war breaks out (or whatever), and then live there for six months. If the first, you'll have been able to invest in lots of things that you expect to stick around; in the second, you might skip making useful home improvements because they aren't mobile. [Similarly, imagining our Chinese intellectuals in the 1950s, if they knew they would want to move to the US in 1965, they might have decided to get the move over with as soon as possible, since then their children might have grown up speaking English, they might have been able to get positions at universities before there was a flood of similar refugees, they might have been able to liquidate Chinese assets at more favorable prices, and so on. Most of the benefit of delaying comes from the hope that actually you won't have to move; this is why the illusory short-term positive signs are one of the most important parts of the OP.]

This story is less obvious when there's a huge difference in value between the options. If you're making net $60k a year in the US and would make net $30k a year in Canada, or whatever, then each year you delay moving you earn twice what you would have. But if the difference is smaller, or you're weighing different psychological benefits against each other, it's probably better to get the move over with than to procrastinate.

Comment by vaniver on The rationalist community's location problem · 2020-10-10T16:25:18.458Z · LW · GW

Austin checks a lot of the same boxes, except for the hub airport one, and is arguably a better cultural fit. There was some talk in 2018 of Delta making it a "mini-hub", but who knows where that went. I don't have enough travel experience in/out of Austin to compare.

I didn't travel that much out of Austin, and mostly to other hubs, but I never had a bad time and often could get direct flights. The main hassle is just that it's far from the other places, and so the flights take a while, but that's always going to be true for at least some people. [I suspect it's better to be close to some places and further from others than medium distance from everyone, but that's not obvious.]

Comment by vaniver on Industrial literacy · 2020-10-08T22:20:45.770Z · LW · GW

Yes, a lot of the increase came from childhood mortality, but life expectancy increased at every age.

Note that life expectancy at 50 and the gap between life expectancy at birth and life expectancy at 1 year basically didn't budge from 1850 to 1900, whereas life expectancy at birth jumped by 10 years over the same time range. I do think there are at least two distinct things going on (probably all of which are related to increased wealth and improved medical care).

Comment by vaniver on Industrial literacy · 2020-10-06T05:31:01.469Z · LW · GW

While I agree with you that reducing child mortality is one of the big wins of progress, I have a sense that you're reasoning about it the wrong way?

Like, I think many (most? nearly all?) people in the long past did have the view that infants or young children are replaceable, because it was adaptive to their circumstances, and their culture promoted adaptive responses to those circumstances. [Practices like not naming a child for a year are a strategy for the parents to not get too attached, and that makes more sense the less likely it is that a child will last a year.] If they saw the level of attachment our culture encourages parents to have to their infants, they would (rightly!) see it as profligate spending only made possible by our massive wealth and technological know-how.

And so in my view, the largest component of the benefit from being in a low infant-mortality world is that parents can afford to treat their children as irreplaceable, which is better for everyone involved. [Like, in the world where parents keep their distance to ameliorate the likely pain of child mortality, the children who survive also have their early experience of the world characterized by distance and low parental investment, including nutritional investment.] The longer you expect things to last, the more you can invest in them--and that goes for relationships and friendships as well.

Comment by vaniver on The rationalist community's location problem · 2020-10-04T21:28:50.860Z · LW · GW

Ease of joining and leaving.

See also Let Me Go Back, which looks at 'barrier to exit' as one of the 'barriers to entry'; trying something out is expensive not just because you have to switch to it, but also because you have to switch back if you don't like it.

Comment by vaniver on The rationalist community's location problem · 2020-10-04T21:21:28.792Z · LW · GW

Could someone elaborate on their thinking?

I think there are lots of different good things that come out of having a primary hub, with pretty different valuations. I basically don't expect people to have the same goals here, and some may hold those goals altruistically but not personally.

For example, one big benefit of a hub is that it makes dating easier if you're looking to date other rationalists (well, except for the whole gender ratio and poly thing). This doesn't matter to me anymore personally, as I've found a long-term partner, but that worked out primarily because I was in the major hub, and so had more options to find a good match. But it still seems like a major benefit of having a primary hub to me; if (say) MIRI wants new hires to be able to date rationalists or EAs, it seems like a good idea to have the office near lots of single rationalists and EAs. [Otherwise, you might find the only people you can hire to work in your volcano lair are the ones that are already married.]

Another benefit of a hub is that you get to have in-person conversations more easily and more spontaneously. I live in a group house that's organized itself physically to try to maximize 'organic connection units', where people end up talking or connecting who otherwise wouldn't have come into contact. The more connection needs to be scheduled, the less of it you get (because taxes lead to deadweight loss) and also the less serendipity you get (because you only talk to the people who you know about).

I think people are influenced by their surroundings and what people around them are doing; the thing where you know lots of people who care about X is social proof that you too should care about X and effort put into it isn't wasted. If everyone is living in some random town and just connected to the Craft or Movement through the internet, this increases the chance that they disengage to do something else instead.

Comment by vaniver on Vaniver's Shortform · 2020-10-04T20:49:56.727Z · LW · GW

A challenge in group preference / info aggregation is distinguishing between "preferences" and "bids." For example, I might notice that a room is colder than I would like it to be; I might want to share that with the group (so that the decision-making process has this info that would otherwise be mostly private), but I'm also ignorant of other people's temperature preferences, and so don't want to do something unilateralist (where I change the temperature myself) or stake social points on the temperature being changed (in that other people should be dissuaded from sharing their preferences unless they agree, or that others should feel a need to take care of me, or so on).

I've seen a lot of solutions to this that I don't like very much, mostly because it feels like this is a short concept but most of the sentences for it are long. (In part because I think a lot of this negotiation happens implicitly, and so making any of it explicit feels like it necessarily involves ramping up the bid strength.)

This also comes up in Circling; there's lots of times when you might want to express curiosity about something without it being interpreted as a demand to meet that curiosity. "I'm interested in this, but I don't know if the group is interested in this."

I think the strategy I'm going to try is 'jointly naming my individual desire and my uncertainty about the aggregate', but we'll see how it goes.

Comment by vaniver on "Zero Sum" is a misnomer. · 2020-10-03T16:04:35.420Z · LW · GW

attention is zero-sum: there's a fix supply (well, as a simplification)

As the post notes, zero-sum in resources is not the same as zero-sum in satisfaction. Even if I can only spend a fixed attention budget, how I spend it determines global satisfaction, not just the distribution of satisfaction among players.
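
To make that concrete, here's a tiny toy sketch (all payoff numbers are invented purely for illustration): the attention spent is identical in each option, but the total satisfaction produced is not.

```python
# Toy example: one unit of attention, different allocations, different totals.
# The payoff numbers below are invented purely for illustration.
options = {
    "scroll feed alone":    {"me": 2, "you": 0},   # total 2
    "pair on your project": {"me": 1, "you": 3},   # total 4
}
for choice, payoffs in options.items():
    print(f"{choice}: total satisfaction = {sum(payoffs.values())}")
```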

Comment by vaniver on Vaniver's Shortform · 2020-10-02T05:13:11.526Z · LW · GW

Corvee labor is when you pay taxes with your time and effort instead of your dollars; historically it's made sense mostly for smaller societies without cash economies. Once you have a cash economy, it doesn't make very much sense; rather than having everyone spend a month a year building roads, it's better to have eleven people fund a twelfth person who builds roads, as that person can get good at it, and will be the person who is best at building roads, instead of a potentially resentful amateur.

America still does this in two ways. The military draft, which was last used in 1973, is still in a state where it could be brought back (the Selective Service System still tracks the men who would be drafted if it were reinstated), and a similar program tracks health care workers who could be drafted if needed.

The other is jury duty. Just like one can have professional or volunteer soldiers instead of having conscripted soldiers, one could have professional or volunteer jurors. (See this op-ed that makes the case, or this blog post.) As a result, they would be specialized and understand the law, instead of being potentially resentful amateurs. The primary benefit of a randomly selected jury--that they will (in expectation) represent the distribution of people in society--is lost by the jury selection process, where the lawyers can filter down to a highly non-representative sample. For example, in the OJ Simpson trial, a pool that was 28% black (and presumably 50% female) led to a jury that was 75% black and 83% female. Random selection from certified jurors seems possibly more likely to lead to unbiased juries (tho they will definitely be unrepresentative in some ways, in that they're legal professionals instead of professionals in whatever randomly selected field).
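
As a rough back-of-the-envelope on how far that outcome is from random selection (using the pool figure quoted above, and the simplifying, false assumption that the twelve jurors are independent random draws from the pool), the chance of nine or more Black jurors comes out to roughly a tenth of a percent:

```python
from math import comb

# Back-of-the-envelope, not a model of actual voir dire: treat the 12 jurors
# as independent draws from a pool that is 28% Black (figure from the comment
# above) and ask how likely 9 or more Black jurors would be under pure chance.
p, n, k_min = 0.28, 12, 9
p_at_least = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))
print(f"P(at least {k_min} of {n} by chance) = {p_at_least:.4f}")  # ~0.001
```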

I'm posting this here because it seems like the sort of thing that is a good idea but is not my comparative advantage to push forward, that might nevertheless be doable with focused effort, and that quite plausibly is rather useful; and it seems like a sadder world if people don't point out fruit that might be good to pick, even fruit they themselves won't pick.

Comment by vaniver on "Zero Sum" is a misnomer. · 2020-10-02T02:28:23.495Z · LW · GW

"Completely adversarial" also better captures the strange feature of zero-sum games where doing damage to your opponent, by the nature of it being zero-sum, necessarily means improving your satisfaction, which is a very narrow class of situations.

Comment by vaniver on Draft report on AI timelines · 2020-09-26T16:27:42.402Z · LW · GW

We could then back out what a rational firm should be willing to invest.

This makes sense, altho I note that I expect the funding here to quite plausibly be 'irrational.' For example, some substantial fraction of Microsoft's value captured is going to global development in a way that seems unlikely to make sense from Microsoft's bottom line (because Microsoft enriched one of its owners, who then decided to deploy those riches for global development). If building TAI comes out of the 'altruism' or 'exploration' budget instead of the 'we expect this to pay back on schedule' budget, you could see more investment than that last category would justify.

Comment by vaniver on The rationalist community's location problem · 2020-09-25T19:22:08.818Z · LW · GW

I think NYC was long a 'second hub', and there were a bunch of third-tier hubs, but I think the relationships between the hubs never really worked out to make a happy global community. Here's a post about some previous context. I also suspect that the community has never really had enough people or commitment to have 'critical mass' for multiple hubs, and this is part of the problem.

I think there are some systems that have successfully figured this out. I am optimistic about a bunch of current EA student groups at top universities, many of which I visited on the SSC road trip, where there's both 1) natural recruitment and 2) natural outflow. If someone graduates from Yale and doesn't stay in New Haven, this is not a surprise; if someone who works as a professional in Austin moves to the Bay Area, this is more of a surprise. This does have a succession problem, where it may be the case that a particular student organizer is great, and once they graduate the group falls apart, but I think at least one university has gone through a few 'generations' of organizers, and there's probably more we can do to support future organizers.

I also think the Mormons have figured this out, where 'Salt Lake' controls a bunch of distribution and publishing and so on and is definitely the 'cultural capital' of the Mormon world. My sense is that in most places, rather than a weekly sermon cooked up by the local pastor you get a high-production values (in all senses) DVD from the central office. I think our version of that is popular blogs and podcasts, where a global community can be reading SSC and listening to the 80k hours podcast and so mostly be in sync with each other, but this only really works for the "excitement about cool topics" and "gradually fleshing out your world model" dimensions, and is not as good for local community norms or building pairwise relationships or so on. 

I think a problem here is that while we have lots of features that are religion-like, we don't really prioritize the "cultural center" aspects, and so there aren't really people who want to be rationalist pastors / bishops / etc.; Eliezer mostly wants to work on AI safety instead of community-building, my sense is that CFAR mostly wants to work on skilling up / recruiting x-risk thinkers instead of community-building, and so on. For example, when I look at plans to make secondary hubs that seem to me likely to actually happen, most of them are parents trying to make good neighborhoods for themselves and their kids, where they are actually taking on the 'burden of cultural ownership' or w/e; I think a lot of orgs that people hope will be Community orgs are instead mostly interested in being Craft orgs.

Comment by vaniver on The rationalist community's location problem · 2020-09-24T19:19:32.920Z · LW · GW

Sure, one could argue that Oakland actually fits the three desiderata, because I left out "low crime," altho I don't think Oakland is actually cheap. The broader point of "you get what you pay for" holds, I think, and the only way you get something 'acceptably cheap' is deliberately deciding to not pay for some things you could pay for.

Comment by vaniver on The rationalist community's location problem · 2020-09-24T04:31:42.731Z · LW · GW

My sense is that coordination for this is basically impossible, because of competing access needs. I am most optimistic about versions of this that will:

1) Happen even if no one else signs on. [For example, people moving to small towns within commuting distance of SF/Berkeley, where it makes sense for them even if no one else moves to Pinole or Moraga or wherever.]
2) Be readily extensible. [If one person buys in Pinole, other people can later buy other houses in Pinole, and slowly shift the balance. Many rationalist group houses started off as a single apartment in a split house, and slowly took over through organic growth. Building a neighborhood of small houses in upstate Vermont to replace your group house, if it works, probably also means someone else could build a subdivision for their group house next door.]
3) Pick a vision and be willing to deliver on it. [You're not going to find a place that has great weather and cheap property value and proximity to great cities; that's not how efficient markets work. Instead, figure out the few criteria that matter most to you, and do what it takes to achieve those criteria.]

This is basically the only way I see for projects to get out of the planning stage and into the reality stage; there will be Some Children Left Behind, and also some people who decide that, well, they do really like the sun but lumenators will be sufficient to make upstate Vermont workable (or whatever).


Separately, I note that 'chance meetings among extraverts' seem to be a pretty powerful factor in shaping the history of cultures and organizations, and think that there really is a very large benefit to being in a central hub; I think those hubs have to become much worse for it to not be worth it anymore. [The main compelling reason I see for moving away from the hub is in order to have children--suburbs exist for a reason!--but for people still trying to find partners or meaningful work, the hubs remain very important.]

Comment by vaniver on The rationalist community's location problem · 2020-09-24T04:05:22.105Z · LW · GW

I think a more attractive option than a group house is something like a pocket neighborhood or baugruppe; I think a location being favorable to that sort of development is a major point in its favor.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-21T17:11:54.751Z · LW · GW

To elaborate on this, I think there are two distinct issues: "do they have the right norms?" and "do they do norm enforcement?". The second is normally good instead of problematic, but makes the first much more important than it would be otherwise. I see Zack_M_Davis as pointing out "hey, if we don't let people enforce norms because that would make normbreakers feel threatened, do we even have norms?", which is a valid point, but which feels somewhat irrelevant to the curi question.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-20T01:46:06.935Z · LW · GW

For what it's worth, I think a decision to ban would stand on just his pursuit of conversational norms that reward stamina over correctness, in a way that I think makes LessWrong worse at intellectual progress. I didn't check out this page, and it didn't factor into my sense that curi shouldn't be on LW.

I also find it somewhat worrying that, as I understand it, the page was a combination of "quit", "evaded", and "lied", of which 'quit' is not worrying (I consider someone giving up on a conversation with curi understandable instead of shameful), and that 'quit' getting wrapped up in the "&c." instead of being the central example seems like it's defining away my main crux.

Comment by vaniver on Draft report on AI timelines · 2020-09-19T17:11:19.497Z · LW · GW

Part 1 page 15 talks about "spending on computation", and assumes spending saturates at 1% of the GDP of the largest country. This seems potentially odd to me; quite possibly the spending will be done by multinational corporations that view themselves as more "global" than "American" or "British" or whatever, and whose fortunes are more tied to the global economy than to the national economy. At most this gives you an extra 2-3 doublings, but that's still 4-6 years on a 2-year doubling time.
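
To spell out that arithmetic (a rough sketch; the GDP figures below are ballpark 2020 numbers I'm supplying, not taken from the report):

```python
from math import log2

# If the spending cap moves from 1% of US GDP to 1% of world GDP, how many
# extra doublings of spending does that allow, and how many years of scaling
# does it buy at an assumed 2-year doubling time? (GDP figures are rough.)
us_gdp, world_gdp = 21e12, 85e12          # ~2020, in dollars
extra_doublings = log2((0.01 * world_gdp) / (0.01 * us_gdp))   # ~2.0
doubling_time = 2                          # years, per the comment above
print(f"~{extra_doublings:.1f} extra doublings, "
      f"~{extra_doublings * doubling_time:.0f} extra years of scaling")
```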

Overall I'm not sure how much to believe this hypothesis; my mainline prediction is that corporations grow in power and rootlessness compared to nation-states, but it also seems likely that bits of the global economy will fracture / there will be a push to decentralization over centralization, where (say) Alphabet is more like "global excluding China, where Baidu is supreme" than it is "global." In that world, I think you still see approximately a 4x increase.

I also don't have a great sense of how we should expect the 'ability to fund large projects' to compare between the governments of the past and the megacorps of the future; it seems quite plausible to me that Alphabet, without pressure to do welfare spending / fund the military / etc., could put a much larger fraction of its resources towards building TAI, but also presumably this means Alphabet has many fewer resources than the economy as a whole (because there still will be welfare spending and military funding and so on), and on net this probably works out to about 1% of total GDP available for megaprojects.

Comment by vaniver on Draft report on AI timelines · 2020-09-19T16:48:40.295Z · LW · GW

Thanks for sharing this draft! I'm going to try to make lots of different comments as I go along, rather than one huge comment.

[edit: page 10 calls this the "most important thread of further research"; the downside of writing as I go! For posterity's sake, I'll leave the comment.]

Pages 8 and 9 of part 1 talk about "effective horizon length", and make the claim:

Prima facie, I would expect that if we modify an ML problem so that effective horizon length is doubled (i.e, it takes twice as much data on average to reach a certain level of confidence about whether a perturbation to the model improved performance), the total training data required to train a model would also double. That is, I would expect training data requirements to scale linearly with effective horizon length as I have defined it.

I'm curious where 'linearly' came from; my sense is that "effective horizon length" is the equivalent of "optimal batch size", which I would have expected to be a weirder function of training data size than 'linear'. I don't have a great handle on the ML theory here, tho, and it might be substantially different between classification (where I can make back-of-the-envelope estimates for this sort of thing) and RL (where it feels like it's a component of a much trickier system with harder-to-predict connections).

Quite possibly you talked with some ML experts and their sense was "linearly", and it makes sense to roll with that; it also seems quite possible that the thing to do here is have uncertainty over functional forms. That is, maybe the effective horizon scales linearly, or maybe it scales exponentially, or maybe it scales logarithmically, or inverse square root, or whatever. This would help double-check that the assumption of linearity isn't doing significant work, and if it is, point to a potentially promising avenue of theoretical ML research.

[As a broader point, I think this 'functional form uncertainty' is a big deal for my timelines estimates. A lot of people (rightfully!) dismissed the standard RL algorithms of 5 years ago for making AGI because of exponential training data requirements, but my sense is that further algorithmic improvement is mostly not "it's 10% faster" but "the base of the exponent is smaller" or "it's no longer exponential", which might change whether or not it makes sense to dismiss it.]
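
To illustrate what carrying that uncertainty might look like (a minimal sketch; the candidate functional forms and the horizon value below are my own placeholders, not anything from the draft):

```python
import math

# Hypothetical data-requirement multipliers as a function of effective horizon
# length H, under different candidate functional forms. The point is just that
# the assumed form changes the answer by orders of magnitude.
forms = {
    "logarithmic": lambda H: math.log(H) + 1,
    "sqrt":        lambda H: math.sqrt(H),
    "linear":      lambda H: H,
    "quadratic":   lambda H: H ** 2,
}
H = 1_000  # placeholder horizon length, relative to the short-horizon case
for name, f in forms.items():
    print(f"{name:12s} -> training data multiplier ~ {f(H):,.0f}")
```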

Comment by vaniver on Draft report on AI timelines · 2020-09-19T16:11:24.839Z · LW · GW

A simple, well-funded example is autonomous vehicles, which have spent considerably more than the training budget of AlphaStar, and are not there yet.

I am aware of other examples that do seem to be happening, but I'm not sure what the cutoff for 'massive' should be. For example, a 'call center bot' is moderately valuable (while not nearly as transformative as autonomous vehicles), and I believe there are many different companies attempting to do something like that, altho I don't know how their total ML expenditure compares to AlphaStar's. (The company I'm most familiar with in this space, Apprente, got acquired by McDonald's last year, which I presume is mostly interested in the ability to automate drive-thru orders.)

Another example that seems relevant to me is robotic hands (plus image classification) at a sufficient level that warehouse pickers could be replaced by robots.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-19T15:51:34.605Z · LW · GW

I think you're denying him an important chance to do error correction via that decision. (This is a particularly important concept in CR/FI)

I agree that if we wanted to extend him more opportunities/resources/etc., we could, and that a ban is a decision to not do that.  But it seems to me like you're focusing on the benefit to him / "is there any chance he would get better?", as opposed to the benefit to the community / "is it reasonable to expect that he would get better?". 

As stewards of the community, we need to make decisions taking into account both the direct impact (on curi for being banned or not) and the indirect impact (on other people deciding whether or not to use the site, or their experience being better or worse).

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-18T16:46:03.989Z · LW · GW

So why hasn't civilization figured that out already? Or is not teaching moral uncertainty some kind of Chesterton's Fence, and teaching it widely would make the world even worse off on expectation?

This is sort of a rehash of sibling comments, but I think there are two factors to consider here.

The first is the rules. It is very important that people drive on the correct side of the road, and not have uncertainty about which side of the road is correct, and not very important whether they have a distinction between "correct for <country> in <year>" and "correct everywhere and for all time."

The second is something like the goal. At one point, people thought it was very important that society have a shared goal, and worked hard to make it expansive; things like "freedom of religion" are what civilization figured out in order to have narrow shared goals (like "keep the peace") without expansive shared goals (like "get as many people as possible to Catholic Heaven"). It is unclear to me whether we're better off with moral uncertainty as a generator for "narrow shared goals", assuming narrow shared goals are what we should be going for.

Comment by vaniver on Open & Welcome Thread - September 2020 · 2020-09-18T16:39:59.309Z · LW · GW

Sometimes people are warned, and sometimes they aren't, depending on the circumstances. By volume, the vast majority of our bans are spammers, who aren't warned. Of users who have posted more than 3 posts to the site, I believe over half (and probably closer to 80%?) are warned, and many are warned and then not banned. [See this list.]

Comment by vaniver on Vaniver's Shortform · 2020-09-12T17:57:59.721Z · LW · GW

My boyfriend: "I want a version of the Dune fear mantra but as applied to ugh fields instead"

Me:

I must not flinch.
Flinch is the goal-killer.
Flinch is the little death that brings total unproductivity.
I will face my flinch.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the flinch has gone there will be nothing. Only I will remain.

Tho they later shortened it, and I think the shorter one was better:

I will not flinch.
Flinch is the goal-killer.
I will face my flinch.
I will let it pass through me.
When the flinch has gone,
there shall be nothing.
Only I will remain.

Him: Nice, that feels like flinch towards

Comment by vaniver on [AN #115]: AI safety research problems in the AI-GA framework · 2020-09-02T21:59:30.308Z · LW · GW

Currently this is fixed manually for each crosspost by converting it to draft-js and then deleting some extra stuff. I'm not sure how high a priority it is to make that automatic.

Comment by vaniver on Prediction = Compression [Transcript] · 2020-09-02T18:44:00.149Z · LW · GW

This talk was announced on LW; check the upcoming events tab for more.

Comment by vaniver on Why is Bayesianism important for rationality? · 2020-09-01T18:20:04.061Z · LW · GW

I think "probabilistic reasoning" doesn't quite point at the thing; it's about what type signature knowledge should have, and what functions you can call on it. (This is a short version of Viliam's reply, I think.)

To elaborate, it's different to say "sometimes you should do X" and "this is the ideal". Like, sometimes I do proofs by contradiction, but not every proof is a proof by contradiction, and so it's just a methodology; but the idea of 'doing proofs' is foundational to mathematics / could be seen as one definition of 'what mathematical knowledge is.'
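
As a toy illustration of the 'type signature' point (my own sketch, nothing canonical): knowledge-as-credence supports an update operation that knowledge-as-boolean doesn't.

from dataclasses import dataclass

@dataclass
class Belief:
    """Knowledge as a credence you can call Bayes' rule on, rather than
    a boolean you can only assert or deny."""
    p: float  # current probability assigned to the hypothesis

    def update(self, likelihood_if_true: float, likelihood_if_false: float) -> "Belief":
        # Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
        numerator = likelihood_if_true * self.p
        denominator = numerator + likelihood_if_false * (1 - self.p)
        return Belief(numerator / denominator)

# Start at 50% and see evidence three times as likely if the hypothesis is true:
print(Belief(0.5).update(0.75, 0.25).p)  # 0.75

The point is just that 'update' is a method on the knowledge itself, not an occasional technique you apply to it.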

Comment by vaniver on Why is Bayesianism important for rationality? · 2020-09-01T04:50:58.554Z · LW · GW

See Eliezer's post Beautiful Probability, and Yvain on 'probabilism'; there's a core disagreement about what sort of knowledge is possible, and unless you're thinking about things in Bayesian terms, you will get hopelessly confused.

Comment by vaniver on Covid 8/27: The Fall of the CDC · 2020-08-27T21:29:37.225Z · LW · GW

I'm sure there's something stopping us, but I'm having trouble pinpointing what it is.

Presumably much of the usefulness of the CDC comes from data collection and reacting to that data; I wouldn't expect the Taiwanese CDC to be collecting data on American COVID cases.

Comment by vaniver on How More Knowledge is Making Us Dumber · 2020-08-27T20:54:25.409Z · LW · GW

See Why The Tails Come Apart, which I think is a more compelling take than "if you have too much of a good thing, you get trapped."

Comment by vaniver on Rereading Atlas Shrugged · 2020-08-27T03:55:46.062Z · LW · GW

In reality, the strike would never work, because the actual leaders of industrial society aren't all implicit Objectivists, and can't be convinced even in one of John Galt's three-hour conversations.

This doesn't seem like an obstacle to me; in the story, there are plenty of 'leaders of industrial society' who stick around until the bitter end.

And worse, if it did work, I think it would be an utter disaster—society would collapse, and it would not be easy for the strikers to come back and pick up the pieces.

I do think Rand is pretty clear about this also, although I think she still undersells it. One of the basic arguments from Adam Smith is that specialization of labor is a huge productivity booster, and the size of the market determines how much specialization it can support. If you reduced the 'market size' of the Earth from roughly one billion participants to roughly one million participants, you should expect things to get way worse, and even more so if the market size shrinks to roughly one thousand participants. (Given the number of people who are mentioned working for the various named strikers, I think a thousand is a better estimate for the number of people in Galt's Gulch than ten or a hundred, but she might have had a hundred in mind.) You can sort of get around this with imported capital, but then it's a long and lonely road back up.

Time has also been very unkind to this; you're not going to have a semiconductor industry with a thousand people, and I'm not sure about a million, either.

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-21T21:30:36.196Z · LW · GW

in part since I didn't see much disagreement.

FWIW, I appreciated that your curation notice explicitly includes the desire for more commentary on the results, and that curating it seems to have been a contributor to there being more commentary. 

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-21T21:28:51.863Z · LW · GW

I imagine this was not your intention, but I'm a little worried that this comment will have an undesirable chilling effect.

Note that there are desirable chilling effects too. I think it's broadly important to push back on inaccurate claims, or ones that have the wrong level of confidence. (Like, my comment elsewhere is intended to have a chilling effect.)

Comment by vaniver on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-21T03:52:52.351Z · LW · GW

A realistic example of this is that many onsen ban tattoos as an implicit ban on yakuza, which also ends up hitting foreign tourists with tattoos.

It feels to me like there's an important plausible-deniability point here ("oh, it's not that we have anything against yakuza, we just think tattoos are inappropriate for mysterious reasons") and an important simplicity point here (rather than a subjective judgment of whether or not a tattoo is a yakuza tattoo, there's the objective judgment of whether or not a tattoo is present).

I can see it going both ways, where sometimes the more complex rule doesn't pay for itself, and sometimes it does, but I think it's important to take into account the costs of rule complexity.

Comment by vaniver on Matt Botvinick on the spontaneous emergence of learning algorithms · 2020-08-20T22:39:28.908Z · LW · GW

And the update should be fairly strong, given that this was (prior to my comment) the highest-upvoted post ever by AF karma.

Given karma inflation (as users gain more karma, their votes are worth more, but this doesn't propagate backwards to earlier votes they cast, and more people become AF voters than lose AF voter status), I think the karma differences between this post and these four other 50+ karma posts [1 2 3 4] are basically noise. So I think the actual question is "is this post really in that tier?", to which "probably not" seems like a fair answer.
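
As a toy illustration of the mechanism (made-up numbers and a made-up weighting rule, not the site's actual one), here is how two posts of identical quality can collect noticeably different karma purely because of when they were voted on:

import random

random.seed(0)

def toy_vote_weight(voter_karma):
    # Hypothetical rule: higher-karma voters cast stronger votes.
    return 1 if voter_karma < 1000 else 2

def simulate_post(year):
    # Pretend the voter pool's average karma grows over time.
    voter_karmas = [max(random.gauss(300 * year, 400), 0) for _ in range(30)]
    return sum(toy_vote_weight(k) for k in voter_karmas)

print("identical post, voted on in year 1:", simulate_post(1))
print("identical post, voted on in year 4:", simulate_post(4))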

[I am thinking more about other points you've made, but it seemed worth writing a short reply on that point.]