Welcome to LessWrong!

2019-06-14T19:42:26.128Z · score: 67 (21 votes)
Comment by benito on FB/Discord Style Reacts · 2019-06-02T01:04:38.790Z · score: 2 (1 votes) · LW · GW

When you say social media, do you avoid using them in team Slack/Discord?

Von Neumann’s critique of automata theory and logic in computer science

2019-05-26T04:14:24.509Z · score: 30 (11 votes)

Ed Boyden on the State of Science

2019-05-13T01:54:37.835Z · score: 64 (16 votes)
Comment by benito on Why books don't work · 2019-05-12T19:44:11.194Z · score: 2 (1 votes) · LW · GW

Still reading, but I certainly got off the bus a bit with this paragraph (which seems largely false to me).

Let’s begin by looking at textbooks in practice. It’s striking that academic courses are often structured around textbooks, but lots of people spend the extra time and money to enroll in those courses—rather than just studying the textbooks independently. Indeed, I suspect that textbooks are mostly purchased for course syllabi, not for self-study. Sure: some people take courses because they want a credential. But plenty of students genuinely feel they’ll learn more by taking courses than they would by studying those courses’ textbooks. Assuming students’ feelings aren’t completely misplaced, courses must be offering something extra that’s important to how people learn.
Comment by benito on Open Thread May 2019 · 2019-05-03T01:33:06.549Z · score: 9 (5 votes) · LW · GW

Welcome, and I hope you enjoyed reading ~2 million words of content!

Because we're making more. So much more.

(And have fun with the potential LW meetup :) )

Comment by benito on Literature Review: Distributed Teams · 2019-05-02T22:15:19.186Z · score: 4 (2 votes) · LW · GW

Datapoint: Stripe's Fifth Engineering Hub is Remote. HN discussion.

Comment by benito on Habryka's Shortform Feed · 2019-05-01T01:03:22.104Z · score: 14 (4 votes) · LW · GW

Note that Paul Christiano warns against encouraging sluggish updating by massively publicising people’s updates and judging them on it. Not sure what implementation details this suggests yet, but I do want to think about it.

https://sideways-view.com/2018/07/12/epistemic-incentives-and-sluggish-updating/

Comment by benito on Asymmetric Justice · 2019-04-30T21:35:36.539Z · score: 8 (4 votes) · LW · GW

Pretty sure the only interesting thing here is Twitter, and how it puts different cultures with different ideas of what counts as a norm violation into a big room with each other. This doesn’t lead to tolerance but instead to interminable anger and slap-downs, because enough people think their own norms are ‘obvious’ rather than ‘optimised for a particular environment’. Friend groups and scientists and journalists and businesspeople applying their areas’ norms to each other 100% of the time? Ugh.

Comment by benito on Habryka's Shortform Feed · 2019-04-29T01:01:22.612Z · score: 3 (2 votes) · LW · GW

Do karma notifications disappear if you don’t check them that day? My model of Zvi suggested to me this is attention-grabbing and bad. I wonder if it’s better to let folks be notified of all days’ karma updates ‘til their most recent check-in, and maybe also see all historical ones ordered by date if they click on a further button, so that the info isn’t lost and doesn’t feel scarce.

Why does category theory exist?

2019-04-25T04:54:46.475Z · score: 35 (7 votes)
Comment by benito on Literature Review: Distributed Teams · 2019-04-16T09:15:27.891Z · score: 10 (5 votes) · LW · GW

This is awesome, thanks.

In case it’s of interest to anyone, I recently wrote down some short, explicit models of the costs of remote teams (I did not try to write the benefits). Here’s what I wrote:

  • Substantially increases activation costs of collaboration, leading to highly split focus of staff
  • Substantially increases costs of creating common knowledge (especially in political situations)
  • Substantially increases barriers to building trust (in-person interaction is key for interpersonal trust)
  • Substantially decreases communication bandwidth - both rate and quality of feedback - making subtle, fine-grained, and specific positive feedback harder, and making strong negative feedback on bad decisions much easier, leading to risk-aversion.
  • Substantially increases cost of transmitting potentially embarrassing information, and incentivises covering up of low productivity, as it’s very hard for a manager to see the day-to-day and week-to-week output.
Comment by benito on Excerpts from a larger discussion about simulacra · 2019-04-10T23:29:58.421Z · score: 12 (3 votes) · LW · GW

Let me check I'm following with some simple claims:

  • If it's common knowledge that we're in world 3, then we're in world 4.
  • If it's common knowledge that we're in world 2, then we're in world 4.
  • The key value to being in world n+1 is that you can outplay all the people in world n.
  • To move back from world 2 into world 1, one can punish inaccurate job titles.
  • To move back from world 3 into world 2, one can punish not treating workers according to their titles.
  • You can't move back from world 4 to world 3.
Comment by Benito on [deleted post] 2019-04-08T14:00:26.491Z

Dupe.

Comment by benito on What are the advantages and disadvantages of knowing your own IQ? · 2019-04-03T22:32:44.149Z · score: 11 (7 votes) · LW · GW

People seem to treat it in a fatalistic way, like they’ve been told what their score will be at the end of the game, as opposed to one of their base stats (like finding out how tall you are). I tested myself on the Big 5 lately, and finding out I have a fairly extreme baseline on things like neuroticism and intellect has been surprisingly valuable for understanding myself.

I understand there are also subcategories of IQ, and I am interested to know if there’s an IQ test I can take which gives me info on a variety of the more robust components of IQ (whatever they are, I don’t actually know). I could imagine this giving me advice of the type “In general try strongly to use verbal reasoning over spatial reasoning, and if you’re in a situation where spatial reasoning is necessary, make a conscious plan to put in more deliberate practice than seems necessary for the median similarly smart person around you who is learning the same skill.” If I expected to get 3+ big recommendations like that I think I’d be quite excited to pay for a test.

I think that having IQ tie more closely to your decisions might help people understand it better than if it’s just an abstract immutable number that says you’re worse than these other people, and having it be multifactorial could be a way to help there?

But unless I can actually tie it to some decisions, I do expect finding out my IQ to make me depressed on net. Perhaps I can just use it to figure out whether or not it’s on the table for me to do math at MIRI, though my sense is that philosophical sophistication is much more the bottleneck there.

Comment by benito on LW Update 2019-04-02 – Frontpage Rework · 2019-04-03T20:23:54.327Z · score: 3 (2 votes) · LW · GW

I am especially interested to hear about any strong positive/negatives from mobile and tablet users.

Comment by benito on Dependability · 2019-03-29T22:28:48.196Z · score: 6 (3 votes) · LW · GW

Yeah, I think CFAR has been heavy tailed, and I would predict that there are some individuals for whom it has counterfactually caused them to solve big problems like this.

Comment by benito on Please use real names, especially for Alignment Forum? · 2019-03-29T18:36:01.591Z · score: 7 (4 votes) · LW · GW

People do assign a fair amount of status based on attractiveness of faces, and I think it's good on the margin to not introduce that class of bias to the discussion. My current guess is that the costs aren't commensurate with the benefits of faster recognition.

Comment by benito on Humans Who Are Not Concentrating Are Not General Intelligences · 2019-03-28T15:46:23.246Z · score: 5 (3 votes) · LW · GW

I feel like I learned something very important about my mind - you're right: if I skim these low-level-pattern-matched paragraphs, they read as basically fine to me. This plausibly has quite important implications for AI too. So I've curated this post.

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-03-23T15:42:57.198Z · score: 6 (3 votes) · LW · GW

Reminder to do this.

(I will stop reminding you if you ask, but until then I am a fan of helping public commitment get acted on.)

Comment by benito on Has "politics is the mind-killer" been a mind-killer? · 2019-03-21T23:32:15.103Z · score: 5 (3 votes) · LW · GW

Yes, the new reading is "Politics isn't a good place to practice rationality unless all the discussants are already rational". Not that you shouldn't engage in discussion of politics, just that you shouldn't go to train rationality there (when not already well practiced in other areas).

Comment by benito on "Other people are wrong" vs "I am right" · 2019-03-18T15:08:36.659Z · score: 18 (8 votes) · LW · GW

I already told Buck that I loved this post. For this curation notice, let me be specific about why.

  • Posts from people who think carefully and seriously about difficult questions writing about some of the big ways they changed their mind over time are rare and valuable (other examples: Holden, Eliezer, Kahneman).
  • OP is unusually transparent, in a way that leads me to feel I can actually update on the data rather than holding it in an internal sandbox. I feel it has not been as adversarially selected as most other writings by someone about themselves, making it extremely valuable data. (Where data is normally covered up, even small amounts of true data are often very surprising.)
  • I find the specific update quite useful, including all of the examples. It fits together with Eliezer's claim (at the end of section 5 here) that you can figure out which experts are right/wrong far more often than you can come up with the correct theory yourself.
Comment by benito on Has "politics is the mind-killer" been a mind-killer? · 2019-03-18T14:15:19.661Z · score: 4 (2 votes) · LW · GW

Note: even with you making a point of it, it took me two reads to understand why my initial read of "unless all the discussants are already rational" was wrong.

Comment by benito on Understanding information cascades · 2019-03-14T17:24:56.050Z · score: 12 (5 votes) · LW · GW

Agreed. I realise the OP could be misread; I've updated the first paragraph with an extra sentence mentioning that summarising and translating existing work/literature in related domains is also really helpful.

Comment by benito on Understanding information cascades · 2019-03-14T16:16:03.430Z · score: 7 (4 votes) · LW · GW

Thanks for the pointers to network science, Jan. I don't know this literature, and if it's useful here then I'm glad you understand it well enough to guide us (and others) to key parts of it. I don't see yet how to apply it to thinking quantitatively about scientific and forecasting communities.

If you (or another LWer) think that the theory around universality classes is applicable in thinking about how to ensure good info propagation in e.g. a scientific community, and you're right, then I (and Jacob and likely many others) would love to read a summary, posted here as an answer. Might you explain how understanding the linked paper on universality classes has helped you think about info propagation in forecasting communities / related communities? Concrete heuristics would be especially interesting.

(Note that Jacob and I have not taken a math course in topology or graph theory and won't be able to read answers that assume such, though we've both studied formal fields and could likely pick it up quickly if it seemed practically useful.)

In general we're not looking for *novel* contributions. To give an extreme example, if one person translates an existing theoretical literature into a fully fleshed out theory of info-cascades for scientific and forecasting communities, we'll give them the entire prize pot.
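To gesture at the kind of translation I have in mind, here's a minimal toy simulation, in Python, of the standard sequential binary-choice cascade model. This is my own sketch with made-up parameters, not code from any particular paper: each agent gets a private signal that's correct with probability p, sees all earlier agents' public actions, and once the visible history outweighs any single signal, agents stop revealing what they privately know.

```python
import random

def run_cascade(p=0.7, n_agents=25, rng=None):
    # Each agent receives a private signal that matches the true state
    # with probability p, and sees every earlier agent's public action.
    rng = rng or random.Random()
    state = rng.randrange(2)   # the true answer, 0 or 1
    net = 0                    # net signals the public history has revealed
    actions = []
    for _ in range(n_agents):
        signal = state if rng.random() < p else 1 - state
        s = 1 if signal == 1 else -1
        if abs(net) >= 2:
            # History outweighs any single signal, so the agent ignores its
            # own signal, and its action reveals nothing: the cascade.
            action = 1 if net > 0 else 0
        else:
            total = net + s
            action = signal if total == 0 else (1 if total > 0 else 0)
            net += 1 if action == 1 else -1  # here the action reveals the signal
        actions.append(action)
    return state, actions

# With p=0.7, a sizeable minority of runs (~15%) lock in on the wrong answer:
trials = [run_cascade(rng=random.Random(i)) for i in range(2000)]
print(sum(acts[-1] != state for state, acts in trials) / len(trials))
```

The interesting quantity is how often whole populations converge on the wrong answer despite most members privately holding evidence against it; a fleshed-out theory would tell us how that probability scales with signal quality and network structure.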

Formalising continuous info cascades? [Info-cascade series]

2019-03-13T10:55:46.133Z · score: 17 (4 votes)

How large is the harm from info-cascades? [Info-cascade series]

2019-03-13T10:55:38.872Z · score: 23 (4 votes)

How can we respond to info-cascades? [Info-cascade series]

2019-03-13T10:55:25.685Z · score: 15 (3 votes)

Distribution of info-cascades across fields? [Info-cascade series]

2019-03-13T10:55:17.194Z · score: 15 (3 votes)

Understanding information cascades

2019-03-13T10:55:05.932Z · score: 55 (19 votes)
Comment by benito on Thoughts on Human Models · 2019-03-13T09:29:04.954Z · score: 10 (4 votes) · LW · GW

This is a very carefully reasoned and detailed post, which lays out a clear framework for thinking about approaches to alignment, and I'm especially excited because it points to one quadrant - engineering-focused research without human models - as highly neglected. For these three reasons I've curated the post.

Comment by benito on In My Culture · 2019-03-07T10:29:51.707Z · score: 34 (13 votes) · LW · GW

Elo, I don’t know what you’re talking about. Can you point to something specific in the text? My read of the post is a bid to reliably make implicit culture explicit in cases where interpersonal conflict is happening, plus a ton of examples of Duncan making his own implicit culture explicit, with an explicit tag of “I’m not advocating for these”. I’m not sure whether it’s computationally simple to notice when the conflict is and isn’t due to differences in implicit norms, but the OP has clearly had a bunch of effort put in to understand how people will read it and try to help them understand precisely what is and isn’t being said, and is definitely intending to make the world a better place.

In general, spending a paragraph publicly punishing a post without showing how the post violates any norms is IMO a bad-faith way of enforcing norms, though I may have misunderstood you.

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-03-03T20:20:43.293Z · score: 6 (3 votes) · LW · GW

Reminder to do this.

Comment by benito on Rule Thinkers In, Not Out · 2019-03-03T20:13:15.891Z · score: 4 (2 votes) · LW · GW

I edited your comment to add the spoiler cover. FYI the key for this is > followed by ! and then a space.
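(For example, if I'm remembering the syntax right, typing a line as ">! This text is hidden" will put it behind a spoiler cover.)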

Comment by benito on Karma-Change Notifications · 2019-03-03T09:38:38.228Z · score: 5 (3 votes) · LW · GW

Yeah, if you open up the new karma button, you’ll see a ‘Change Settings’ button.

Comment by benito on Less Competition, More Meritocracy? · 2019-02-27T11:28:33.236Z · score: 2 (1 votes) · LW · GW

Datapoint: I got the point about challenge equilibria being the place where everyone has to start fighting and taking risks. However I thought that 'concession' referred to the employers making concessions to weaker candidates, by hiring some. I suppose the paper's explanation makes more sense.

Comment by benito on How good is a human's gut judgement at guessing someone's IQ? · 2019-02-26T11:37:58.452Z · score: 11 (5 votes) · LW · GW

"Conservation of Virtue effect" <- aka Berkson's paradox.

Comment by benito on Blackmail · 2019-02-23T20:01:53.282Z · score: 9 (2 votes) · LW · GW

On a hypothetical ‘blackmail industry’: I don’t know if it’s legal, but there appear to already be Chinese banks that require you to send nudes before getting a loan, so that they can release them if you don’t repay in time. If this was representative of the blackmail world, I think I’m against it.

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-02-20T13:06:21.840Z · score: 2 (1 votes) · LW · GW

Gosh I'm so irritated that you gave the reminder before me, I was looking forward to showing off my basic calendar-use skills ;-)

Anyway, am also looking forward to Zvi's lessons and updates!

Comment by benito on The Case for a Bigger Audience · 2019-02-20T01:02:23.557Z · score: 4 (2 votes) · LW · GW

I think it was mostly due to the topic, which was important and something that lots of people felt a desire to follow and contribute to. (Though I agree discussion prompts are good.)

Comment by benito on Avoiding Jargon Confusion · 2019-02-18T09:41:46.069Z · score: 2 (1 votes) · LW · GW

Sure, I’ll try to find one later today.

Edit: Added some more detail.

Comment by benito on Avoiding Jargon Confusion · 2019-02-18T02:15:49.749Z · score: 4 (2 votes) · LW · GW

Edit: I mis-remembered something, so I won't leave false info up. Comment re-written.

I think there's a related way jargon can get confused, which is where the central example used to convey it is selected for controversy, not accuracy. I have an example, but I'm not sure of it.

Claim: 'Nudge' is a fairly general idea, but the most common example used is one that has been selected for controversy rather than centrality to the concept.

I remember once seeing a talk by Cass Sunstein where he expressed irritation with the fact that everyone thinks of 'nudge' as the thing where you change organ donation from opt-in to opt-out. I recall him wishing he'd never used that example, though I don't remember precisely what reason he gave at the time.

I looked around, and there's a post by him here expressing that he prefers 'mandated choice' for organ donation, where you don't opt-in or opt-out, you're just forced to explicitly make the decision (i.e. it's a required question when you renew your driver's license).

Other examples Sunstein uses are 'putting fruit at eye level' in a store, or adding the image of a housefly to men's urinals to 'improve aim'. I think the issue with the opt-in / opt-out example is that it's strongly trying to route around your agency to get you to make a choice that isn't obviously what you want. And the opt-in/opt-out example has been fairly controversial (Muslim groups opposed it in the UK), which I can imagine contributing to it being the most widespread example.

Regarding routing around agency: I know that every time I get an 'opt-out of this newsletter' box when signing up on a website, I feel like they're acting adversarially, in a way I wouldn't if they'd said "choose from the drop-down whether you'd like our newsletter".

Can anyone who's looked into this in depth confirm the above account of 'nudge', and whether opt-in / opt-out is non-central and has been selected for non-epistemic reasons?

Comment by Benito on [deleted post] 2019-02-16T22:57:02.086Z

Hm, good point. Will come back later and see if I can rewrite into a better question.

Comment by benito on Some disjunctive reasons for urgency on AI risk · 2019-02-16T20:00:25.149Z · score: 4 (5 votes) · LW · GW

I think the answer to the first question is that, as with every other (important) industry, the people in that industry will have the time and skill to notice the problems and start working on them. The FOOM argument says that a small group will form a singleton quickly, and so we need to do something special to ensure it goes well, and the non-FOOM argument is that AI is an industry like most others, and like most others it will not take over the world in a matter of months.

Comment by benito on Masculine Virtues · 2019-02-13T22:44:49.011Z · score: 2 (1 votes) · LW · GW

+1 to being interested in reading this :)

Comment by benito on The Case for a Bigger Audience · 2019-02-10T19:29:31.744Z · score: 7 (4 votes) · LW · GW

Yeah, I was not saying the posts invented the terms, I was saying they were responsible for my usage of them. I remember at the time reading the post Goodhart Taxonomy and not thinking it was very useful, but then repeatedly referring back to it a great deal in my conversations. I also ended up writing a post based on the four subtypes.

Added: Local Validity and Free Energy are two other examples that obviously weren't coined here, but that the discussion here caused me to use quite a lot.

Comment by benito on The Case for a Bigger Audience · 2019-02-10T00:48:01.445Z · score: 5 (3 votes) · LW · GW

Actually in my head I was more counting the tail conversations (e.g. where I use a term 20-30 times), but you're right that the regular conversations will count for most of the area under the curve. Slack, Goodharting, and Common Knowledge are all ones I use quite frequently.

Comment by benito on Book Review: The Structure Of Scientific Revolutions · 2019-02-09T21:07:57.543Z · score: 2 (1 votes) · LW · GW

Interesting analogy here.

Comment by benito on The Case for a Bigger Audience · 2019-02-09T21:02:06.183Z · score: 34 (13 votes) · LW · GW
Have any posts from LW 2.0 generated new conceptual handles for the community like "the sanity waterline"?

As a datapoint, here's a few I've used a bunch of times in real life due to discussing them on LW (2.0). I've used most of these more than 20 times, and a few of them more like 2000 times.

Embedded Agency, Demon Threads, Slack, Combat vs Nurture Culture, Rationality Realism, Local Validity, Common Knowledge, Free Energy, Out to Get You, Fire Alarm, Robustness to Scale, Unrolling Social Metacognition, The Steering Problem, Goodhart's Law.

Comment by benito on The Case for a Bigger Audience · 2019-02-09T20:48:13.316Z · score: 31 (8 votes) · LW · GW

If I put on my startup hat, I hear this proposal as "Have you considered scaling your product by 10x?" A startup is essentially a product that can (and does) scale by multiple orders of magnitude to be useful to massive numbers of people, producing significant value for the consumer population, and if you share attributes of a startup, it's a good question to ask yourself.

That said, many startups scale before their product is ready. I have had people boast to me about how much funding they've gotten for their startup, without giving me a story for how they think they can actually turn that funding into people using their product. Remember that time Medium fired 1/3rd of its staff. There are many stories of startups getting massive amounts of funding and then crashing. So you don't want to scale prematurely.

To pick something very concrete, one question you could ask is "If I told you that LW had gotten 10x comments this quarter, do you update that we'd made 10x or even 3x progress on the art of human rationality and/or AI alignment (relative to the amount of progress we made on LW the quarter before)?" I think that isn't implausible, but I think that it's not obvious, and I think there are other things to focus on. To give a very concrete example that's closer to work we've done lately, if you heard that "LessWrong had gotten 10x answers to questions of >50 karma this quarter", I think I'd be marginally more confident that core intellectual progress had been made, but still that metric is obviously very goodhart-able.

A second and related reason to be skeptical of focusing on moving comments from 19 to 179 at the current stage (especially if I put on my 'community manager hat') is a worry about wasting people's time. In general, LessWrong is a website where we don't want many core members of the community to be using it 10 hours per day. Becoming addictive and causing all researchers to be on it all day could easily be a net negative contribution to the world. While none of your recommendations were about addictiveness, there are related ways of increasing the number of comments, such as showing a user's karma score on every page, like LW 1.0 did.

Anyway, those are some arguments against. I overall feel like we're in the 'figuring out the initial idea and product' stage rather than the 'execute' stage, which is where my thoughts are spent presently. I'm interested in more things like creating basic intro texts in AI alignment, creating new types of ways of knowing what ideas are needed on the site, and focusing generally on the end of the pipeline of intellectual progress right now, before focusing on getting more people spending their time on the site. I do think I'd quickly change my mind if net engagement of the site was decreasing, but my current sense is that it is slowly increasing.

Thoughts?

Comment by benito on EA grants available (to individuals) · 2019-02-07T16:14:40.943Z · score: 10 (6 votes) · LW · GW

Just a short note to say that CEA’s “EA Grants” programme is funded in part by OpenPhil.

https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2017#Budget_and_room_for_more_funding

Comment by benito on Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?" · 2019-02-07T01:46:38.072Z · score: 2 (1 votes) · LW · GW
The criticism is expecting counter-criticism.

I might slightly alter to one of

The critique-author commits to writing a response post 2-4 weeks later responding to the comments, or alternatively a response post 1-2 months later responding to all posts on LW with >20 karma that critique the initial post.
Comment by benito on Is the World Getting Better? A brief summary of recent debate · 2019-02-06T17:46:24.816Z · score: 6 (4 votes) · LW · GW

The summary is great, thanks a lot!

Comment by benito on Conclusion to the sequence on value learning · 2019-02-06T17:17:02.942Z · score: 2 (1 votes) · LW · GW

Related: Gwern wrote a post arguing that people have an incentive to build a goal-directed AI over a non-goal directed AI. See the references here.

Comment by benito on If Rationality can be likened to a 'Martial Art', what would be the Forms? · 2019-02-06T16:00:47.535Z · score: 6 (3 votes) · LW · GW

Probabilistic forecasting (for evaluative thinking) and Fermi estimates (for generative thinking).
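To give the flavour of the latter, here's a worked sketch of a Fermi estimate, the classic piano-tuners drill. Every number below is a deliberately rough assumption; the point of the form is that multiplied rough factors still land within an order of magnitude of the truth.

```python
# Fermi estimate: piano tuners in Chicago. All inputs are rough guesses.
population = 3_000_000          # people in Chicago, to one significant figure
people_per_household = 2
households_with_piano = 1 / 20  # guess: 1 in 20 households own a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 2
workdays_per_year = 250

pianos = population / people_per_household * households_with_piano
demand = pianos * tunings_per_piano_per_year              # tunings needed per year
capacity = tunings_per_tuner_per_day * workdays_per_year  # tunings one tuner can do per year
print(round(demand / capacity))  # -> 150, i.e. "on the order of a hundred"
```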

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

2019-02-04T22:08:34.337Z · score: 46 (16 votes)
Comment by benito on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-04T22:00:43.336Z · score: 17 (8 votes) · LW · GW

Reflections

I definitely am not quite sure what the epistemic state of the paper is, or even its goal. Bostrom, Dafoe and Flynn keep mentioning that this paper is not a complete list of desiderata, but I don't know what portion of key desiderata they think they've hit, or why they think it's worthwhile at this stage to pre-emptively list the desiderata that currently seem important.

(Added: My top hypothesis is that Bostrom was starting a policy group with Dafoe as its head, and thought to himself "What are the actual policy implications of the work in my book?" and then wrote them down, without expecting it to be complete, just an obvious starting point.)

As to my thoughts on whether the recommendations in the paper seem good... to be honest, it all felt so reasonable and simple (added: this is a good thing). There were no big leaps of inference. It didn't feel surprising to me. But here are a few updates/reflections.

I have previously run the thought experiment "What would I do if I were at the start, or just before the start, of the industrial revolution?" Thoughts about massive turbulence, redistribution, concentration, and adaptability seemed like natural focal concerns to me, but I had not made them as precise or as clear as the paper had. Then again, I'd been thinking more about what I as an individual should do, not how a government or larger organisation should approach the problem. I definitely hadn't thought about population dynamics in that context (which were also a big deal after the industrial revolution - places like England scaled by an order of magnitude, requiring major infrastructural changes in politics, education, industry, and elsewhere).

I think that the technical details of AI are most important in the sections on Efficiency and Population. The sections on Allocation and Process I would expect to apply to any technological revolution (industrial, agricultural, etc).

I'm not sure that this is consistent with his actions, but I think it's likely that Ben from yesterday would've said the words "In order to make sensible progress on AI policy you require a detailed understanding of the new technology". I realise now that, while that is indeed required to get the overall picture right, there is progress to be made that merely takes heed of this being a technological revolution of historic proportions, and does not depend too much on which particular technological revolution we're going through.

I've seen arguments in another discussion here, and in the Vulnerable World Hypothesis paper (LW discussion here), for the need for the ability to execute a massive coordination increase. I'm going to definitely think more about 'conditional stabilization', how exactly it follows from the conceptual space of thinking about singletons and coordination, and what possible things it might look like (global surveillance seems terrible on the face of it; I wonder if moving straight to that is premature, and I think there are probably more granular ways of thinking about surveillance).

In general this paper is full of very cautious and careful conceptual work, based on simple arguments and technical understandings of AI and coordination. I don't trust many people to do this without vetting the ideas in depth myself or without seeing a past history of their success. Bostrom certainly ticks the latter box and weakly ticks the former box for me (I've yet to personally read enough of his writings to say anything stronger there), and given that he's a primary author on this paper, I feel epistemically safe taking on these framings without 30-100 hours of further examination.

I hope to be able to spend a similar effort summarising the many other strategic papers Bostrom and others at the FHI have produced.

Feedback

For future posts of a similar nature, please PM me if you have any easy changes that would've made this post more useful to you / made it easier to get the info you needed (I will delete public comments on that topic). It'd also be great to (publicly) hear that someone else actually read the paper and checked whether my notes missed something important or are inaccurate.

Comment by benito on Río Grande: judgment calls · 2019-01-29T11:08:29.304Z · score: 6 (3 votes) · LW · GW

Yeah, this was crossposted from Katja's travel blog.

Comment by benito on On MIRI's new research directions · 2019-01-25T01:10:01.480Z · score: 12 (3 votes) · LW · GW

I think that closed-by-default is a very bad strategy from the perspective of outreach, and the perspective of building a field of AI alignment. But I realise that MIRI is explicitly and wholly focusing on making research progress, for at least the coming few years, and I think overall the whole post and decisions make a lot of sense from this perspective.

Our impression is indeed that well-targeted outreach efforts can be highly valuable. However, attempts at outreach/influence/field-building seem to us to currently constitute a large majority of worldwide research activity that’s motivated by AGI safety concerns,[10] such that MIRI’s time is better spent on taking a straight shot at the core research problems. Further, we think our own comparative advantage lies here, and not in outreach work.[11]

And here are the footnotes:

[10] In other words, many people are explicitly focusing only on outreach, and many others are selecting technical problems to work on with a stated goal of strengthening the field and drawing others into it.
[11] This isn’t meant to suggest that nobody else is taking a straight shot at the core problems. For example, OpenAI’s Paul Christiano is a top-tier researcher who is doing exactly that. But we nonetheless want more of this on the present margin.
Comment by benito on On MIRI's new research directions · 2019-01-25T01:01:03.019Z · score: 32 (5 votes) · LW · GW

I was recently thinking about focus. Some examples:

This tweet:

The internet provides access to an education that the aristocracy of old couldn't have imagined.
It also provides the perfect attack vector for marketers to exploit cognitive vulnerabilities and dominate your attention.
A world-class education is free for the undistractable.

Sam Altman's recent blogpost on How to Be Successful has the following two commands:

3. Learn to think independently
6. Focus

(He often talks about how the main task a startup founder has is to pick the 2 or 3 things to focus on that day of the 100+ things vying for your attention.)

And I found this old quote by the mathematician Grothendieck on Michael Nielsen's blog.

In those critical years I learned how to be alone. [But even] this formulation doesn't really capture my meaning. I didn't, in any literal sense, learn to be alone, for the simple reason that this knowledge had never been unlearned during my childhood. It is a basic capacity in all of us from the day of our birth. However these three years of work in isolation [1945-1948], when I was thrown onto my own resources, following guidelines which I myself had spontaneously invented, instilled in me a strong degree of confidence, unassuming yet enduring in my ability to do mathematics, which owes nothing to any consensus or to the fashions which pass as law. By this I mean to say: to reach out in my own way to the things I wished to learn, rather than relying on the notions of the consensus, overt or tacit, coming from a more or less extended clan of which I found myself a member, or which for any other reason laid claim to be taken as an authority. This silent consensus had informed me both at the lycee and at the university, that one shouldn't bother worrying about what was really meant when using a term like "volume" which was "obviously self-evident", "generally known," "in problematic" etc... it is in this gesture of "going beyond" to be in oneself rather than the pawn of a consensus, the refusal to stay within a rigid circle that others have drawn around one -- it is in this solitary act that one finds true creativity. All other things follow as a matter of course.
Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my "elders" and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle -- while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured) things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.
In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty five years, I can state that their imprint upon the mathematics of our time has not been very profound. They've done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they've remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

Overall, it made me update that MIRI's decision to be closed-by-default is quite sensible. This section seems trivially correct from this point of view.

Focus seems unusually useful for this kind of work
There may be some additional speed-up effects from helping free up researchers’ attention, though we don’t consider this a major consideration on its own.
Historically, early-stage scientific work has often been done by people who were solitary or geographically isolated, perhaps because this makes it easier to slowly develop a new way to factor the phenomenon, instead of repeatedly translating ideas into the current language others are using. It’s difficult to describe how much mental space and effort turns out to be taken up with thoughts of how your research will look to other people staring at you, until you try going into a closed room for an extended period of time with a promise to yourself that all the conversation within it really won’t be shared at all anytime soon.
Once we realized this was going on, we realized that in retrospect, we may have been ignoring common practice, in a way. Many startup founders have reported finding stealth mode, and funding that isn’t from VC outsiders, tremendously useful for focus. For this reason, we’ve also recently been encouraging researchers at MIRI to worry less about appealing to a wide audience when doing public-facing work. We want researchers to focus mainly on whatever research directions they find most compelling, make exposition and distillation a secondary priority, and not worry about optimizing ideas for persuasiveness or for being easier to defend.

How did academia ensure papers were correct in the early 20th Century?

2018-12-29T23:37:35.789Z · score: 79 (20 votes)

Open and Welcome Thread December 2018

2018-12-04T22:20:53.076Z · score: 28 (10 votes)

The Vulnerable World Hypothesis (by Bostrom)

2018-11-06T20:05:27.496Z · score: 47 (17 votes)

Open Thread November 2018

2018-10-31T03:39:41.480Z · score: 17 (6 votes)

Introducing the AI Alignment Forum (FAQ)

2018-10-29T21:07:54.494Z · score: 89 (30 votes)

Quick Thoughts on Generation and Evaluation of Hypotheses in a Community

2018-09-06T01:01:49.108Z · score: 56 (21 votes)

Psychology Replication Quiz

2018-08-31T18:54:54.411Z · score: 49 (14 votes)

Goodhart Taxonomy: Agreement

2018-07-01T03:50:44.562Z · score: 44 (11 votes)

Ben Pace's Shortform Feed

2018-06-27T00:55:58.219Z · score: 11 (3 votes)

Bounded Rationality: Two Cultures

2018-05-29T03:41:49.527Z · score: 23 (4 votes)

Temporarily Out of Office

2018-05-08T21:59:55.021Z · score: 6 (1 votes)

Brief comment on frontpage/personal distinction

2018-05-01T18:53:19.250Z · score: 63 (14 votes)

Form Your Own Opinions

2018-04-28T19:50:18.321Z · score: 61 (15 votes)

Community Page Mini-Guide

2018-04-24T15:04:53.641Z · score: 16 (3 votes)

Hold On To The Curiosity

2018-04-23T07:32:01.960Z · score: 97 (30 votes)

LW Update 04/06/18 – QM Sequence Updated

2018-04-06T08:53:45.560Z · score: 37 (9 votes)

Why Karma 2.0? (A Kabbalistic Explanation)

2018-04-02T20:43:02.032Z · score: 28 (9 votes)

A Sketch of Good Communication

2018-03-31T22:48:59.652Z · score: 150 (50 votes)

LessWrong Launch Party

2018-03-22T02:19:14.468Z · score: 18 (3 votes)

The Costly Coordination Mechanism of Common Knowledge

2018-03-15T20:20:41.566Z · score: 183 (58 votes)

The Building Blocks of Interpretability

2018-03-14T20:42:30.674Z · score: 23 (4 votes)

Editor Mini-Guide

2018-03-11T21:20:59.393Z · score: 44 (11 votes)

Welcome to Oxford University Rationality Community

2018-03-11T20:17:46.086Z · score: 13 (4 votes)

Moderation List (warnings and bans)

2018-03-06T19:18:44.226Z · score: 34 (7 votes)

Extended Quote on the Institution of Academia

2018-03-01T02:58:11.159Z · score: 128 (42 votes)

A model I use when making plans to reduce AI x-risk

2018-01-19T00:21:45.460Z · score: 136 (48 votes)

Field-Building and Deep Models

2018-01-13T21:16:14.523Z · score: 55 (18 votes)

12/31/17 Update: Frontpage Redesign

2018-01-01T03:21:11.408Z · score: 16 (4 votes)

Against Love Languages

2017-12-29T09:51:29.609Z · score: 27 (12 votes)

Comments on Power Law Distribution of Individual Impact

2017-12-29T01:49:26.791Z · score: 54 (15 votes)

Comment on SSC's Review of Inadequate Equilibria

2017-12-01T11:46:13.688Z · score: 30 (9 votes)

Bet Payoff 1: OpenPhil/MIRI Grant Increase

2017-11-09T18:31:06.034Z · score: 40 (12 votes)

Brief comment on featured

2017-10-29T17:12:51.320Z · score: 33 (12 votes)

Maths requires less magical ability than advertised. Also, messy details are messy (and normal).

2017-10-27T03:11:50.773Z · score: 16 (5 votes)

Frontpage Posting and Commenting Guidelines

2017-09-26T06:24:59.649Z · score: 44 (24 votes)

LW 2.0 Site Update: 09/21/17

2017-09-22T07:43:31.740Z · score: 20 (8 votes)

LW 2.0 Site Update: 09/21/17

2017-09-22T07:42:09.542Z · score: 8 (3 votes)

Intellectual Progress Inside and Outside Academia

2017-09-02T23:08:46.690Z · score: 26 (26 votes)

Open Questions in Life-Planning

2017-03-16T00:02:00.000Z · score: 5 (1 votes)

An early stage project generation model

2017-03-16T00:01:00.000Z · score: 7 (2 votes)