Comment by benito on Avoiding Jargon Confusion · 2019-02-18T02:15:49.749Z · score: 4 (2 votes) · LW · GW

Another standard example here is 'nudge'. As we all know, a nudge is when, for example, an organ-donor form requires you to check a box to opt out rather than to opt in. Lots of little nudges build up to an environment where the path of least resistance takes you a certain way (hopefully a pro-social way).

Yet I repeatedly hear the guys who wrote that book mention how the opt-in / opt-out thing isn't an example of a nudge.

Now, I have no idea what they intended, but I sure know I want a name for the way you can shape the environment to make one action easy to take, and so I'm using 'nudge' for that.

Comment by Benito on [deleted post] 2019-02-16T22:57:02.086Z

Hm, good point. Will come back later and see if I can rewrite into a better question.

Comment by benito on Some disjunctive reasons for urgency on AI risk · 2019-02-16T20:00:25.149Z · score: 3 (3 votes) · LW · GW

I think the answer to the first question is that, as with every other (important) industry, the people in that industry will have the time and skill to notice the problems and start working on them. The FOOM argument says that a small group will form a singleton quickly, and so we need to do something special to ensure it goes well; the non-FOOM argument is that AI is an industry like most others, and like most others it will not take over the world in a matter of months.

Comment by benito on Masculine Virtues · 2019-02-13T22:44:49.011Z · score: 2 (1 votes) · LW · GW

+1 to being interested in reading this :)

Comment by benito on The Case for a Bigger Audience · 2019-02-10T19:29:31.744Z · score: 4 (2 votes) · LW · GW

Yeah, I was not saying the posts invented the terms, I was saying they were responsible for my usage of them. I remember at the time reading the post Goodhart Taxonomy and not thinking it was very useful, but then repeatedly referring back to it a great deal in my conversations. I also ended up writing a post based on the four subtypes.

Added: Local Validity and Free Energy are two other examples that obviously weren't coined here, but the discussion here caused me to use quite a lot.

Comment by benito on The Case for a Bigger Audience · 2019-02-10T00:48:01.445Z · score: 4 (2 votes) · LW · GW

Actually in my head I was more counting the tail conversations (e.g. where I use a term 20-30 times), but you're right that the regular conversations will count for most of the area under the curve. Slack, Goodharting, Common Knowledge, are all ones I use quite frequently.

Comment by benito on Book Review: The Structure Of Scientific Revolutions · 2019-02-09T21:07:57.543Z · score: 2 (1 votes) · LW · GW

Interesting analogy here.

Comment by benito on The Case for a Bigger Audience · 2019-02-09T21:02:06.183Z · score: 31 (11 votes) · LW · GW
Have any posts from LW 2.0 generated new conceptual handles for the community like "the sanity waterline"?

As a datapoint, here are a few I've used a bunch of times in real life due to discussing them on LW (2.0). I've used most of these more than 20 times, and a few of them more like 2000 times.

Embedded Agency, Demon Threads, Slack, Combat vs Nurture Culture, Rationality Realism, Local Validity, Common Knowledge, Free Energy, Out to Get You, Fire Alarm, Robustness to Scale, Unrolling Social Metacognition, The Steering Problem, Goodhart's Law.

Comment by benito on The Case for a Bigger Audience · 2019-02-09T20:48:13.316Z · score: 28 (6 votes) · LW · GW

If I put on my startup hat, I hear this proposal as "Have you considered scaling your product by 10x?" A startup is essentially a product that can (and does) scale by multiple orders of magnitude to be useful to massive numbers of people, producing significant value for the consumer population, and if you share attributes of a startup, it's a good question to ask yourself.

That said, many startups scale before their product is ready. I have had people boast to me about how much funding they've gotten for their startup, without giving me a story for how they think they can actually turn that funding into people using their product. Remember that time Medium fired 1/3rd of its staff. There are many stories of startups getting massive amounts of funding and then crashing. So you don't want to scale prematurely.

To pick something very concrete, one question you could ask is "If I told you that LW had gotten 10x the comments this quarter, do you update that we'd made 10x or even 3x progress on the art of human rationality and/or AI alignment (relative to the amount of progress we made on LW the quarter before)?" I think that isn't implausible, but I think that it's not obvious, and I think there are other things to focus on. To give a very concrete example that's closer to work we've done lately, if you heard that "LessWrong had gotten 10x the answers to questions of >50 karma this quarter", I think I'd be marginally more confident that core intellectual progress had been made, but still, that metric is obviously very goodhart-able.

A second and related reason to be skeptical of focusing on moving comments from 19 to 179 at the current stage (especially if I put on my 'community manager hat') is a worry about wasting people's time. In general, we don't want LessWrong to be a website that many core members of the community are using 10 hours per day. Becoming addictive and causing all researchers to be on it all day could easily be a net negative contribution to the world. While none of your recommendations were about addictiveness, there are related ways of increasing the number of comments, such as showing a user's karma score on every page, like LW 1.0 did.

Anyway, those are some arguments against. I overall feel like we're in the 'figuring out the initial idea and product' stage rather than the 'execute' stage, which is where my thoughts are spent presently. I'm interested in more things like creating basic intro texts in AI alignment, creating new ways of knowing what ideas are needed on the site, and focusing generally on the end of the pipeline of intellectual progress right now, before focusing on getting more people spending their time on the site. I do think I'd quickly change my mind if net engagement on the site were decreasing, but my current sense is that it is slowly increasing.

Thoughts?

Comment by benito on EA grants available (to individuals) · 2019-02-07T16:14:40.943Z · score: 10 (6 votes) · LW · GW

Just a short note to say that CEA’s “EA Grants” programme is funded in part by OpenPhil.

https://www.openphilanthropy.org/giving/grants/centre-effective-altruism-general-support-2017#Budget_and_room_for_more_funding

Comment by benito on Thoughts on Ben Garfinkel's "How sure are we about this AI stuff?" · 2019-02-07T01:46:38.072Z · score: 2 (1 votes) · LW · GW
The criticism is expecting counter-criticism.

I might slightly alter to one of

The critique-author commits to writing a response post 2-4 weeks later responding to the comments, or alternatively a response post 1-2 months later responding to all posts on LW with >20 karma that critique the initial post.
Comment by benito on Is the World Getting Better? A brief summary of recent debate · 2019-02-06T17:46:24.816Z · score: 6 (4 votes) · LW · GW

The summary is great, thanks a lot!

Comment by benito on Conclusion to the sequence on value learning · 2019-02-06T17:17:02.942Z · score: 2 (1 votes) · LW · GW

Related: Gwern wrote a post arguing that people have an incentive to build a goal-directed AI over a non-goal directed AI. See the references here.

Comment by benito on If Rationality can be likened to a 'Martial Art', what would be the Forms? · 2019-02-06T16:00:47.535Z · score: 6 (3 votes) · LW · GW

Probabilistic forecasting (for evaluative thinking) and Fermi estimates (for generative thinking).
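For the Fermi-estimate half, here's a minimal worked sketch (every number below is a made-up guess; the value is in the decomposition, not the answer):

```python
# Toy Fermi estimate: how many piano tuners work in a city of ~1,000,000 people?
households = 1_000_000 / 2.5          # guess: ~2.5 people per household
pianos = households * 0.05            # guess: 1 in 20 households owns a piano
tunings_per_year = pianos * 1         # guess: each piano gets tuned about once a year
tunings_per_tuner = 4 * 5 * 50        # guess: 4 tunings/day, 5 days/week, 50 weeks/year
tuners = tunings_per_year / tunings_per_tuner

print(f"~{tuners:.0f} piano tuners")  # ~20, plausibly within an order of magnitude
```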

(notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach

2019-02-04T22:08:34.337Z · score: 46 (16 votes)
Comment by benito on (notes on) Policy Desiderata for Superintelligent AI: A Vector Field Approach · 2019-02-04T22:00:43.336Z · score: 17 (8 votes) · LW · GW

Reflections

I definitely am not quite sure what the epistemic state of the paper is, or even its goal. Bostrom, Dafoe and Flynn keep mentioning that this paper is not a complete list of desiderata, but I don't know what portion of key desiderata they think they've hit, or why they think it's worthwhile at this stage to pre-emptively list the desiderata that currently seem important.

(Added: My top hypothesis is that Bostrom was starting a policy group with Dafoe as its head, and thought to himself "What are the actual policy implications of the work in my book?" and then wrote them down, without expecting it to be complete, just an obvious starting point.)

As to my thoughts on whether the recommendations in the paper seem good... to be honest, it all felt so reasonable and simple (added: this is a good thing). There were no big leaps of inference. It didn't feel surprising to me. But here are a few updates/reflections.

I have previously run the thought experiment "What would I do if I were at the start, or just before the start, of the industrial revolution?" Thoughts pertaining to massive turbulence, redistribution and concentration, and adaptability seemed like natural focal concerns to me, but I had not made them as precise or as clear as the paper had. Then again I'd been thinking more about what I as an individual should do, not how a government or larger organisation should approach the problem. I definitely hadn't thought about population dynamics in that context (which were also a big deal after the industrial revolution - places like England grew in population by an order of magnitude, requiring major infrastructural changes in politics, education, industry, and elsewhere).

I think that the technical details of AI are most important in the sections on Efficiency and Population. The sections on Allocation and Process I would expect to apply to any technological revolution (industrial, agricultural, etc).

I'm not sure that this is consistent with his actions, but I think it's likely that Ben from yesterday would've said the words "In order to make sensible progress on AI policy you require a detailed understanding of the new technology". I realise now that, while that understanding is indeed required to get the overall picture right, there is progress to be made that merely takes heed of this being a technological revolution of historic proportions, and doesn't depend much on which particular technological revolution we're going through.

I've seen another discussion here, along with the Vulnerable World Hypothesis paper (LW discussion here), of the need for the ability to execute a massive increase in coordination. I'm definitely going to think more about 'conditional stabilization', how exactly it follows from the conceptual space of thinking about singletons and coordination, and what possible things it might look like (global surveillance seems terrible on the face of it, and I wonder if moving straight to that is premature; I think there are probably a lot more granular ways of thinking about surveillance).

In general this paper is full of very cautious and careful conceptual work, based on simple arguments and technical understandings of AI and coordination. In general I don't trust many people to do this without vetting the ideas in depth myself or without seeing a past history of their success. Bostrom certainly ticks the latter box and weakly ticks the former box for me (I've yet to personally read enough of his writings to say anything stronger there), and given that he's a primary author on this paper, I feel epistemically safe taking on these framings without 30-100 hours of further examination.

I hope to be able to spend a similar effort summarising the many other strategic papers Bostrom and others at the FHI have produced.

Feedback

For future posts of a similar nature, please PM me if you have any easy changes that would've made this post more useful to you / made it easier to get the info you needed (I will delete public comments on that topic). It'd also be great to (publicly) hear that someone else actually read the paper and checked whether my notes missed something important or are inaccurate.

Comment by benito on Río Grande: judgment calls · 2019-01-29T11:08:29.304Z · score: 6 (3 votes) · LW · GW

Yeah, this was crossposted from Katja's travel blog.

Comment by benito on On MIRI's new research directions · 2019-01-25T01:10:01.480Z · score: 12 (3 votes) · LW · GW

I think that closed-by-default is a very bad strategy from the perspective of outreach, and the perspective of building a field of AI alignment. But I realise that MIRI is explicitly and wholly focusing on making research progress, for at least the coming few years, and I think overall the whole post and decisions make a lot of sense from this perspective.

Our impression is indeed that well-targeted outreach efforts can be highly valuable. However, attempts at outreach/influence/field-building seem to us to currently constitute a large majority of worldwide research activity that’s motivated by AGI safety concerns,[10] such that MIRI’s time is better spent on taking a straight shot at the core research problems. Further, we think our own comparative advantage lies here, and not in outreach work.[11]

And here's the footnotes:

[10] In other words, many people are explicitly focusing only on outreach, and many others are selecting technical problems to work on with a stated goal of strengthening the field and drawing others into it.
[11] This isn’t meant to suggest that nobody else is taking a straight shot at the core problems. For example, OpenAI’s Paul Christiano is a top-tier researcher who is doing exactly that. But we nonetheless want more of this on the present margin.
Comment by benito on On MIRI's new research directions · 2019-01-25T01:01:03.019Z · score: 32 (5 votes) · LW · GW

I was recently thinking about focus. Some examples:

This tweet:

The internet provides access to an education that the aristocracy of old couldn't have imagined.
It also provides the perfect attack vector for marketers to exploit cognitive vulnerabilities and dominate your attention.
A world-class education is free for the undistractable.

Sam Altman's recent blogpost on How to Be Successful has the following two commands:

3. Learn to think independently
6. Focus

(He often talks about the main task a startup founder has is to pick the 2 or 3 things to focus on that day of the 100+ things vying for your attention.)

And I found this old quote by the mathematician Grothendieck on Michael Nielsen's blog.

In those critical years I learned how to be alone. [But even] this formulation doesn't really capture my meaning. I didn't, in any literal sense, learn to be alone, for the simple reason that this knowledge had never been unlearned during my childhood. It is a basic capacity in all of us from the day of our birth. However these three years of work in isolation [1945-1948], when I was thrown onto my own resources, following guidelines which I myself had spontaneously invented, instilled in me a strong degree of confidence, unassuming yet enduring in my ability to do mathematics, which owes nothing to any consensus or to the fashions which pass as law. By this I mean to say: to reach out in my own way to the things I wished to learn, rather than relying on the notions of the consensus, overt or tacit, coming from a more or less extended clan of which I found myself a member, or which for any other reason laid claim to be taken as an authority. This silent consensus had informed me both at the lycee and at the university, that one shouldn't bother worrying about what was really meant when using a term like "volume" which was "obviously self-evident", "generally known," "in problematic" etc... it is in this gesture of "going beyond" to be in oneself rather than the pawn of a consensus, the refusal to stay within a rigid circle that others have drawn around one -- it is in this solitary act that one finds true creativity. All others things follow as a matter of course.
Since then I’ve had the chance in the world of mathematics that bid me welcome, to meet quite a number of people, both among my "elders" and among young people in my general age group who were more brilliant, much more ‘gifted’ than I was. I admired the facility with which they picked up, as if at play, new ideas, juggling them as if familiar with them from the cradle -- while for myself I felt clumsy, even oafish, wandering painfully up an arduous track, like a dumb ox faced with an amorphous mountain of things I had to learn (so I was assured) things I felt incapable of understanding the essentials or following through to the end. Indeed, there was little about me that identified the kind of bright student who wins at prestigious competitions or assimilates almost by sleight of hand, the most forbidding subjects.
In fact, most of these comrades who I gauged to be more brilliant than I have gone on to become distinguished mathematicians. Still from the perspective of thirty or thirty five years, I can state that their imprint upon the mathematics of our time has not been very profound. They've done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they've remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era. To have broken these bounds they would have to rediscover in themselves that capability which was their birthright, as it was mine: The capacity to be alone.

Overall, it made me update that MIRI's decision to be closed-by-default is quite sensible. This section seems trivially correct from this point of view.

Focus seems unusually useful for this kind of work
There may be some additional speed-up effects from helping free up researchers’ attention, though we don’t consider this a major consideration on its own.
Historically, early-stage scientific work has often been done by people who were solitary or geographically isolated, perhaps because this makes it easier to slowly develop a new way to factor the phenomenon, instead of repeatedly translating ideas into the current language others are using. It’s difficult to describe how much mental space and effort turns out to be taken up with thoughts of how your research will look to other people staring at you, until you try going into a closed room for an extended period of time with a promise to yourself that all the conversation within it really won’t be shared at all anytime soon.
Once we realized this was going on, we realized that in retrospect, we may have been ignoring common practice, in a way. Many startup founders have reported finding stealth mode, and funding that isn’t from VC outsiders, tremendously useful for focus. For this reason, we’ve also recently been encouraging researchers at MIRI to worry less about appealing to a wide audience when doing public-facing work. We want researchers to focus mainly on whatever research directions they find most compelling, make exposition and distillation a secondary priority, and not worry about optimizing ideas for persuasiveness or for being easier to defend.
Comment by benito on The 3 Books Technique for Learning a New Skilll · 2019-01-25T00:32:06.638Z · score: 4 (2 votes) · LW · GW

I curated this for being a handy abstraction for learning. As comments downthread have said, things like distinguishing positive from negative reviews, and the actual examples you gave (procrastination and calculus), were both really helpful. And the post was as short+simple as it could be.

(I'd love to see more examples in the comments.)

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-01-21T23:59:11.111Z · score: 6 (3 votes) · LW · GW

Interesting. Can you talk a bit more about how much time you actually devoted to thinking about whitelisting in the lead up to the work that was awarded, and whether you considered it your top priority at the time?

Added: Was it the top idea in your mind for any substantial period of time?

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-01-21T23:42:43.758Z · score: 8 (4 votes) · LW · GW

I observe that, of the 16 awards of money from the AI alignment prize, as far as I can see none of the winners had a full-time project that wasn't working on AI alignment (i.e. they either worked on alignment full time, or else were financially supported in a way that gave them the space to devote their attention to it fully for the purpose of the prize). I myself, just now introspecting on why I didn't apply, didn't S1-expect to be able to produce anything I expected to win a prize without ~1 month of work, and I have to work on LessWrong. This suggests some natural interventions (e.g. somehow giving out smaller prizes for good efforts even if they weren't successful).

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-01-20T19:23:46.775Z · score: 7 (3 votes) · LW · GW

Okay. I’ve added myself a calendar reminder.

Comment by benito on Announcement: AI alignment prize round 4 winners · 2019-01-20T19:22:10.619Z · score: 11 (4 votes) · LW · GW

Woop! All the winners are awesome and I’m glad they’re getting money for making progress on the most important problem.

Comment by benito on Why not tool AI? · 2019-01-20T09:17:31.373Z · score: 4 (2 votes) · LW · GW

Thanks, this example was so big and recent that I forgot it. Have added it to my answer.

Comment by benito on Why not tool AI? · 2019-01-19T20:57:15.822Z · score: 42 (14 votes) · LW · GW

You mention having looked through the literature; in case you missed any, here's what I think of as the standard resources on this topic.

All are very worth reading.

Comment by benito on 1/9/2018 Update - Frontpage Views · 2019-01-14T20:02:48.403Z · score: 4 (2 votes) · LW · GW

People write content, and then the mods discuss it and choose what to promote to the curated section, which is easier for more people to follow (it updates slowly) and gets more visibility. Mods also write a brief explanation in a comment for why they curated that particular post.

Comment by benito on Littlewood's Law and the Global Media · 2019-01-13T10:37:02.783Z · score: 7 (3 votes) · LW · GW

Wow. Reading the story of Air France 447 was intense (and horrifying). It's remarkable how many little things went wrong that didn't need to. It felt weirdly like reality was conspiring to make the plane crash ("Just before the critical moment, let's send the most experienced pilot off for a nap" and "During the critical moment, let's make sure the pilot in command mentions all the information except the critical piece that could save them" etc), even though I recognise that it's all a selection effect on picking the one crash out of 10s of billions of very safe flights. I'll definitely keep Littlewood's Law in mind when discussing the news in the future.

The coping mechanisms for the news feel optimistic. Things like 'screen size' or 'typography' in proportion to base rate seem unlikely to produce a good product that people can use. I think to solve news you probably need a high-effort process that gathers lots of information, doesn't have to compete quickly for attention, and can sort through it all and make a decision about what to publish, rather than trusting all the consumers of news to do the statistical data gathering themselves and each individually largely get it right. (A while back I saw this new org building itself around a subscription-based system rather than an ads-based system, and after ~20 mins of exploring their site I paid to be a member, though I don't expect to read it.)

I've put a bunch of work myself into basically not updating, when reading a news article, on anything except the editorial process of the news org. It feels to me related to when a character reads a news article in a movie, or perhaps when I read fanfiction and learn new info about a character that isn't canon. I'm definitely building a web of implications and associations, but it's very disconnected from my model of the real world. And obviously, more than that, I've put a bunch of work into not being exposed to the news in general.

Unrelated: I hadn't seen this before, and it is a fun resource to explore.

Comment by benito on Open Thread January 2019 · 2019-01-13T10:26:09.048Z · score: 20 (7 votes) · LW · GW

I'd be keen to see other examples of the What, How and Why books for an area of expertise that you know about. I found the two examples in the post (procrastination and calculus) really useful for framing how I even think about those areas. (If we get some more good examples, maybe we could make another post like the 'best textbooks' post, but for this, to collect them.)

Comment by benito on What makes people intellectually active? · 2019-01-11T22:22:24.627Z · score: 8 (4 votes) · LW · GW

I've curated this question. It has lots of properties of great discussions: someone posting a genuine question that they're confused about; lots of people chipping in with ideas and suggestions; someone distilling the ideas into a single place (i.e. the currently top-voted answer by Kaj), clearly making progress. And this is all about a key problem, of becoming intellectually generative.

I've been looking for a question to curate since we launched, and while I thought the type I'd curate would be one where someone definitively answered a concrete question, this one has surprised me with the quality of its discussion. Kaj's answer, the answers that Kaj summarised, as well as An1lam's and Lukas_Gloor's, should consider themselves part of this curation (and people reading this post will read your answers). Thanks all!

Comment by benito on Open Thread January 2019 · 2019-01-11T20:43:40.573Z · score: 5 (3 votes) · LW · GW

Welcome :) I wish you well in practising the art of Bayescraft.

Comment by benito on Book Review: The Structure Of Scientific Revolutions · 2019-01-09T08:25:56.443Z · score: 9 (5 votes) · LW · GW

David Chapman recommends reading Kuhn's postscript to the second edition, which responds to criticism from the first 7 years after the book's publication. Here's a PDF of the book, with the postscript starting at page 174. I'll add further commentary once I read it.

Comment by benito on Book Review: The Structure Of Scientific Revolutions · 2019-01-09T08:08:10.848Z · score: 10 (5 votes) · LW · GW

This post makes a much stronger case for the anti-Bayes side in Eliezer's "Science Doesn't Trust Your Rationality".

In my mind I definitely hold romantic notions of pure-and-perfect scientists, whose work makes crisp predictions we can all see and that changes everyone's minds. Yet it appears that Kuhn paints a far messier picture of science, to the extent that at no point does anyone, really, know what they're doing, and that their high-level models shouldn't be trusted to reach sane conclusions there (nor should I trust myself).

I update significantly toward the not-trusting-myself position after reading this. I update downward on us getting AGI right the first time (i.e. building a paradigm that will produce aligned AGI that we trust, where that trust is well-calibrated). I also increase my desire to study the history of science, math, philosophy, and knowledge.

Comment by benito on Which approach is most promising for aligned AGI? · 2019-01-08T06:45:12.024Z · score: 2 (1 votes) · LW · GW

Do you mean approach for building it or general alignment research avenue? For example, agent foundations is not an approach to building aligned AGI, it's an approach to understanding intelligence better than may later significantly help in building aligned AGI.

Comment by benito on Failures of UDT-AIXI, Part 1: Improper Randomizing · 2019-01-07T21:26:11.632Z · score: 14 (4 votes) · LW · GW

This was originally posted to the AI Alignment Forum, where researchers are typically engaged in an advanced technical discussion. AI Alignment Forum posts assume these concepts; they aren't new terms. This is similar to how pure math papers don't explain what groups are, what homomorphisms are, etc.

If you want to get up to speed so that you can understand and contribute, I suggest some googling (here's the wiki page and LW wiki page for AIXI). There's also a lot of historical discussion of AIXI on LW that you can read.

(I think we will update the posts that are from AIAF to include info at the top about them being from the AIAF.)

Comment by benito on Two More Decision Theory Problems for Humans · 2019-01-04T21:10:45.135Z · score: 9 (5 votes) · LW · GW
I believe it is common

Name one example? :)

Comment by benito on "Traveling Salesman" · 2019-01-03T17:38:42.809Z · score: 9 (5 votes) · LW · GW

I think this would be a better fit for the open thread.

Comment by benito on What exercises go best with 3 blue 1 brown's Linear Algebra videos? · 2019-01-01T22:29:40.602Z · score: 6 (3 votes) · LW · GW

Previous LessWrong reviews of Linear Algebra Done Right by Nate Soares and TurnTrout (both highly detailed).

Here's a summary passage from Nate:

This book did a far better job of introducing the main concepts of linear algebra to me than did my Discrete Mathematics course. I came away with a vastly improved intuition for why the standard tools of linear algebra actually work.
I can personally attest that Linear Algebra Done Right is a great way to un-memorize passwords and build up that intuition. If you know how to compute a determinant but you have no idea what it means, then I recommend giving this book a shot.
I imagine that Linear Algebra Done Right would also be a good introduction for someone who hasn't done any linear algebra at all. 

TurnTrout's review has links to solutions to the exercises (he did ~100% of the exercises, and talks about them a bit, which is relevant for Raemon's question).

Comment by benito on How did academia ensure papers were correct in the early 20th Century? · 2018-12-31T19:01:03.239Z · score: 8 (2 votes) · LW · GW

You’re right that I misspoke to imply they explicitly claim no errors, I shall correct that in the OP.

However, if I knew a community where nobody ever disagreed with anyone else nor admitted error of their own, I would think this odd. Especially if they purported to be doing science.

Comment by benito on How did academia ensure papers were correct in the early 20th Century? · 2018-12-30T05:41:55.225Z · score: 4 (2 votes) · LW · GW

Oli's comment is right. Another way of phrasing the question is:

Why do journals purport to contain no errors in any of the papers? Is it because they're lying about the quality of the papers? Or if there's a reliable process that removed errors from 100% of papers, can someone tell me what that process was?

Edit: I've added this to the OP.

How did academia ensure papers were correct in the early 20th Century?

2018-12-29T23:37:35.789Z · score: 79 (20 votes)
Comment by benito on Sunscreen. When? Why? Why not? · 2018-12-27T22:50:01.555Z · score: 5 (3 votes) · LW · GW

ahaha

Added: I fixed it again.

Comment by benito on Sunscreen. When? Why? Why not? · 2018-12-27T22:11:06.687Z · score: 3 (2 votes) · LW · GW

(fixed the formatting a bit. if you want to use the markdown editor, go to your settings and check the box for that, but by default you can just hover over text and click the link button to put a link behind some text.)

Comment by benito on Cognitive Bias of AI Researchers? · 2018-12-22T09:30:50.582Z · score: 6 (5 votes) · LW · GW

I feel a bit confused reading this. The notion of an expected utility maximiser is standard in game theory and economics, and is (mathematically) defined as having a preference (ordering) over states (strictly, over lotteries of outcomes) that is complete, transitive, continuous and independent.
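Spelled out a little (this is just the standard von Neumann-Morgenstern statement, nothing specific to the OP): if a preference relation $\succeq$ over lotteries satisfies those four axioms, then there is a utility function $u$ such that

$$A \succeq B \iff \mathbb{E}_{A}[u(x)] \ge \mathbb{E}_{B}[u(x)],$$

with $u$ unique up to positive affine transformation, and "maximising expected utility" just means picking the option that is highest in that ordering.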

Did you not really know about the concept when you wrote the OP? Perhaps you've mostly done set theory and programming, and not run into the game theory and economics models?

Or maybe you find the concept unsatisfactory in some other way? I agree that it can give licence to bring in all of one's standard intuitions surrounding goals and such, and Rohin Shah has written a post trying to tease those apart. Nonetheless, I hear that the core concept is massively useful in economics and game theory, suggesting it's still a very useful abstraction.

Similarly, concepts like 'environment' are often specified mathematically. I once attended an 'intro to AI' course at a top university, and it repeatedly defined the 'environment' (the state space) of a search algorithm in toy examples - the course had me code A* search into a pretend 'mars rover' to drive around and find its goal. Things like defining a graph, the weights of its edges, etc, or otherwise enumerating the states and how they connect to each other, are ways of defining such concepts.
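For concreteness, here's a minimal sketch of the kind of toy setup I mean (the waypoint names, weights and heuristic values below are invented for illustration, not the actual course assignment):

```python
import heapq

# Toy 'environment': a weighted graph of waypoints the pretend rover can drive between.
graph = {
    "start": {"crater": 2, "ridge": 5},
    "crater": {"ridge": 1, "goal": 7},
    "ridge": {"goal": 3},
    "goal": {},
}
# Heuristic: an optimistic guess of the remaining distance to the goal (admissible).
h = {"start": 4, "crater": 3, "ridge": 2, "goal": 0}

def a_star(start, goal):
    # Frontier entries are (estimated total cost, cost so far, node, path so far).
    frontier = [(h[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nbr, w in graph[node].items():
            new_cost = cost + w
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                heapq.heappush(frontier, (new_cost + h[nbr], new_cost, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("start", "goal"))  # (['start', 'crater', 'ridge', 'goal'], 6)
```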

If you have any examples of people misusing the words - situations where an argument is made by association, and falls if you replace the common word with a technically precise definition - that would also be interesting.

Comment by benito on Solstice Album Crowdfunding · 2018-12-18T21:17:33.015Z · score: 2 (1 votes) · LW · GW

IndieGogo link is broken, and I can't find the right one using google.

Comment by benito on 2018 AI Alignment Literature Review and Charity Comparison · 2018-12-18T06:11:44.532Z · score: 20 (8 votes) · LW · GW

As usual, this is an excellent resource. Thanks so much. I've PM'd you with about 5 typos / minor errors.

Also, I'm stealing this:

In an early proof of the viability of cryonics, LessWrong has been brought back to life.
Comment by benito on Multi-agent predictive minds and AI alignment · 2018-12-16T18:57:57.136Z · score: 4 (2 votes) · LW · GW
you probably need to understand predictive processing better than what you get from reading the SSC article

I'm a bit confused then that the SSC article is your citation for this concept. Did you just read the SSC article? Or if you didn't, could you link to maybe the things you read? Also, writing a post that assumes this concept when there's no sufficient explanation of it on the web or in the community seems suboptimal; maybe consider writing that post first. Then again, maybe you were trying to make a more general point about the brain not being an agent, and you could factor out the predictive processing concept and give a different example of a brain architecture that doesn't have a utility function.

Btw, if that is your goal, it doesn't speak to my cruxes for why reasoning about an AI with a utility function makes sense, which are discussed here and pointed to here (something like 'there is a canonical way to scale me up even if it's not obvious').

Comment by benito on Can I use Less Wrong branding in youtube videos? · 2018-12-14T08:21:02.934Z · score: 9 (5 votes) · LW · GW

+1 on not wanting people to be confused about the relationship between the youtube channel and the site, and a channel called 'LessWrong' that wasn't by someone who was at least a major writer on the site would lead to confusion.

I am generally pro marketing not by name but by example. Living the principles rather than cheering them. Lots of videos asking people why they believe what they believe, spending the time doing scholarship, asking the questions most likely to change your mind, and all the rest, as opposed to overly talking about how great rationality is or using lots of buzzwords.

Anyhow, thanks for asking - and best of skill with the videos! (Drop a link here once you've put some up.)

Comment by benito on Act of Charity · 2018-12-13T20:10:03.338Z · score: 16 (6 votes) · LW · GW

I've curated this post. It makes a lot of really interesting points about internal honesty and societal incentives, and in a very vivid way. The dialogue really pulled on different parts of my psyche; I read it twice, a few weeks apart, and one time I thought the worker was wrong and the other time I thought the worker was right. I expect I'll be linking people to this dialogue many times in the future.

Comment by benito on Act of Charity · 2018-12-13T19:45:04.927Z · score: 3 (2 votes) · LW · GW

Huh, I notice I've not explicitly estimated my timeline distribution for massive international institutional collapse, and that I want to do that. Do you have any links to places where others/you have thought about it?

Comment by benito on What precisely do we mean by AI alignment? · 2018-12-09T07:13:55.103Z · score: 29 (10 votes) · LW · GW

I'm sure someone else is able to write a more thoughtful/definitive answer, but I'll try here to point to two key perspectives on the problem that are typically discussed under this name.

The first perspective is what Rohin Shah has called the motivation-competence split of AGI. One person who's written about this perspective very clearly is Paul Christiano, so I'll quote him:

When I say an AI A is aligned with an operator H, I mean:
A is trying to do what H wants it to do.
The “alignment problem” is the problem of building powerful AI systems that are aligned with their operators.
This is significantly narrower than some other definitions of the alignment problem, so it seems important to clarify what I mean.
In particular, this is the problem of getting your AI to try to do the right thing, not the problem of figuring out which thing is right. An aligned AI would try to figure out which thing is right, and like a human it may or may not succeed.

I believe the general idea is to build a system that is trying to help you, and to not run a computation that is acting adversarially in any situation. Correspondingly, Paul Christiano's research often takes the frame of the following problem:

The steering problem: Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?

Here's some more writing on this perspective:

The second perspective is what Rohin Shah has called the definition-optimization split of AGI. One person who's written about this perspective very clearly is Nate Soares, so I'll quote him:

Imagine you have a Jupiter-sized computer and a very simple goal: Make the universe contain as much diamond as possible. The computer has access to the internet and a number of robotic factories and laboratories, and by “diamond” we mean carbon atoms covalently bound to four other carbon atoms. (Pretend we don’t care how it makes the diamond, or what it has to take apart in order to get the carbon; the goal is to study a simplified problem.) Let’s say that the Jupiter-sized computer is running python. How would you program it to produce lots and lots of diamond?
As it stands, we do not yet know how to program a computer to achieve a goal such as that one.
We couldn’t yet create an artificial general intelligence by brute force, and this indicates that there are parts of the problem we don’t yet understand.

There are many AI systems you could build today that would help with this problem, and furthermore, given that much compute, you could likely use it for something useful to the goal of making as much diamond as possible. But there is no single program that will continue to usefully create as much diamond as possible as you give it increasing computational power - at some point it will do something weird and unhelpful (cf. Bostrom's "Perverse Instantiations", and Paul Christiano on What does the universal prior actually look like?).

Again, Nate:

There are two types of open problem in AI. One is figuring how to solve in practice problems that we know how to solve in principle. The other is figuring out how to solve in principle problems that we don’t even know how to brute force yet.

The question of aligning an AI is how to create it such that, if the AI you created were to become far more intelligent than any system that has ever existed (including humans), it would continue to do the useful thing you asked it to do, and not do something else.

Here's some more writing on this perspective.

---

Overall, I think it's the case that neither of these two perspectives is cleanly formalised or well-specified, and that's a key part of the problem with making sure AGI goes well - being able to clearly state exactly what, in the long run, we're confused about in how to build an AGI is half the battle.

Personally, when I hear 'AI alignment' in a party/event/blog, I expect a discussion of AGI design with the following assumption:

The key bottleneck to ensuring an existential win when creating AGI that is human-level-and-above is that we need to do advance work on technical problems that we're confused about. (This is to be contrasted with e.g. social coordination among companies and governments about how to use the AGI.)

Precisely what we're confused about, and which research will resolve our confusion, is an open question. The word 'alignment' captures the spirit of certain key ideas about what problems need solving, but is not a finished problem statement.

Added: Another quote from Nate Soares on the definition of alignment:

Or, to put it briefly: precisely naming a problem is half the battle, and we are currently confused about how to precisely name the alignment problem.
For an alternative attempt to name this concept, refer to Eliezer’s rocket alignment analogy. For a further discussion of some of the reasons today’s concepts seem inadequate for describing an aligned intelligence with sufficient precision, see Scott and Abram’s recent write-up.

So it is not Nate's opinion that the problem is well-specified at present.

Comment by benito on Transhumanists Don't Need Special Dispositions · 2018-12-09T01:53:54.096Z · score: 5 (5 votes) · LW · GW

I like sex.

Open and Welcome Thread December 2018

2018-12-04T22:20:53.076Z · score: 28 (10 votes)

The Vulnerable World Hypothesis (by Bostrom)

2018-11-06T20:05:27.496Z · score: 47 (17 votes)

Open Thread November 2018

2018-10-31T03:39:41.480Z · score: 17 (6 votes)

Quick Thoughts on Generation and Evaluation of Hypotheses in a Community

2018-09-06T01:01:49.108Z · score: 56 (21 votes)

Psychology Replication Quiz

2018-08-31T18:54:54.411Z · score: 49 (14 votes)

Goodhart Taxonomy: Agreement

2018-07-01T03:50:44.562Z · score: 44 (11 votes)

Ben Pace's Shortform Feed

2018-06-27T00:55:58.219Z · score: 11 (3 votes)

Bounded Rationality: Two Cultures

2018-05-29T03:41:49.527Z · score: 23 (4 votes)

Temporarily Out of Office

2018-05-08T21:59:55.021Z · score: 6 (1 votes)

Brief comment on frontpage/personal distinction

2018-05-01T18:53:19.250Z · score: 63 (14 votes)

Form Your Own Opinions

2018-04-28T19:50:18.321Z · score: 61 (15 votes)

Community Page Mini-Guide

2018-04-24T15:04:53.641Z · score: 16 (3 votes)

Hold On To The Curiosity

2018-04-23T07:32:01.960Z · score: 94 (29 votes)

LW Update 04/06/18 – QM Sequence Updated

2018-04-06T08:53:45.560Z · score: 37 (9 votes)

Why Karma 2.0? (A Kabbalistic Explanation)

2018-04-02T20:43:02.032Z · score: 28 (9 votes)

A Sketch of Good Communication

2018-03-31T22:48:59.652Z · score: 150 (50 votes)

The Costly Coordination Mechanism of Common Knowledge

2018-03-15T20:20:41.566Z · score: 183 (58 votes)

The Building Blocks of Interpretability

2018-03-14T20:42:30.674Z · score: 23 (4 votes)

Editor Mini-Guide

2018-03-11T21:20:59.393Z · score: 44 (11 votes)

Moderation List (warnings and bans)

2018-03-06T19:18:44.226Z · score: 34 (7 votes)

Extended Quote on the Institution of Academia

2018-03-01T02:58:11.159Z · score: 128 (42 votes)

A model I use when making plans to reduce AI x-risk

2018-01-19T00:21:45.460Z · score: 136 (48 votes)

Field-Building and Deep Models

2018-01-13T21:16:14.523Z · score: 55 (18 votes)

12/31/17 Update: Frontpage Redesign

2018-01-01T03:21:11.408Z · score: 16 (4 votes)

Against Love Languages

2017-12-29T09:51:29.609Z · score: 27 (12 votes)

Comments on Power Law Distribution of Individual Impact

2017-12-29T01:49:26.791Z · score: 54 (14 votes)

Comment on SSC's Review of Inadequate Equilibria

2017-12-01T11:46:13.688Z · score: 30 (9 votes)

Bet Payoff 1: OpenPhil/MIRI Grant Increase

2017-11-09T18:31:06.034Z · score: 40 (12 votes)

Brief comment on featured

2017-10-29T17:12:51.320Z · score: 33 (12 votes)

Maths requires less magical ability than advertised. Also, messy details are messy (and normal).

2017-10-27T03:11:50.773Z · score: 16 (5 votes)

Frontpage Posting and Commenting Guidelines

2017-09-26T06:24:59.649Z · score: 44 (24 votes)

LW 2.0 Site Update: 09/21/17

2017-09-22T07:43:31.740Z · score: 20 (8 votes)

LW 2.0 Site Update: 09/21/17

2017-09-22T07:42:09.542Z · score: 8 (3 votes)

Intellectual Progress Inside and Outside Academia

2017-09-02T23:08:46.690Z · score: 26 (26 votes)

Open Questions in Life-Planning

2017-03-16T00:02:00.000Z · score: 5 (1 votes)

An early stage project generation model

2017-03-16T00:01:00.000Z · score: 7 (2 votes)

An early stage prioritisation model

2017-03-16T00:00:00.000Z · score: 5 (1 votes)

[Repost] My First Insight After CFAR, or the Art of Not Needing to Ask Permission

2015-08-20T23:00:00.000Z · score: 5 (1 votes)

[Repost] CFAR, or The Art of Winning at Life

2015-07-21T23:00:00.000Z · score: 7 (2 votes)

P/S/A - Sam Harris offering money for a little good philosophy

2013-09-01T18:36:40.217Z · score: 12 (12 votes)

[LINK] Does Time Exist? With Julian Barbour

2013-05-23T14:38:57.275Z · score: -1 (6 votes)

LW Study Group Facebook Page

2013-04-08T21:15:40.587Z · score: 16 (17 votes)