Comment by ricraz on Three Kinds of Research Documents: Clarification, Explanatory, Academic · 2019-02-15T17:43:42.288Z · score: 10 (3 votes) · LW · GW

Agreed that "clarification" is confusing. What about "exploration"?

Comment by ricraz on Arguments for moral indefinability · 2019-02-13T13:44:18.278Z · score: 5 (3 votes) · LW · GW

Thanks for the detailed comments! I only have time to engage with a few of them:

Most of this is underdefined, and that’s unsettling at least in some (but not necessarily all) cases, and if we want to make it less underdefined, the notion of 'one ethics' has to give.

I'm not that wedded to 'one ethics', more like 'one process for producing moral judgements'. But note that if we allow arbitrariness of scope, then 'one process' can be a piecewise function which uses one subprocess in some cases and another in others.

I find myself having similarly strong meta-level intuitions about wanting to do something that is "non-arbitrary" and in relevant ways "simple/elegant". ...motivationally it feels like this intuition is importantly connected to what makes it easy for me to go "all-in" for my ethical/altruistic beliefs.

I agree that these intuitions are very strong, and they are closely connected to motivational systems. But so are some object-level intuitions like "suffering is bad", and so the relevant question is what you'd do if it were a choice between that and simplicity. I'm not sure your arguments distinguish one from the other in that context.

one can maybe avoid to feel this uncomfortable feeling of uncertainty by deferring to idealized reflection. But it’s not obvious that this lastingly solves the underlying problem

Another way of phrasing this point: reflection is almost always good for figuring out what's the best thing to do, but it's not a good way to define what's the best thing to do.

Comment by ricraz on Arguments for moral indefinability · 2019-02-13T13:29:22.824Z · score: 5 (3 votes) · LW · GW

For the record, this is probably my key objection to preference utilitarianism, but I didn't want to dive into the details in the post above (for a very long post about such things, see here).

Comment by ricraz on Coherent behaviour in the real world is an incoherent concept · 2019-02-13T12:01:02.458Z · score: 2 (1 votes) · LW · GW

From Rohin's post, a quote which I also endorse:

You could argue that while [building AIs with really weird utility functions] is possible in principle, no one would ever build such an agent. I wholeheartedly agree, but note that this is now an argument based on particular empirical facts about humans (or perhaps agent-building processes more generally).

And if you're going to argue based on particular empirical facts about what goals we expect, then I don't think that doing so via coherence arguments helps very much.

Comment by ricraz on Coherent behaviour in the real world is an incoherent concept · 2019-02-13T11:31:03.996Z · score: 2 (1 votes) · LW · GW

This seems pretty false to me.

I agree that this problem is not a particularly important one, and explicitly discard it a few sentences later. I hadn't considered your objection though, and will need to think more about it.

(Side note: I'm pretty annoyed with all the use of "there's no coherence theorem for X" in this post.)

Mind explaining why? Is this more a stylistic preference, or do you think most of them are wrong/irrelevant?

the "further out" your goal is and the more that your actions are for instrumental value, the more it should look like world 1 in which agents are valuing abstract properties of world states, and the less we should observe preferences over trajectories to reach said states.

Also true if you make world states temporally extended.

Comment by ricraz on Arguments for moral indefinability · 2019-02-12T13:39:10.645Z · score: 4 (2 votes) · LW · GW

If I had to define it using your taxonomy, then yes. However, it's also trying to do something broader. For example, it's intended to be persuasive to people who don't think of meta-ethics in terms of preferences and rationality at all. (The original intended audience was the EA forum, not LW).

Edit: on further reflection, your list is more comprehensive than I thought it was, and maybe the people I mentioned above actually would be on it even if they wouldn't describe themselves that way.

Another edit: maybe the people who are missing from your list are those who would agree that morality has normative force but deny that rationality does (except insofar as it makes you more moral), or at least are much more concerned with the former than the latter. E.g. you could say that morality is a categorical imperative but rationality is only a hypothetical imperative.

Arguments for moral indefinability

2019-02-12T10:40:01.226Z · score: 45 (14 votes)

Coherent behaviour in the real world is an incoherent concept

2019-02-11T17:00:25.665Z · score: 29 (11 votes)
Comment by ricraz on Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem) · 2019-01-30T18:58:44.635Z · score: 1 (2 votes) · LW · GW

There are some interesting insights about the overall viewpoint behind this book, but gosh the tone of this post is vicious. I totally understand frustration with stupidity in fiction, and I've written such screeds in my time too. But I think it's well worth moderating the impulse to do so in cases like this where the characters whose absolute stupidity you're bemoaning map onto the outgroup in so many ways.

Comment by ricraz on Too Smart for My Own Good · 2019-01-24T02:00:44.030Z · score: 2 (1 votes) · LW · GW

Agreed, except that the behaviour described could also just be procrastination.

Comment by ricraz on Disentangling arguments for the importance of AI safety · 2019-01-24T01:43:33.998Z · score: 2 (1 votes) · LW · GW

I don't think it depends on how much of A and B there is, because the "expected amount" is not a special point. In this context, the update that I made personally was "There are more shifts than I thought there were, therefore there's probably more of A and B than I thought there was, therefore I should weakly update against AI safety being important." Maybe (to make A and B more concrete) there being more shifts than I thought downgrades my opinion of the original arguments from "absolutely incredible" to "very very good", which slightly downgrades my confidence that AI safety is important.

As a separate issue, conditional on the field being very important, I might expect the original arguments to be very very good, or I might expect them to be very good, or something else. But I don't see how that expectation can prevent a change from "absolutely incredible" to "very very good" from downgrading my confidence.

Vote counting bug?

2019-01-22T15:44:48.154Z · score: 7 (2 votes)
Comment by ricraz on Disentangling arguments for the importance of AI safety · 2019-01-22T11:43:15.378Z · score: 5 (3 votes) · LW · GW

Apologies if this felt like it was targeted specifically at you and other early AI safety advocates; I have nothing but the greatest respect for your work. I'll rewrite to clarify my intended meaning, which is more an attempt to evaluate the field as a whole. This is obviously a very vaguely-defined task, but let me take a stab at fleshing out some changes over the past decade:

1. There's now much more concern about argument 2, the target loading problem (as well as inner optimisers, insofar as they're distinct).

2. There's now less focus on recursive self-improvement as a key reason why AI will be dangerous, and more focus on what happens when hardware scales up. Relatedly, I think a greater percentage of safety researchers believe that there'll be a slow takeoff than used to be the case.

3. Argument 3 (prosaic AI alignment) is now considered more important and more tractable.

4. There's now been significant criticism of coherence arguments as a reason to believe that AGI will pursue long-term goals in an insatiable maximising fashion.

I may be wrong about these shifts - I'm speaking as a newcomer to the field who has a very limited perspective on how it's evolved over time. If so, I'd be happy to be corrected. If they have in fact occurred, here are some possible (non-exclusive) reasons why:

A. None of the proponents of the original arguments have changed their minds about the importance of those arguments, but new people came into the field because of those arguments, then disagreed with them and formulated new perspectives.

B. Some of the proponents of the original arguments have changed their minds significantly.

C. The proponents of the original arguments were misinterpreted, or overemphasised some of their beliefs at the expense of others, and actually these shifts are just a change in emphasis.

I think none of these options reflect badly on anyone involved (getting everything exactly right the first time is an absurdly high standard), but I think A and B would be weak evidence against the importance of AI safety (assuming you've already conditioned on the size of the field, etc). I also think that it's great when individual people change their minds about things, and definitely don't want to criticise that. But if the field as a whole does so (whatever that means), the dynamics of such a shift are worth examination.

I don't have strong beliefs about the relative importance of A, B and C, although I would be rather surprised if any one of them were primarily responsible for all the shifts I mentioned above.

Comment by ricraz on Disentangling arguments for the importance of AI safety · 2019-01-22T11:07:44.458Z · score: 4 (2 votes) · LW · GW

I endorse ESRogs' answer. If the world were a singleton under the control of a few particularly benevolent and wise humans, with an AGI that obeys the intention of practical commands (in a somewhat naive way, say, so it'd be unable to help them figure out ethics) then I think argument 5 would no longer apply, but argument 4 would. Or, more generally: argument 5 is about how humans might behave badly under current situations and governmental structures in the short term, but makes no claim that this will be a systemic problem in the long term (we could probably solve it using a singleton + mass surveillance); argument 4 is about how we don't know of any governmental(/psychological?) structures which are very likely to work well in the long term.

Having said that, your ideas were the main (but not sole) inspiration for argument 4, so if this isn't what you intended, then I may need to rethink its inclusion.

Disentangling arguments for the importance of AI safety

2019-01-21T12:41:43.615Z · score: 109 (34 votes)
Comment by ricraz on What AI Safety Researchers Have Written About the Nature of Human Values · 2019-01-16T15:52:21.307Z · score: 3 (2 votes) · LW · GW

Nice overview :) One point: the introductory sentences don't seem to match the content.

It is clear to most AI safety researchers that the idea of “human values” is underdefined, and this concept should be additionally formalized before it can be used in (mostly mathematical) models of AI alignment.

In particular, I don't interpret most of the researchers you listed as claiming that "[human values] should be formalized". I think that's a significantly stronger claim than, for example, the claim that we should try to understand human values better.

Comment by ricraz on Open Thread January 2019 · 2019-01-16T15:45:24.388Z · score: 5 (3 votes) · LW · GW

Are you claiming that price per computation would drop in absolute terms, or compared with the world in which Moore's law continued? The first one seems unobjectionable: the default state of everything is for prices to fall, since there'll be innovation in other parts of the supply chain. The second one seems false. Basic counter-argument: if it were true, why don't people produce chips from a decade ago which are cheaper per amount of computation than the ones being produced today?

1. You wouldn't have to do R&D, you could just copy old chip designs.

2. You wouldn't have to keep upgrading your chip fabs, you could use old ones.

3. People could just keep collecting your old chips without getting rid of them.

4. Patents on old chip designs have already expired.

Comment by ricraz on Comments on CAIS · 2019-01-14T11:11:11.925Z · score: 7 (3 votes) · LW · GW

AI services can totally be (approximately) VNM rational -- for a bounded utility function.

Suppose an AI service realises that it is able to seize many more resources with which to fulfil its bounded utility function. Would it do so? If no, then it's not rational with respect to that utility function. If yes, then it seems rather unsafe, and I'm not sure how it fits Eric's criterion of using "bounded resources".

Note that CAIS is suggesting that we should use a different prior: the prior based on "how have previous advances in technology come about". I find this to be stronger evidence than how evolution got to general intelligence.

I agree with Eric's claim that R&D automation will speed up AI progress. The point of disagreement is more like: when we have AI technology that's able to do basically all human cognitive tasks (which for want of a better term I'll call AGI, as an umbrella term to include both CAIS and agent AGI), what will it look like? It's true that no past technologies have looked like unified agent AGIs - but no past technologies have looked like distributed systems capable of accomplishing all human tasks either. So it seems like the evolution prior is still the most relevant one.

"Humans think in terms of individuals with goals, and so even if there's an equally good approach to AGI which doesn't conceive of it as a single goal-directed agent, researchers will be biased against it."

I'm curious how strong an objection you think this is. I find it weak; in practice most of the researchers I know think much more concretely about the systems they implement than "agent with a goal", and these are researchers who work on deep RL. And in the history of AI, there were many things to be done besides "agent with a goal"; expert systems/GOFAI seems like the canonical counterexample.

I think the whole paradigm of RL is an example of a bias towards thinking about agents with goals, and that as those agents become more powerful, it becomes easier to anthropomorphise them (OpenAI Five being one example where it's hard not to think of it as a group of agents with goals). I would withdraw my objection if, for example, most AI researchers took the prospect of AGI from supervised learning as seriously as AGI from RL.

A clear counterargument is that some companies will have AI CEOs, and they will outcompete the others, and so we'll quickly transition to the world where all companies have AI CEOs. I think this is not that important -- having a human in the loop need not slow down everything by a huge margin, since most of the cognitive work is done by the AI advisor, and the human just needs to check that it makes sense (perhaps assisted by other AI services).

I claim that this sense of "in the loop" is irrelevant, because it's equivalent to the AI doing its own thing while the human holds a finger over the stop button. I.e. the AI will be equivalent to current CEOs, and the humans will be equivalent to current boards of directors.

To the extent that you are using this to argue that "the AI advisor will be much more like an agent optimising for an open-ended goal than Eric claims", I agree that the AI advisor will look like it is "being a very good CEO". I'm not sure I agree that it will look like an agent optimizing for an open-ended goal, though I'm confused about this.

I think of CEOs as basically the most maximiser-like humans. They have pretty clear metrics which they care about (even if it's not just share price, "company success" is a clear metric by human standards), they are able to take actions that are as broad in scope as basically any actions humans can take (expand to new countries, influence politics, totally change the lives of millions of employees), and almost all of the labour is cognitive, so "advising" is basically as hard as "doing" (modulo human interactions). To do well they need to think "outside the box" of stimulus and response, and deal with worldwide trends and arbitrarily unusual situations (has a hurricane just hit your factory? do you need to hire mercenaries to defend your supply chains?) Most of them have some moral constraints, but also there's a higher percentage of psychopaths than in any other role, and it's plausible that we'd have no idea whether an AI doing well as a CEO actually "cares about" these sorts of bounds or is just (temporarily) constrained by public opinion in the same way as the psychopaths.

The main point of CAIS is that services aren't long-term goal-oriented; I agree that if services end up being long-term goal-oriented they become dangerous.

I then mentioned that to build systems which implement arbitrary tasks, you may need to be operating over arbitrarily long time horizons. But probably this also comes down to how decomposable such things are.

If you go via the CAIS route you definitely want to prevent unbounded AGI maximizers from being created until you are sure of their safety or that you can control them. (I know you addressed that in the previous point, but I'm pretty sure that no one is arguing to focus on CAIS conditional on AGI agents existing and being more powerful than CAIS, so it feels like you're attacking a strawman.)

People are arguing for a focus on CAIS without (to my mind) compelling arguments for why we won't have AGI agents eventually, so I don't think this is a strawman.

Given a sufficiently long delay, we could use CAIS to build global systems that can control any new AGIs, in the same way that government currently controls most people.

This depends on having pretty powerful CAIS and very good global coordination, both of which I think of as unlikely (especially given that in a world where CAIS occurs and isn't very dangerous, people will probably think that AI safety advocates were wrong about there being existential risk). I'm curious how likely you think this is though? If agent AGIs are 10x as dangerous, and the probability that we eventually build them is more than 10%, then agent AGIs are the bigger threat.
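
Spelling out the arithmetic behind that last claim (a rough sketch with illustrative numbers of my own; "danger" here just means expected harm conditional on the system being built, and CAIS is assumed to arrive with probability ~1 in this scenario):

```python
danger_cais = 1.0      # normalise the danger from CAIS to 1
danger_agent = 10.0    # assume agent AGIs are 10x as dangerous
p_cais = 1.0           # CAIS assumed to be built in this scenario
p_agent = 0.10         # threshold probability of eventually building agent AGIs

print(danger_cais * p_cais)    # expected harm via CAIS: 1.0
print(danger_agent * p_agent)  # expected harm via agent AGIs: 1.0; any p_agent above 10% makes this the larger term
```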

I also am not sure why you think that AGI agents will optimize harder for self-improvement.

Because they have long-term convergent instrumental goals, and CAIS doesn't. CAIS only "cares" about self-improvement to the extent that humans are instructing it to do so, but humans are cautious and slow. Also because even if building AGI out of task-specific strongly-constrained modules is faster at first, it seems unlikely that it's anywhere near the optimal architecture for self-improvement.

Compared to what? If the alternative is "a vastly superintelligent AGI agent that is acting within what is effectively the society of 2019", then I think CAIS is a better model. I'm guessing that you have something else in mind though.

It's something like "the first half of CAIS comes true, but the services never get good enough to actually be comprehensive/general. Meanwhile fundamental research on agent AGI occurs roughly in parallel, and eventually overtakes CAIS." As a vague picture, imagine a world in which we've applied powerful supervised learning to all industries, and applied RL to all tasks which are either as constrained and well-defined as games, or as cognitively easy as most physical labour, but still don't have AI which can independently do the most complex cognitive tasks (Turing tests, fundamental research, etc).

Comment by ricraz on Comments on CAIS · 2019-01-14T09:12:54.460Z · score: 11 (3 votes) · LW · GW

You're right, this is a rather mealy-mouthed claim. I've edited it to read as follows:

the empirical claim that we'll develop AI services which can replace humans at most cognitively difficult jobs significantly before we develop any single strongly superhuman AGI

This would be false if doing well at human jobs requires capabilities that are near AGI. I do expect a phase transition - roughly speaking I expect progress in automation to mostly require more data and engineering, and progress towards AGI to require algorithmic advances and a cognition-first approach. But the thing I'm trying to endorse in the post is a weaker claim which I think Eric would agree with.

Comment by ricraz on Comments on CAIS · 2019-01-12T20:30:40.539Z · score: 9 (4 votes) · LW · GW

AGI is ... something that approximates an expected utility maximizer.

This seems like a trait which AGIs might have, but not a part of how they should be defined. I think Eric would say that the first AI system which can carry out all the tasks we would expect an AGI to be capable of won't actually approximate an expected utility maximiser, and I consider it an open empirical question whether or not he's right.

Many risk-reducing services (especially ones that can address human safety problems) seem to require high-level general reasoning abilities, whereas many risk-increasing services can just be technical problem solvers or other kinds of narrow intelligences or optimizers, so CAIS is actually quite unsafe, and hard to make safe, whereas AGI / goal-directed agents are by default highly unsafe, but with appropriate advances in safety research can perhaps be made safe.

Yeah, good point. I guess that my last couple of sentences were pretty shallowly-analysed, and I'll retract them and add a more measured conclusion.

Comment by ricraz on Why is so much discussion happening in private Google Docs? · 2019-01-12T18:57:39.285Z · score: 6 (4 votes) · LW · GW

I agree with both those points, and would add:

3. The fact that access to the doc is invite-only, and therefore people feel like they've been specifically asked to participate.

Comments on CAIS

2019-01-12T15:20:22.133Z · score: 63 (17 votes)
Comment by ricraz on You can be wrong about what you like, and you often are · 2018-12-20T18:49:41.172Z · score: 5 (2 votes) · LW · GW

I'm just not one of those people who enjoys "deeper" activities like reading a novel. I like watching TV and playing video games.
I'm just not one of those people who likes healthy foods. You may like salads and swear by them, but I am different. I like pizza and french fries.
I'm just not an intellectual person. I don't enjoy learning.
I'm just not into that early retirement stuff. I need to maintain my current lifestyle in order to be happy.
I'm just not into "good" movies/music/art. I like the Top 50 stuff.

I'm curious why you chose these particular examples. I think they're mostly quite bad and detract from the reasonable point of the overall post. The first three, and the fifth, I'd characterise as "acquired tastes": they're things that people may come to enjoy over time, but often don't currently enjoy. So even someone who would grow to like reading novels, and would have a better life if they read more novels, may be totally correct in stating that they don't enjoy reading novels.

The fourth is a good example for many people, but many others find that retirement is boring. Also, predicting what your life will look like after a radical shift is a pretty hard problem, so if this is the sort of thing people are wrong about it doesn't seem so serious.

More generally, whether or not you enjoy something is different from whether that thing, in the future, will make you happier. At points in this post you conflate those two properties. The examples also give me elitist vibes: the implication seems to be that upper-class pursuits are just better, and people who say they don't like them are more likely to be wrong. (If anything, actually, I'd say that people are more likely to be miscalibrated about their enjoyment of an activity the more prestigious it is, since we're good at deceiving ourselves about status considerations).

Comment by ricraz on You can be wrong about what you like, and you often are · 2018-12-20T18:23:48.890Z · score: 2 (1 votes) · LW · GW

Why is the optimisation space convex?

Comment by ricraz on Double-Dipping in Dunning--Kruger · 2018-11-29T14:59:34.807Z · score: 2 (3 votes) · LW · GW

In general it's understandable not to consider that hypothesis. But when you are specifically making a pointed and insulting comment about another person, I think the bar should be higher.

Comment by ricraz on Double-Dipping in Dunning--Kruger · 2018-11-29T00:12:31.259Z · score: 6 (4 votes) · LW · GW

Some posts can focus on raising the sanity waterline. Other posts can be motivational and targeted at people's incorrect self-assessments. Note that successfully doing the latter is often quite a good way of making people achieve better outcomes.

Comment by ricraz on Double-Dipping in Dunning--Kruger · 2018-11-29T00:09:53.300Z · score: 7 (5 votes) · LW · GW

I downvoted this explanation because it's uncharitable to claim that someone is either lying or deluded when it seems plausible that they were instead making a joke. Perhaps you have other reasons to think that Alephywr isn't joking, but if so it's worth explaining that.

Comment by ricraz on Double-Dipping in Dunning--Kruger · 2018-11-28T17:29:02.548Z · score: 26 (14 votes) · LW · GW

I like and endorse the general theme of this post, but have some issues with the details.

The takeaway is this: if you're the kind of person who worries about statistics and Dunning--Kruger in the first place, you're already way above average and clearly have the necessary meta-cognition to not fall victim to such things.

I feel like this is good motivation but bad world-modelling. Two important ways in which it fails:

  • Social interactions. You gave the example of people not really knowing how funny they are. I don't think worrying about statistics in general helps with this, because this might just not be the type of thing you've considered as a failure mode, and also because it's very difficult to substitute deliberate analysis for buggy social intuitions.
  • People being bad at philosophy. There are very many smart people who confidently make ridiculous arguments - people smart enough to understand Dunning-Kruger, but who either think they're an exception, or else pay lip service to it and then don't actually update their beliefs.

The world is being run by people who are too incompetent to know it; people who are only in power because they're the ones who showed up, and because showing up is most of the battle.

I dislike lines of argument which point at people on top of a pile of utility and call them incompetent. I think it is plausibly very difficult to get to the top of the society, but that the skills required are things which are really difficult to measure or even understand properly, like "hustle" or "ambition" or "social skills" or "pays less attention to local incentive gradients" or "has no wasted mental motion in between deciding that x is a good idea and deciding to do x".

From now on, unless you have evidence that you're particularly bad at something, I want you to assume that you're 15 percentile points higher than you would otherwise estimate.

Nit: I prefer using standard deviations instead of percentile points when talking about high-level performance, because it better allows us to separate people with excellent skill from people with amazing skill. Also because "assume that you're 15 percentile points higher" leaves a lot of people above 100%.
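
To make the nit concrete (a minimal sketch of my own, assuming skill is normally distributed and using scipy's normal CDF; the 0.5 SD shift is just an arbitrary stand-in for the post's 15-point adjustment):

```python
from scipy.stats import norm

def bump_percentile(p, points=0.15):
    # The post's rule: add 15 percentile points. Anyone above the 85th percentile goes past 100%.
    return p + points

def bump_sd(p, sds=0.5):
    # Alternative: shift by a fixed number of standard deviations.
    # Convert percentile -> z-score, shift, convert back; the result always stays below 100%.
    return norm.cdf(norm.ppf(p) + sds)

for p in (0.50, 0.85, 0.95, 0.99):
    print(f"{p:.0%}: +15 points -> {bump_percentile(p):.0%}, +0.5 SD -> {bump_sd(p):.1%}")
```

The percentile rule maps everyone above the 85th percentile past 100%, whereas the SD version compresses gracefully near the top, which is exactly what lets it separate excellent from amazing.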

Comment by ricraz on How democracy ends: a review and reevaluation · 2018-11-28T15:36:31.375Z · score: 4 (2 votes) · LW · GW

Insofar as we're talking about the collapse of democracy in America, millions of people will be adversely affected, many of them wealthy enough that they could already be taking important steps at fairly low cost.

Comment by ricraz on How democracy ends: a review and reevaluation · 2018-11-27T23:02:09.833Z · score: 6 (3 votes) · LW · GW

Done.

How democracy ends: a review and reevaluation

2018-11-27T10:50:01.130Z · score: 17 (9 votes)
Comment by ricraz on On first looking into Russell's History · 2018-11-09T17:50:15.259Z · score: 20 (4 votes) · LW · GW

I also don't have strong opinions on how accurate the book is, but that link really doesn't support the claim that the book is inaccurate. Its most scathing criticism of Russell: "As far as the omissions go, the grossest is the denial of any role to Eastern philosophy." Something I'm inclined to forgive in a "History of Western Philosophy". Then there are complaints about "inconsequential logical griping...from place to place" in the book, which again is not really a devastating blow.

Comment by ricraz on On first looking into Russell's History · 2018-11-09T17:42:58.549Z · score: 2 (1 votes) · LW · GW

Cheers, fixed. Just autocorrect being autocorrect, as per usual.

On first looking into Russell's History

2018-11-08T11:20:00.935Z · score: 35 (11 votes)
Comment by ricraz on A compendium of conundrums · 2018-11-07T12:26:26.403Z · score: 2 (1 votes) · LW · GW

Pirate treasure: you're right that tiebreak information is needed. I've added it in now - assume the pirates are spiteful.

Blind maze: nope, it's a fairly ugly solution.

Prisoners and boxes: yes, you can save all of them. Is the solution to your variant the same as my variant?

Battleships: I've found an ugly solution, and I expect it's the one you intended, but is there anything nicer? Sbe rirel fdhner, pbafvqre rirel cbffvoyr zbirzrag irpgbe, juvpu tvirf hf n ghcyr bs yratgu 4. Gura jr vgrengr guebhtu rirel fhpu ghcyr hfvat qvntbanyvfngvba, naq ng rnpu gvzrfgrc jr pna pnyphyngr jurer n fuvc juvpu fgnegrq ba gung fdhner jbhyq unir raqrq hc ol gur pheerag gvzrfgrc, naq fubbg vg.

Comment by ricraz on Speculations on improving debating · 2018-11-06T14:22:45.741Z · score: 2 (1 votes) · LW · GW

I'm sympathetic with that view, but think it's far from clear-cut. For example, suppose you model rationality as the skill of identifying bad arguments plus the mental habit of applying that skill to your own ideas. When the former is the bottleneck, then debating probably has a positive effect on overall rationality; when the latter is the bottleneck, it is probably negative. Probably the latter is more common, but the effect of the former is bigger? I don't have a strong opinion on this though.

As an anecdotal point, I have been pleasantly surprised by how often you can win a debate by arguing primarily for things that you actually believe. The example that comes to mind is being assigned the pro-Brexit side in a debate, and focusing on the EU's pernicious effects on African development, and how trade liberalisation would benefit the bottom billion. In cases like these you don't so much rebut your opponents' points as reframe them to be irrelevant - and I do think that switching mental frameworks is an important skill.

Speculations on improving debating

2018-11-05T16:10:02.799Z · score: 26 (10 votes)
Comment by ricraz on When does rationality-as-search have nontrivial implications? · 2018-11-05T15:15:03.799Z · score: 6 (3 votes) · LW · GW

This was a very useful and well-explained idea. Strongly upvoted.

Comment by ricraz on Implementations of immortality · 2018-11-01T22:43:01.914Z · score: 5 (2 votes) · LW · GW

Hmm. Yeah, you're probably right. Although there is the common phenomenon of social subgroups stratifying into smaller and smaller cliques, which is some evidence that oppositional group identity matters even if it doesn't feel like it from the inside.

Comment by ricraz on Implementations of immortality · 2018-11-01T22:36:49.823Z · score: 2 (1 votes) · LW · GW

Age stratification in a world where people live arbitrarily long means you never have an opportunity to become a respected elder in your society; generations of more respected super-elders will be around no matter how old and wise you get.

For any x, you eventually get the opportunity to be in the top x% of society! And assuming that the size of a social circle/community stays roughly fixed, eventually you'll be at the very top of your community. Maybe the more respected super-elders will be in some other galaxy, but not on your world.

Also, in this world, are people youthful indefinitely? I think many of the age related changes in activity choices are driven by physical aging, not maturity, e.g., choosing cocktail parties over clubbing happens not because you realize one day that cocktail parties are a richer experience, but because one day you realize you get too tired by 10pm for the club scene.

Yes, people would be youthful indefinitely. I think there's a mix of reasons, but getting bored/moving on is definitely one of the main ones. Picture a jaded 40 year old in a nightclub - is the limiting factor really tiredness?

Comment by ricraz on Implementations of immortality · 2018-11-01T22:28:55.316Z · score: 2 (1 votes) · LW · GW

Obviously I would desire some amount of novelty, but it's mostly in the context of slotting into a roughly stable daily or weekly routine, rather than the routine itself varying much. (e.g. Thursday evening is for games, the games may vary and becoming increasingly complex, but they are still generally played on Thursday evenings).

The point about typical mind fallacy is well-taken, but I don't really see how you can be confident in preferences like the one quoted above, given that the timeframes we're talking about are much longer than your lifespan so far. I mean, many people have midlife crises after only a few decades of such routines. I have a strong intuition that after several centuries of games every Thursday evening, almost anyone would get very bored.

At the very least, I would want a mostly stable "island of calm" where things mostly remained the same, and where I would always return when I was tired of going on adventures.

This isn't ruled out by my proposal; note that "progress" doesn't mean discarding every aspect of the past. However, I am suspicious of this sort of island of calm, for roughly the same reason that I doubt it's valuable to most adults to regularly visit their childhood treehouse. (Also, if there are other people in this 'island of calm', then you can't force them to stay the same just for your sake...)

[This] reads to me like you're saying that it's a problem if people like me have the freedom to choose stability if that makes them happier than variety does.

People get stuck in local maxima, and often don't explore enough to find better options for themselves. The longer people live, the more valuable it is to have sufficient exploration to figure out the best option before choosing stability.

Implementations of immortality

2018-11-01T14:20:01.494Z · score: 21 (8 votes)

What will the long-term future of employment look like?

2018-10-24T19:58:09.320Z · score: 11 (4 votes)

Book review: 23 things they don't tell you about capitalism

2018-10-18T23:05:29.465Z · score: 19 (11 votes)
Comment by ricraz on Some cruxes on impactful alternatives to AI policy work · 2018-10-18T15:15:46.989Z · score: 5 (3 votes) · LW · GW

I generally agree with the idea of there being long tails to the left. Revolutions are a classic example - and, more generally, any small group of ideologically polarised people taking extreme actions. Environmentalist groups blocking genetically engineered crops might be one example; global warming skepticism another; perhaps also OpenAI.

I'm not sure about the "sleeping dragons", though, since I can't think of many cases where small groups created technologies that counterfactually wouldn't have happened (or even would have happened in safer ways).

Comment by ricraz on Some cruxes on impactful alternatives to AI policy work · 2018-10-14T09:41:48.575Z · score: 10 (3 votes) · LW · GW

There's something to this, but it's not the whole story, because increasing the probability of survival is good no matter what the current level is. Perhaps if you model decreasing existential risk as becoming exponentially more difficult (e.g. going from 32% risk to 16% risk is as difficult as going from 16% to 8%), and allow for accidental increases (e.g. you're trying to go from 32 to 16 but there's some probability you go to 64 instead), then the current expected level of risk will affect whether you take high-variance actions or not.
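
A minimal sketch of how that model could cash out (entirely my own toy numbers: assume a high-variance action halves the risk if it succeeds and doubles it, capped at 100%, if it fails, and that we compare expected risk before and after):

```python
def expected_risk_after(risk, p_success):
    # Success halves the risk; failure doubles it, capped at 100%.
    return p_success * (risk / 2) + (1 - p_success) * min(1.0, risk * 2)

for risk in (0.64, 0.32, 0.08):
    after = expected_risk_after(risk, p_success=0.5)
    print(f"current risk {risk:.0%} -> expected risk after the gamble {after:.0%}")
# current risk 64% -> expected risk after the gamble 66%
# current risk 32% -> expected risk after the gamble 40%
# current risk 8% -> expected risk after the gamble 10%
```

With these made-up numbers, a coin-flip gamble is close to neutral only when the current risk is already high enough that the downside is capped, which is one way the expected level of risk could change whether high-variance actions look worthwhile.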

Comment by ricraz on Some cruxes on impactful alternatives to AI policy work · 2018-10-14T09:35:15.388Z · score: 6 (2 votes) · LW · GW

I'm not sure how much to believe in this without concrete examples (the ones which come to mind are mostly pretty trivial, like Yahoo having a cluttered homepage and Google having a minimalist one, or MacOS being based on Unix).

Maybe Twitter is an example? I can easily picture it having a very different format. Still, I'm not particularly swayed by that.

Book review: The Complacent Class

2018-10-13T19:20:05.823Z · score: 21 (9 votes)
Comment by ricraz on non-judgemental awareness · 2018-10-11T13:06:40.691Z · score: 5 (3 votes) · LW · GW

When practicing to improve in a skill you want to get as much good quality information as possible (eg: Bob).

I just don't think this is true. The advice to always practice with good technique, to entrench good habits, is fairly common. And in my experience it's only once I've played a lot with good technique that I can even notice many of the subtleties.

Comment by ricraz on We can all be high status · 2018-10-10T18:02:14.162Z · score: 14 (9 votes) · LW · GW

So what I want to propose is that we define much more clearly what it takes to be taken seriously around here.

I mostly agree with your description of the problem, and I sympathise with your past self. However, I also think you understate the extent to which the EA and rationality communities are based around individual friendships. That makes things much messier than they might be in a corporation, and makes definitions like the one you propose much harder.

On the other hand, it also means that there's another sense in which "we can all be high-status": within our respective local communities. I'm curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.

On a broader level, one actionable idea I've been thinking about is to talk less about existential risk being "talent constrained", so that people who can't get full-time jobs in the field don't feel like they're not talented. A more accurate term in my eyes is "field-building constrained".

Some cruxes on impactful alternatives to AI policy work

2018-10-10T13:35:27.497Z · score: 146 (50 votes)
Comment by ricraz on A compendium of conundrums · 2018-10-10T13:24:42.285Z · score: 1 (1 votes) · LW · GW

Oh man, that was a stupid typo. I was very confused, mostly because I myself hadn't properly read the question. Edited now; yours is a clever solution though.

Comment by ricraz on A compendium of conundrums · 2018-10-10T08:11:32.102Z · score: 1 (1 votes) · LW · GW

Nope, some people get stuck on which links to cut. The standard answer is 3, which is the same as yours but with the assumption that you can detach a link from its neighbours after 1 cut, which wasn't made explicit.

Comment by ricraz on Thinking of the days that are no more · 2018-10-09T13:06:19.357Z · score: 1 (1 votes) · LW · GW

I don't think there's any alternative. These contracts used to be hard to breach mainly because of social norms - otherwise you could just leave and live in sin with someone else any time you wanted. But weaker contracts are only possible because the relevant social norms have changed. (Although there are probably some communities which take marriage much more seriously, and you could live there if you wanted to).

Then there are changes re who gets child custody, but it seems to me that having consistent legal judgements based on what's best for the kids is better than allowing some people to opt into more extreme contracts.

Another factor is laws around property ownership, but I think that even though the laws have weakened, opting in to prenups is a sufficient solution for anyone who wants stronger commitments. They have clauses changing property allocations depending on who's "at fault" for the divorce, right? (Although I guess I'm against prenups which specify custody arrangements, except insofar as they turn out to be good for kids).

Comment by ricraz on Thinking of the days that are no more · 2018-10-08T15:33:49.390Z · score: 2 (2 votes) · LW · GW

"Doesn't work well" by what metric - having children? I don't see why that should be the predominant consideration. I have many other goals when I go into relationships - enjoyment, companionship, self-improvement, security, signalling, etc. Now that people are much wealthier and have fewer children, the relative importance of hard-to-breach contracts has decreased, and it's plausible that for many people, moving even further towards flexible contracts is better for most of their goals.

A compendium of conundrums

2018-10-08T14:20:01.178Z · score: 12 (12 votes)

Thinking of the days that are no more

2018-10-06T17:00:01.208Z · score: 13 (6 votes)
Comment by ricraz on The Unreasonable Effectiveness of Deep Learning · 2018-10-06T15:43:01.667Z · score: 1 (1 votes) · LW · GW

I confess that I have a weakness for slightly fanciful titles. In my defence, though, I do actually think that "unreasonable" is a reasonable way of describing the success of neural networks. The argument in the original paper was something like "it could have been the case that math just wasn't that helpful in describing the universe, but actually it works really well on most things we try it on, and we don't have any principled explanation for why that is so". Similarly, it could have been the case that feedforward neural networks just weren't very good at learning useful functions, but actually they work really well on most things we try them on, and we don't have any principled explanation for why that is so.

Comment by ricraz on The Unreasonable Effectiveness of Deep Learning · 2018-10-03T20:52:36.439Z · score: 2 (2 votes) · LW · GW

Glad to hear it :) I was talking about DL not RL, although I'd also claim that the latter is unreasonably effective. Basically, we throw compute at neural nets, and they solve problems! We don't need to even know how to solve them ourselves! We don't even know what the nets are doing internally! I think this efficacy is as entirely magical as the one in the original paper I was referencing.

Comment by ricraz on What the Haters Hate · 2018-10-02T14:24:49.459Z · score: 12 (6 votes) · LW · GW

"Article" refers to your post, the irony is that you are accusing Klein of being unable to imagine other minds working in different ways, because you are unable to imagine his mind working in any different way.

In the paragraph directly before the one I quoted, you pointed out that it's silly for SJWs to assume that everyone thinks in terms of identity, race, gender, etc. But the blind spot that you're accusing Klein of is one which implicitly assumes that he thinks in terms of ITTs, tribes which are distinct from ideologies, etc. Klein's framework leads him to make a slightly dubious statement about unbridgeable divides. Your framework leads you to badly strawman his statement and throw around ad hominem attacks.

I don't think this is particularly worth arguing about, since I predict it'll become an argument about the post as a whole. In hindsight, I shouldn't have given in to the temptation to post a snarky comment. I did so because I consider the quoted paragraph (and the one above it) both rude and incorrect, a particularly irksome combination. As a more meta note, if culture war posts are accepted on Less Wrong, I think we should strongly encourage them to have a much more civil tone.

Comment by ricraz on What the Haters Hate · 2018-10-01T21:06:02.979Z · score: 7 (7 votes) · LW · GW

Klein not only is incapable of passing the IDW’s Ideological Turing Test, but he also seems unaware of the fact that someone can fail to pass his own. The only way I can imagine this happening is that Klein is so absorbed in his ideology that he can’t fathom other minds being different.

Even for an article dedicated to bashing the outgroup, this is a particularly ironic passage.

Comment by ricraz on Leto among the Machines · 2018-10-01T04:26:45.386Z · score: 24 (13 votes) · LW · GW

This is an entertaining essay but extrapolates wayyyy too far. Case in point: I don't even think it's actually about automation - the thing you're criticising sounds more like bureaucracy. Automation includes using robots to build cars and writing scripts to automatically enter data into spreadsheets instead of doing it by hand. You don't address this type of thing at all. Your objection is to mindless rule-following - which may in future be done by machines, but right now is mostly done by people. (I don't know of any tech system which doesn't have customer support and can't be manually overridden).

As a solution, you propose using more intelligent people who are able to exercise their discretion. Problem 1: there aren't that many intelligent, competent people. But you can make more use of the ones you have by putting a competent person in charge of a bunch of less competent people, and laying down guidelines for them to follow. Ooops, we've reinvented bureaucracies. And when we need to scale such a system to a thousand- or million-person enterprise, like a corporation or government, then the people at the bottom are going to be pretty damn incompetent and probably won't care at all about the organisation's overall goals. So having rules to govern their behaviour is important. When implemented badly, that can lead to Kafka-esque situations, but that's true of any system. And there are plenty of companies which create great customer service by having well-thought-out policies - Amazon, for instance.

But incompetence isn't even the main issue. Problem 2: the more leeway individuals have, the more scope they have to be corrupt. A slightly less efficient economy isn't an existential threat to a country. But corrupt governments are. One way we prevent them is using a constitution - a codification of rules to restrict behaviour. That's exactly the type of thing you rail against. Similarly, corruption in a corporation can absolutely wreck it, and so it's better to err on the side of strictness.

Anyway, the funny thing is that I do think there's a useful moral which can be drawn from your account of the Butlerian Jihad, but it's almost the exact opposite of your interpretation: namely, that humans are bad at solving coordination problems without deontological rules. Suppose you want to ensure that strong AI isn't developed for the next 10000 years. Do you a) tell people that strong AI is a terrible idea, but anything short of that is fine, or b) instill a deep hatred of all computing technology, and allow people to come up with post-hoc justifications for why. I don't think you need to know much about psychology or arms races to realise that the latter approach is much better - not despite its pseudo-religious commandments, but rather because of them.

The Unreasonable Effectiveness of Deep Learning

2018-09-30T15:48:46.861Z · score: 86 (26 votes)
Comment by ricraz on Deep learning - deeper flaws? · 2018-09-27T05:27:57.372Z · score: 1 (1 votes) · LW · GW

OpenAI Five is very close to being superhuman at Dota. Would you be surprised if it got there in the next few months, without any major changes?

Deep learning - deeper flaws?

2018-09-24T18:40:00.705Z · score: 42 (17 votes)

Book review: Happiness by Design

2018-09-23T04:30:00.939Z · score: 14 (6 votes)
Comment by ricraz on Realism about rationality · 2018-09-21T09:02:37.044Z · score: 1 (1 votes) · LW · GW

These ideas are definitely pointing in the direction of rationality realism. I think most of them are related to items on my list, although I've tried to phrase them in less ambiguous ways.

Comment by ricraz on Realism about rationality · 2018-09-20T21:23:48.944Z · score: 1 (1 votes) · LW · GW

I'm not convinced by the distinction you draw. Suppose you simulate me at slightly less than perfect fidelity. The simulation is an agent with a (slightly) different utility function to me. Yet this seems like a case where FDT should be able to say relevant things.

In Abram's words,

FDT requires a notion of logical causality, which hasn't appeared yet.

I expect that logical causality will be just as difficult to formalise as normal causality, and in fact that no "correct" formalisation exists for either.

Book review: Why we sleep

2018-09-19T22:36:19.608Z · score: 49 (22 votes)

Realism about rationality

2018-09-16T10:46:29.239Z · score: 136 (55 votes)

Is epistemic logic useful for agent foundations?

2018-05-08T23:33:44.266Z · score: 19 (6 votes)

What we talk about when we talk about maximising utility

2018-02-24T22:33:28.390Z · score: 27 (8 votes)

In Defence of Conflict Theory

2018-02-17T03:33:01.970Z · score: 6 (6 votes)

Is death bad?

2018-01-13T04:55:25.788Z · score: 8 (4 votes)