Posts

Learning to get things right first time 2015-05-29T22:06:56.171Z · score: 8 (15 votes)
Counterfactual trade 2015-03-09T13:23:54.252Z · score: 10 (11 votes)
Neutral hours: a tool for valuing time 2015-03-04T16:38:50.419Z · score: 19 (19 votes)
Report -- Allocating risk mitigation across time 2015-02-20T16:37:33.199Z · score: 11 (12 votes)
Existential Risk and Existential Hope: Definitions 2015-01-10T19:09:43.882Z · score: 9 (9 votes)
Factoring cost-effectiveness 2014-12-24T23:25:24.230Z · score: 8 (11 votes)
Make your own cost-effectiveness Fermi estimates for one-off problems 2014-12-11T12:00:07.145Z · score: 9 (10 votes)
Estimating the cost-effectiveness of research 2014-12-11T11:55:01.025Z · score: 18 (19 votes)
Decision theories as heuristics 2014-09-28T14:36:27.460Z · score: 14 (19 votes)
Why we should err in both directions 2014-08-21T11:10:59.654Z · score: 8 (11 votes)
How to treat problems of unknown difficulty 2014-07-30T11:27:33.014Z · score: 13 (16 votes)

Comments

Comment by owencb on Acausal trade: double decrease · 2017-05-16T20:48:58.000Z · score: 0 (0 votes) · LW · GW

I think the double decrease effect kicks in with uncertainty, but not with confident expectation of a smaller network.

Comment by owencb on A permutation argument for comparing utility functions · 2017-04-27T15:12:50.000Z · score: 0 (0 votes) · LW · GW

I'm not sure I've fully followed, but I'm suspicious that you seem to be getting something for nothing in your shift from a type of uncertainty that we don't know how to handle to a type we do.

It seems to me like you must be making an implicit assumption somewhere. My guess is that this is where you used to pair with . If you'd instead chosen as the matching then you'd have uncertainty between whether should be or . My guess is that generically this gives different recommendations from your approach.

Comment by owencb on Learning Impact in RL · 2017-02-05T14:59:32.000Z · score: 2 (2 votes) · LW · GW

Seems to me like there are a bunch of challenges. For example you need extra structure on your space to add things or tell what's small; and you really want to keep track of long-term impact not just at the next time-step. Particularly the long-term one seems thorny (for low-impact in general, not just for this).

Nevertheless I think this idea looks promising enough to explore further, would also like to hear David's reasons.

Comment by owencb on On motivations for MIRI's highly reliable agent design research · 2017-01-28T01:16:17.000Z · score: 2 (2 votes) · LW · GW

For #5, OK, there's something to this. But:

  • It's somewhat plausible that stabilising pivotal acts will be available before world-destroying ones;
  • Actually there's been a supposition smuggled in already with "the first AI systems capable of performing pivotal acts". Perhaps there will at no point be a system capable of a pivotal act. I'm not quite sure whether it's appropriate to talk about the collection of systems that exist being together capable of pivotal acts if they will not act in concert. Perhaps we'll have a collection of systems which if aligned would produce a win, or if acting together towards an unaligned goal would produce catastrophe. It's unclear whether, if they each have different unaligned goals, we necessarily get catastrophe (though it's certainly not a comfortable scenario).

I like your framing for #1.

Comment by owencb on On motivations for MIRI's highly reliable agent design research · 2017-01-27T00:39:03.000Z · score: 2 (2 votes) · LW · GW

Thanks for the write-up, this is helpful for me (Owen).

My initial takes on the five steps of the argument as presented, in approximately decreasing order of how much I am on board:

  • Number 3 is a logical entailment; no quarrel here.
  • Number 5 is framed as "therefore", but adds the assumption that this will lead to catastrophe. I think this is quite likely if the systems in question are extremely powerful, but less likely if they are of modest power.
  • Number 4 splits my intuitions. I begin with some intuition that selection pressure would significantly constrain the goal (towards something reasonable in many cases), but the example of Solomonoff Induction was surprising to me and makes me more unsure. I feel inclined to defer intuitions on this to others who have considered it more.
  • Number 2 I don't have a strong opinion on. I can tell myself stories which point in either direction, and neither feels compelling.
  • Number 1 is the step I feel most sceptical about. It seems to me likely that the first AIs which can perform pivotal acts will not perform fully general consequentialist reasoning. I expect that they will perform consequentialist reasoning within certain domains (e.g. AlphaGo in some sense reasons about consequences of moves, but has no conception of consequences in the physical world). This isn't enough to alleviate concern: some such domains might be general enough that something misbehaving in them would cause large problems. But it is enough for me to think that paying attention to scope of domains is a promising angle.
Comment by owencb on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-15T11:51:06.313Z · score: 0 (0 votes) · LW · GW

This conclusion is way too strong. To give just one example: there's a big space of possibilities where discovering the planning fallacy in fact makes you less susceptible to it, but not immune.

Comment by owencb on CFAR's new mission statement (on our website) · 2016-12-11T13:24:27.650Z · score: 11 (12 votes) · LW · GW

I don't know who the intended audience for this is, but I think it's worth flagging that it seemed extremely jargon-heavy to me. I expect this to be off-putting to at least some people you actually want to attract (if it were one of my first interactions with CFAR I would be less inclined to engage again). In several cases you link to explanations of the jargon. This helps, but doesn't really solve the problem that you're asking the reader to do a large amount of work.

Some examples from the first few paragraphs:

  • clear and unhidden
  • original seeing
  • original making
  • existential risk
  • informational content [non-standard use]
  • thinker/doer
  • know the right passwords
  • double crux
  • outreach efforts
Comment by owencb on CFAR's new mission statement (on our website) · 2016-12-11T13:14:42.850Z · score: 5 (5 votes) · LW · GW

I found this document kind of interesting, but it felt less like what I normally understand as a mission statement, and more like "Anna's thoughts on CFAR's identity". I think there's a place for the latter, but I'd be really interested in seeing (a concise version of) the former, too.

If I had to guess right now I'd expect it to say something like:

We want to develop a community with high epistemic standards and good rationality tools, at least part of which is devoted to reducing existential risk from AI.

... but I kind of expect you to think I have the emphasis there wrong in some way.

Comment by owencb on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-11T13:00:05.487Z · score: 6 (6 votes) · LW · GW

I like your (A)-(C), particularly (A). This seems important, and something that isn't always found by default in the world at large.

Because it's somewhat unusual, I think it's helpful to give strong signals that this is important to you. For example I'd feel happy about it being a core part of the CFAR identity, appearing in even short statements of organisational mission. (I also think this can help organisation insiders to take it even more seriously.)

On (i), it seems clearly a bad idea for staff to pretend they have no viewpoints. And if the organisation has viewpoints, it's a bad idea to hide them. I think there is a case for keeping organisational identity small -- not taking views on things it doesn't need views on. Among other things, this helps to make sure that it actually delivers on (A). But I thought the start of your post (points (1)-(4)) did a good job of explaining why there are in fact substantive benefits to having an organisational view on AI, and I'm more supportive of this than before. I still think it is worth trying to keep organisational identity relatively small, and I'm still not certain whether it would be better to have separate organisations.

Comment by owencb on Be secretly wrong · 2016-12-10T23:32:40.741Z · score: 1 (1 votes) · LW · GW

This was helpful to me, thanks.

I think I'd still endorse a bit more of a push towards thinking in credences (where you're at a threshold of that being a reasonable thing to do), but I'll consider further.

Comment by owencb on CFAR’s new focus, and AI Safety · 2016-12-10T23:28:25.023Z · score: 1 (1 votes) · LW · GW

Thanks. I'll dwell more on these. Quick thoughts from a first read:

  • I generally liked the "further discussion" doc.
  • I do think it's important to strongly signal the aspects of cause neutrality that you do intend to pursue (as well as pursuing them). These are unusual and important.
  • I found the mission statement generally opaque and extremely jargony. I think I could follow what you were saying, but in some cases this required a bit of work and in some cases I felt like it was perhaps only because I'd had conversations with you. (The FAQ at the top was relatively clear, but an odd thing to lead with.)
  • I was bemused by the fact that there didn't appear to be a clear mission statement highlighted anywhere on the page!

ETA: Added some more in-depth comments on the relevant comment threads: here on "further thoughts", and here and here on the mission statement.

Comment by owencb on CFAR’s new focus, and AI Safety · 2016-12-10T15:49:55.039Z · score: 4 (4 votes) · LW · GW

Thanks for engaging. Further thoughts:

I agree with you that framing is important; I just deleted the old ETA.

For what it's worth I think even without saying that your aim is explicitly AI safety, a lot of people reading this post will take that away unless you do more to cancel the implicature. Even the title does this! It's a slightly odd grammatical construction which looks an awful lot like "CFAR's new focus: AI Safety"; I think without being more up-front about the alternative interpretation it will sometimes be read that way.

I'm curious where our two new docs leave you

Me too! (I assume that these have not been posted yet, but if I'm just failing to find them please let me know.)

I think they make clearer that we will still be doing some rationality qua rationality.

Great. Just to highlight that I think there are two important aspects of doing rationality qua rationality:

  • Have the people pursuing the activity have this as their goal. (I'm less worried about you failing on this one.)
  • Have external perceptions be that this is what you're doing. I have some concern that rationality-qua-rationality activities pursued by an AI safety org will be perceived as having an underlying agenda relating to that, and that this could e.g. make some people less inclined to engage, even relative to the same activities being run by a rationality org which has a significant project on AI safety.

my guess is that there isn't enough money and staff firepower to run a good standalone rationality organization in CFAR's stead

I feel pretty uncertain about this, but my guess goes the other way. Also, I think if there are two separate orgs, the standalone rationality one should probably retain the CFAR brand! (as it seems more valuable there)

I do worry about the transition costs, and the lost synergies of working together, from splitting off a new org. Though these might be cheaper earlier rather than later, and even if it's borderline right now whether there's enough money and staff to do both, I think it won't be borderline within a small number of years.

Julia will be launching a small spinoff organization called Convergence

This sounds interesting! That's a specialised enough remit that it (mostly) doesn't negate my above concerns, but I'm happy to hear about it anyway.

Comment by owencb on Be secretly wrong · 2016-12-10T14:10:39.378Z · score: 8 (3 votes) · LW · GW

I'm not sure exactly what you meant, so not ultimately sure whether I disagree, but I at least felt uncomfortable with this claim.

I think it's because:

  • Your framing pushes towards holding beliefs rather than credences in the sense used here.
  • I think it's generally inappropriate to hold beliefs about the type of things that are important and you're likely to turn out to be wrong on. (Of course for boundedly rational agents it's acceptable to hold beliefs about some things as a time/attention-saving matter.)
  • It's normally right to update credences gradually as more evidence comes in. There isn't so much an "I was wrong" moment.

On the other hand I do support generating explicit hypotheses, and articulating concrete models.

Comment by owencb on CFAR’s new focus, and AI Safety · 2016-12-08T14:45:30.774Z · score: 11 (11 votes) · LW · GW

I had mixed feelings towards this post, and I've been trying to process them.

On the positive side:

  • I think AI safety is important, and that collective epistemology is important for this, so I'm happy to know that there will be some attention going to this.
  • There may be synergies to doing some of this alongside more traditional rationality work in the same org.

On the negative side:

  • I think there is an important role for pursuing rationality qua rationality, and that this will be harder to do consistently under an umbrella with AI safety as an explicit aim. For example one concern is that there will be even stronger pressure to accept community consensus that AI safety is important rather than getting people to think this through for themselves. Since I agree with you that the epistemology matters, this is concerning to me.
  • With a growing community, my first inclination would be that one could support both organisations, and that it would be better to have something new focus on epistemology-for-AI, while CFAR in a more traditional form continues to focus more directly on rationality (just as Open Phil split off from GiveWell rather than replacing the direction of GiveWell). I imagine you thought about this; hopefully you'll address it in one of the subsequent posts.
  • There is potential reputational damage from having these things too closely linked. (Though also potential reputational benefits. I put this in "mild negative" for now.)

On the confused side:

  • I thought the post did an interesting job of saying things more reasonable than its implicature. In particular I thought it was extremely interesting that it didn't say that AI safety was a new focus. Then in the ETA you said "Even though our aim is explicitly AI Safety..."

I think framing matters a lot here. I'd feel much happier about a CFAR whose aim was developing and promoting individual and group rationality in general and particularly for important questions, one of whose projects was focusing on AI safety, than I do about a CFAR whose explicit focus is AI safety, even if the basket of activities they might pursue in the short term would look very similar. I wonder if you considered this?

Comment by owencb on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T22:39:09.284Z · score: 3 (3 votes) · LW · GW

Your (a) / (b) division basically makes sense to me.[*] I think we're already at the point where we need this fracturing.

However, I don't think that the LW format makes sense for (a). I'd probably prefer curated aggregation of good content for (a), with fairly clear lines about what's in or out. It's very unclear what the threshold for keeping up on LW should be.

Also, I quite like the idea of the topical centres being hosted in the same place as the core, so that they're easy to find.

[*] A possible caveat is dealing with new community members nicely; I haven't thought about this enough so I'm just dropping a flag here.

Comment by owencb on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T10:22:08.638Z · score: 5 (5 votes) · LW · GW

In general if we don't explicitly design institutions that will work well with a much larger community, we shouldn't be surprised if things break down when the community grows.

Comment by owencb on On the importance of Less Wrong, or another single conversational locus · 2016-11-27T10:16:17.177Z · score: 7 (7 votes) · LW · GW

I think I disagree with your conclusion here, although I'd agree with something in its vicinity.

One of the strengths of a larger community is the potential to explore multiple areas in moderate amounts of depth. We want to be able to have detailed conversations on each of: e.g. good epistemic habits; implications of AI; distributions of cost-effectiveness; personal productivity; technical AI safety; ...

It asks too much for everyone to keep up with each of these conversations, particularly when each of them can spawn many detailed sub-conversations. But if they're all located in the same place, it's hard to browse through to find the parts that you're actually trying to keep up with.

So I think that we want two things:

  1. Separate conversational loci for each topic
  2. A way of finding the best material to get up to speed on a given topic

For the first, I find myself thinking back to days of sub-forums on bulletin boards (lack of nested comments obviously a big problem there). That way you could have the different loci gathered together. For the second, I suspect careful curation is actually the right way to identify this content, but I'm not sure what the best way to set up infrastructure for this is.

Comment by owencb on Polymath-style attack on the Parliamentary Model for moral uncertainty · 2016-10-25T11:41:55.975Z · score: 0 (0 votes) · LW · GW

Update: I now believe I was over-simplifying things. For two delegates I think is correct, but in the parliamentary model that corresponds to giving the theories equal credence. As credences vary so do the number of delegates. Maximising the Nash product over all delegates is equivalent to maximising a product where they have different exponents (exponents in proportion to the number of delegates).
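
To spell out that equivalence (a sketch in my own notation, with delegate utilities measured relative to the bargaining disagreement point): if theory $k$ has credence $c_k$ and is assigned $n_k \propto c_k$ delegates, each sharing the utility function $u_k$, then

$$\prod_{\text{delegates } i} u_{k(i)} \;=\; \prod_{k} u_k^{\,n_k},$$

so maximising the Nash product over all delegates is the same as maximising a credence-weighted Nash product over theories, with exponents proportional to the credences. Equal credences over two theories give equal exponents, recovering the ordinary two-delegate case.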

Comment by owencb on Graphical Assumption Modeling · 2015-08-21T10:40:23.240Z · score: 0 (0 votes) · LW · GW

Maybe, if it had good enough UI and enough features?

I feel like it's quite a narrow target / high bar to compete with back-of-the-envelope/whiteboard at one end (for ease of use), and with a software package that does Monte Carlo simulations properly at the other end.

Comment by owencb on Versions of AIXI can be arbitrarily stupid · 2015-08-11T13:43:54.987Z · score: 1 (1 votes) · LW · GW

Thanks!

Comment by owencb on Versions of AIXI can be arbitrarily stupid · 2015-08-10T14:06:15.150Z · score: 1 (1 votes) · LW · GW

Thanks, this is an important result showing that the dominating property really isn't enough to pick out a prior for a good agent. I like your example as a to-the-point explanation of the issue.

I think the post title is somewhat misleading, though: it sounds as though differences in instantiations of AIXI don't really matter, and they can all be arbitrarily stupid. Any chance of changing that? Perhaps to something like "Versions of AIXI can be arbitrarily stupid"?

Comment by owencb on Learning to get things right first time · 2015-06-03T10:15:18.110Z · score: 0 (0 votes) · LW · GW

I disagree that "you really didn't gain all that much" in your example. There are possible numbers such that it's better to avoid producing AI, but (a) that may not be a lever which is available to us, and (b) AI done right would probably represent an existential eucatastrophe, greatly improving our ability to avoid or deal with future threats.

Comment by owencb on Learning to get things right first time · 2015-06-01T20:51:15.099Z · score: 0 (0 votes) · LW · GW

I'm not sure quite what point you're trying to make:

  • If you're arguing that with the best attempt in the world it might be we still get it wrong, I agree.
  • If you're arguing that greater diligence and better techniques won't increase our chances, I disagree.
  • If you're arguing something else, I've missed the point.
Comment by owencb on Learning to get things right first time · 2015-06-01T12:40:01.193Z · score: 0 (0 votes) · LW · GW

I'm not suggesting that the problems would come from what we normally think of as software bugs (though see the suggestion in this comment). I'm suggesting that they would come from a failure to specify the right things in a complex scenario -- and that this problem bears enough similarities to software bugs that they could be a good test bed for working out how to approach such problems.

Comment by owencb on Learning to get things right first time · 2015-06-01T12:37:17.079Z · score: 0 (0 votes) · LW · GW

I'm not sure how much we are disagreeing here. I'm not proposing anything like formal verification. I think development in simulation is likely to be an important tool in getting it right the first time you go "live", but I also think there may be other useful general techniques/tools, and that it could be worth investigating them well in advance of need.

Comment by owencb on Learning to get things right first time · 2015-05-31T20:04:45.490Z · score: 0 (0 votes) · LW · GW

Thanks, this is a great collection of relevant information.

I agree with your framing of this as differential tech development. Do you have any thoughts on the best routes to push on this?

I will want to think more about framing AGI failures as (subtle) bugs. My initial impression is positive, but I have some worry that it would introduce a new set of misconceptions.

Comment by owencb on Learning to get things right first time · 2015-05-31T09:44:35.585Z · score: 1 (1 votes) · LW · GW

Good point that this hasn't always been the case. However, we also know that people made a lot of mistakes in some of these cases. It would be great to work out how we can best approach such challenges in the future.

Comment by owencb on Learning to get things right first time · 2015-05-31T09:42:25.145Z · score: 0 (0 votes) · LW · GW

To me these look like (pretty good) strategies for getting something right the first time, not in opposition to the idea that this would be needed.

They do suggest that an environment which is richer than just "submit perfect code without testing" might be a better training ground.

Comment by owencb on Learning to get things right first time · 2015-05-31T09:32:16.726Z · score: 0 (0 votes) · LW · GW

Meta: I'd love to know whether the downvotes are because people don't like the presentation of undeveloped ideas like this, or because they don't think the actual idea is a good one.

(The first would put me off posting similar things in the future, the second would encourage me as a feedback mechanism.)

Comment by owencb on Learning to get things right first time · 2015-05-31T09:29:48.478Z · score: 0 (0 votes) · LW · GW

Software may not be the best domain, but it has a key advantage over the other suggestions you are making: it's easy to produce novel challenges that are quite different from the previous challenges.

In a domain such as peeling an egg, it's true that peeling an individual egg has to be done correctly first time, but one egg is much like another, so the skill transfers easily. On the other hand one complex programming challenge may be quite different from another, so the knowledge from having solved one doesn't transfer so much. This should, I think, help make sure that the skill that does transfer is something closer to a general skill of knowing how to be careful enough to get it right first time.

Comment by owencb on Learning to get things right first time · 2015-05-30T08:49:30.780Z · score: 2 (2 votes) · LW · GW

Yes, gjm's summary is right.

I agree that there are some important disanalogies between the two problems. I thought software development was an unusually good domain to start trying to learn the general skill, mostly because it offers easy-to-generate complex challenges where it's simple to assess success.

Comment by owencb on Learning to get things right first time · 2015-05-30T08:42:10.247Z · score: 3 (3 votes) · LW · GW

I'm not hopeful that there's an easy solution (or I think it would already be used in the industry), and I don't think you'd get up to total reliability.

Nonetheless it seems likely that there are things people can do that increase their bug rate, and there are probably things they can do that would decrease it. These might be costly things -- perhaps it involves writing detailed architectural plans for the software and getting these critiqued and double-checked by a team who also double-check that the separate parts do the right thing with respect to the architecture.

Maybe you can only cut your bug rate by 50% at the cost of going only 5% of normal speed. In that case there may be no commercially useful skills here. But it still seems like it would be useful to work out what kind of things help to do that.

Comment by owencb on LW survey: Effective Altruists and donations · 2015-05-18T18:36:18.024Z · score: 0 (0 votes) · LW · GW

Maybe. But would it change any of the conclusions?

It would change the regressions. I don't know whether you think that's an important part of the conclusion. It is certainly minor compared to the body of the work.

Again, if you think it does make a difference, I have provided all the code and data.

I think this is commendable; unfortunately I don't know the language, and while it seemed like it would take a few minutes to explain the insight, it seems like it would take a few hours for me to mug up enough to explore the change to the data.

[...] Disagree here as well.

Happy with that disagreement: I don't have very strong support for my guess that a figure higher than $1 is best. I was just trying to explain how you might try to make the choice.

Comment by owencb on LW survey: Effective Altruists and donations · 2015-05-17T17:35:31.698Z · score: 1 (3 votes) · LW · GW

By using a slightly different offset you get a slightly different nonlinear transformation, and one that may work even better.

There isn't a way to make this transformation without a choice. You've made a choice by adding $1 -- it looks kind of canonical but really it's based on the size of a dollar, which is pretty arbitrary.

For example say instead of denominating everything in dollars you'd denominated in cents (and added 1 cent before logging). Then everyone would move up the graph by pretty much log(100), except the people who gave nothing, who would be pulled further from the main part of the graph. I think this would make your fit worse.

In a similar way, perhaps you can make the fit better by denominating everyone's donations in hectodollars (h$1 = $100), or equivalently by changing the offset to $100.

We could try to pick the right offset by doing a sensitivity analysis and seeing what gives us the best fit, or by thinking about whether there's a reasonable meaning to attach to the figure. In this case we might think that people tend to give something back to society even when they don't do this explicitly as charity donations, so add on a figure to account for this. My feeling is that $1 is probably smaller than optimal under either interpretation. This would fit with the intuition that going from donating $1 to $9 is likely a smaller deal at a personal level than going from $199 to $999 (counted the same in the current system).
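
To make the offset point concrete, here is a minimal numerical sketch (the donation figures are hypothetical, not the survey data; `log_dollars` and `offset` are my own names for the transformation and the amount added before logging):

```python
import numpy as np

# Hypothetical donation amounts in dollars -- illustrative only, not the survey data.
donations = np.array([0, 1, 9, 199, 999])

def log_dollars(d, offset):
    """log10(donation + offset); the offset is what keeps the $0 donors finite."""
    return np.log10(d + offset)

# offset $0.01 is equivalent to "denominate in cents and add 1 cent";
# $1 is the choice in the original analysis; $100 is equivalent to
# "denominate in hectodollars and add 1" (up to a constant shift).
for offset in [0.01, 1, 100]:
    print(f"offset ${offset}:", np.round(log_dollars(donations, offset), 2))

# With offset $1 the steps $1 -> $9 and $199 -> $999 both add log10(5);
# with a larger offset the first step counts for much less than the second,
# and the $0 donors sit closer to the rest of the distribution.
```

Any positive offset preserves the ordering of donors; what changes is the relative spacing, and hence how much the zero and small donors pull on the regression fit.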

Comment by owencb on LW survey: Effective Altruists and donations · 2015-05-15T15:10:01.520Z · score: 5 (7 votes) · LW · GW

It shifts all datapoints equally in the dollar domain, but not in the log domain (hence letting you get rid of the -infinity). Of course it still preserves orderings, but it's a non-linear transformation of the y-axis.

I'd support this sensitivity check, or if just using one value would prefer a larger offset.

(Same caveat: I might have misunderstood log1p)

Comment by owencb on Make your own cost-effectiveness Fermi estimates for one-off problems · 2015-03-27T12:37:16.963Z · score: 0 (0 votes) · LW · GW

No, it's supposed to be annual spend. However it's worth noting that this is a simplified model which assumes a particular relationship between annual spend and historical spend (namely it assumes that spending has grown and will grow on an exponential).
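
For concreteness, one way an exponential-growth assumption ties the two figures together (a sketch in my own notation, not necessarily the exact form used in the post): if current annual spending is $s_0$ and has been growing at rate $g$, then cumulative historical spending is roughly

$$\int_{-\infty}^{0} s_0 e^{g t}\, dt \;=\; \frac{s_0}{g},$$

so quoting the annual figure, together with a growth rate, implicitly pins down the historical figure as well.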

Comment by owencb on Neutral hours: a tool for valuing time · 2015-03-18T13:57:24.870Z · score: 2 (2 votes) · LW · GW

Thanks. I wasn't entirely sure whether you were aiming at improving decision-making or at game design, but it was interesting either way!

By the way, your link is doubly(!) broken. This should work.

Comment by owencb on Calibration Test with database of 150,000+ questions · 2015-03-13T22:16:04.030Z · score: 0 (0 votes) · LW · GW

Perfect, thanks!

Comment by owencb on Calibration Test with database of 150,000+ questions · 2015-03-13T20:21:32.181Z · score: 7 (7 votes) · LW · GW

Thanks for providing this!

I have a worry about using trivia questions for calibration: there's a substantial selection effect in the construction of trivia questions, so you're much more likely to get an obscure question pointing to a well-known answer than would happen by chance. The effect may be to calibrate people for trivia questions in a way that transfers poorly to other questions.

Comment by owencb on Calibration Test with database of 150,000+ questions · 2015-03-13T19:56:56.668Z · score: 4 (4 votes) · LW · GW

I think it's misleading to just drop in the statement that 0 and 1 are not probabilities.

There is a reasonable and arguably better definition of probabilities which excludes them, but it's not the standard one, and it also has costs -- for example probabilities are a useful tool in building models, and it is sometimes useful to use probabilities 0 and 1 in models.

(aside: it works as a kind of 'clickbait' in the original article title, and Eliezer doesn't actually make such a controversial statement in the post, so I'm not complaining about that)

Comment by owencb on Neutral hours: a tool for valuing time · 2015-03-13T19:31:38.861Z · score: 2 (2 votes) · LW · GW

Thanks, I'd love to see what you come up with.

I agree that it is a big simplification, but I don't know how much of a practical problem that is, given that a lot of people get things wrong in ways that even the two-resource model would fix. Still, I fully support having a range of different models of different complexities!

Comment by owencb on Have I just destroyed the acausal trade network? · 2015-03-12T11:50:36.858Z · score: 3 (3 votes) · LW · GW

Perhaps small tariffs in the short term, but: (i) I don't think people are engaging in that much trade at the moment, (ii) while it's an update, I don't think this will change most people's estimates very much.

In the medium term I don't think so, because the network is greased by deeper understanding of what others might do. I think your patch is a step in that direction, and may increase acausal trade by accelerating full understanding. It could decrease it in the medium term, though.

In the longer term, I guess everyone figures everything out, and this has no effect.

Comment by owencb on Neutral hours: a tool for valuing time · 2015-03-12T09:49:55.411Z · score: 3 (3 votes) · LW · GW

I wonder about this. I agree that fewer people will read it, but it's not clear that that's bad -- they will presumably tend to be the people who were less interested in it. In general there's a lot of good content on the internet, and I view the scenario where everyone tries to maximise readership of their content as a defecting strategy. I'd rather give the best information so that people can decide whether to read it.

I'm really not sure about this, though -- maybe enough of those who pass would benefit from it that it's worth trying to maximise readership at least among people here.

Another reason not to post it is that it's 14 pages.

Comment by owencb on Human Capital Contracts · 2015-03-11T22:45:39.130Z · score: 0 (0 votes) · LW · GW

I somewhat agree with that point, but this would bring it out into the open as an explicit effect, which might be more controversial.

Of course anti-discrimination legislation might mean that the contracts on offer were only allowed to depend on certain parameters.

Comment by owencb on Detecting agents and subagents · 2015-03-11T19:18:12.227Z · score: 4 (4 votes) · LW · GW

I think your definition is really a definition of powerful things (which is of course extremely relevant!).

I'd had some incomplete thoughts in this direction. I'd taken a slightly different tack to you. I'll paste the relevant notes in.

Descriptions (or models) of systems vary in their complexity and their accuracy. Descriptions which are simpler and more accurate are preferred. Good descriptions are those which are unusually accurate for their level of complexity. For example ‘spherical’ is a good description of the shape of the earth, because it’s much more accurate than other descriptions of that length.

Often we want to describe subsystems. To work out how good such descriptions are we can ask how good the implied description of the whole system is, if we add a perfect description of the rest of the system.

Definition: The agency of a subsystem is the degree to which good models of that system predict its behaviour in terms of high-level effects on the world around it.

Note this definition is not that precise: it replaces a difficult notion (agency) with several other imprecise notions (degree to which; good models of that system; high-level effects). My suggestion is that while still awkward, these are more tractable than 'agent'. I shy away from giving explicit forms, but I think this should generally be possible and indeed I could give guesses in several cases; at the moment, though, questions about precise functional forms seem a distraction from the architecture. Also note that this definition is continuous rather than binary.

Proposition: Very simple systems cannot have high degrees of agency. This is because if the system in its entirety admits a short description, you can’t do much better by appealing to motivation.

Observation: Some subsystems may have high agency just with respect to a limited set of environments. A chess-playing program has high agency when attached to a game of chess (and we care about the outcome), and low agency otherwise. A subsystem of an AI may have high agency when properly embedded in the AI, and low agency if cut off from its tools and levers.

Comment: this definition picks out agents, but also picks out powerful agents. Giving someone an army increases their agency. I'm not sure whether this is a desirable feature. If we wanted to abstract away from that, we could do something like:

Define power: the degree to which a subsystem has large effects on systems it is embedded in.

Define relative agency = agency - power
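
As a sketch of how the last two definitions might be written out (my own notation; the original notes deliberately stop short of giving explicit forms):

$$A_{\mathrm{rel}}(S) \;=\; A(S) - P(S),$$

where $A(S)$ is the agency of subsystem $S$ (how well good models of $S$ predict its behaviour via high-level effects) and $P(S)$ is its power (how large $S$'s effects on the systems it is embedded in are). Giving someone an army raises $P(S)$ roughly as much as it raises $A(S)$, so on this accounting their relative agency is roughly unchanged.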

Comment by owencb on Counterfactual trade · 2015-03-10T18:28:54.222Z · score: 0 (0 votes) · LW · GW

The first version isn't inside your own mind.

If you think there is a large multiverse, then there are many worlds including people very much like you in a variety of situations (this is a sense of 'counterfactual' which isn't all in the mind). Suppose that you care about people who are very similar to you. Then you would like to trade with real entities in these branches, when they are able to affect something you care about. Of course any trade with them will be acausal.

In general it's very hard to predict the relative likelihoods of different worlds, and the likelihood of agents in them predicting the existence of your world. This provides a barrier to acausal trade. Salient counterfactuals (in the 'in the mind' sense) give you a relatively easy way of reasoning about a slice of worlds you care about, including the fact that your putative trade partner also has a relatively easy way of reasoning about your world. This helps to enable trade between these branches.

Comment by owencb on Counterfactual trade · 2015-03-10T12:08:19.124Z · score: 0 (0 votes) · LW · GW

The direct interpretation is that "they" are people elsewhere in a large multiverse. That they could be pictured as a figment of imagination gives the agent evidence about their existence.

The instrumental interpretation is that one acts as though trading with the figment of one's imagination, as a method of trade with other real people (who also act this way), because it is computationally tractable and tends to produce better outcomes all-round.

Comment by owencb on Human Capital Contracts · 2015-03-10T10:50:01.882Z · score: 2 (2 votes) · LW · GW

I think this is an interesting idea with some promise. I think the details are going to matter a lot.

Note that the UK university tuition system has moved in this direction. It has the risk reduction element, but it doesn't have the competitive market and showing information element. I believe the Economist ran an article supporting moves in this direction for college tuition a few months ago (emphasising the benefits it would bring in encouraging colleges to teach the things that would help increase their students' earnings), but I can't find it now.

It might have difficulty making it past social outrage at the idea of trading in people's lives. It might also run into opposition after it started, if the contracts offered depended on things like the race and socio-economic background of the applicant.

Comment by owencb on Counterfactual trade · 2015-03-09T22:46:24.752Z · score: 0 (0 votes) · LW · GW

Note that you can potentially trade with counterfactuals that aren't strongly symmetric. You can trade with a counterfactual where the person who is your slave is in a position of power over you, even if that's not an owner/slave relationship.

Comment by owencb on Counterfactual trade · 2015-03-09T15:26:38.359Z · score: 3 (3 votes) · LW · GW

I agree that not everyone will be interested in engaging in counterfactual trade. I gestured towards some reasons why you might be:

Agents might engage in counterfactual trade either because they do care about the agents in the counterfactuals (at least seems plausible for some beliefs about a large multiverse), or because it’s instrumentally useful as a tractable decision rule which works as a better approximation to what they’d ideally like to do than similarly tractable versions.