Posts

Urgent & important: How (not) to do your to-do list 2019-02-01T17:44:34.573Z · score: 30 (17 votes)
Rationality of demonstrating & voting 2018-11-07T00:09:44.239Z · score: 24 (7 votes)

Comments

Comment by bfinn on April Fools: Announcing LessWrong 3.0 – Now in VR! · 2020-04-02T23:27:33.731Z · score: 6 (4 votes) · LW · GW

Despite all the obvious signs and the date, it took me a while (well, a couple of minutes) to figure out this was entirely an April Fool's Day joke.

Comment by bfinn on Urgent & important: How (not) to do your to-do list · 2020-03-09T18:34:15.483Z · score: 1 (1 votes) · LW · GW

I'd just caution (in case you hadn't noticed!) that 'Must Do Immediately' mixes importance and timing (cf 'urgent'), as does 'Someday/Paused'. Better to keep them separate if you can.

Comment by bfinn on Urgent & important: How (not) to do your to-do list · 2020-03-09T18:30:54.748Z · score: 1 (1 votes) · LW · GW

Apologies for the slow reply.

Indeed, this is a problem - not so much for everyday to-do lists as for formal prioritisation.

So for choosing features I'd recommend doing a cost-benefit analysis; even a quick, rough one is far better than relying on gut feel to assign priorities. It so happens I've given a lecture all about how to do this for software features - see here:

https://www.youtube.com/watch?v=LUCtNJYcbxQ&t=0m1s

The slides (to view alongside it) are here: http://www.benfinn.uk/uploads/5/3/1/0/53108699/how_to_choose_features.pdf
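
For a flavour of what such an analysis produces, here's a minimal sketch in Python, ranking features by a rough benefit/cost ratio. The feature names and numbers are hypothetical placeholders; the full method is in the lecture above.

```python
# Minimal sketch: rank candidate features by rough benefit/cost ratio.
# All feature names and numbers are hypothetical placeholders.
features = {
    # name: (estimated benefit, estimated cost), in arbitrary units
    "offline sync":  (8, 5),
    "export to PDF": (5, 2),
    "dark mode":     (3, 1),
}

ranked = sorted(features.items(),
                key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (benefit, cost) in ranked:
    print(f"{name}: benefit/cost = {benefit / cost:.1f}")
```

Even a crude ranking like this makes the trade-offs explicit in a way gut feel doesn't.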

Comment by bfinn on My attempt to explain Looking, insight meditation, and enlightenment in non-mysterious terms · 2019-12-11T18:15:03.667Z · score: 1 (1 votes) · LW · GW

On a small point, maybe it would be helpful to use a more natural term than 'defusion', e.g. 'detachment' (if that expresses it clearly), or perhaps something like 'objectivity'.

It's better to avoid the confusion of introducing a new technical term if something can be expressed just as well with a familiar one.

Comment by bfinn on Contra double crux · 2019-12-09T17:04:33.596Z · score: 1 (1 votes) · LW · GW

This is an interesting topic and post. My thoughts follow from the God exists / priors bit (and apologies if this is an obvious point, or dealt with elsewhere - there are too many long comments below to read more than cursorily!):

Many deeply-held beliefs - particularly broadly ideological ones (e.g. theological, ethical, or political) - are held emotionally rather than rationally, and not really debated in a search for the truth, but to proclaim one's own beliefs, and perhaps in the vain hope of converting others.

So any apparently strong counter-evidence or counter-arguments are met with fall-back arguments, or so-called 'saving hypotheses' (where special reasons are invoked for why God didn't answer your entirely justified prayer). Savvy arguers will have an endless supply of these, including perhaps some so general that they can escape all attack (e.g. that God deliberately evades all attempts at testing). Unsavvy arguers will run out of responses, but still won't be convinced, and will think there is some valid response that they just happen not to know. (I've even heard this used by one church as an official ultimate response to the problem of evil: 'we don't know why God allows evil, but he does (so there must be a good reason we just don't know about)'.)

That is, the double-crux model that evidence (e.g. the universe) comes first and beliefs follow from it is reversed in these cases. The beliefs come first, and any supporting evidence and reasoning are merely used to justify the beliefs to others. (Counter-evidence and counter-arguments are ignored.) Gut feel is all that counts. So there aren't really cruxes to be had.

I don't think these are very special cases; probably quite a wide variety of topics are treated like this by many people. E.g. a lot of 'debates' I see on Facebook are of this kind; they lead nowhere, no-one ever changes their mind, and they usually turn unpleasant quickly. The problem isn't the debating technique, but the nature of the beliefs.

Comment by bfinn on Is Clickbait Destroying Our General Intelligence? · 2019-12-02T20:19:04.609Z · score: 8 (4 votes) · LW · GW

Re your addendum, to make an almost-obvious point, over-optimizing producing worse results is what large parts of modern life are all about; typically over-optimizing on evolved behaviours. Fat/sugar, porn, watching TV (as a substitute for real life), gambling (risk-taking to seek reward), consumerism and indeed excess money-seeking (accumulating unnecessary resources), etc. The bad results often take the form of addictions.

Though some such things are arguably harmless (e.g. professional sport - building unnecessary muscles/abilities full-time to win a pointless status contest).

Comment by bfinn on Is Clickbait Destroying Our General Intelligence? · 2019-12-02T20:04:52.411Z · score: 1 (1 votes) · LW · GW

I reckon a bit of both - viz.:

(a) The Internet (and TV before it) makes it in platforms' interests, via ad revenue, to produce clickbait (soaps/game shows), because humans are interest-seekers more than truth-seekers. This phenomenon is aka 'dumbing down'. And also:

(b) the Internet enables all consumers to broadcast their own stuff regardless of truth/quality. This is another kind of dumbing down; though note that TV didn't do this, which makes clear it's a distinct kind.

Comment by bfinn on The Correct Contrarian Cluster · 2019-11-30T22:57:26.594Z · score: 3 (2 votes) · LW · GW

The alignment problem is arguably another example, like my above response re quantum physics, of a field spilling over into philosophy, such that even a strong amateur philosopher can point things out that the AI professionals hadn't thought through. I.e. it shows that AI alignment is an interdisciplinary topic which (I assume) went beyond existing mainstream AI.

Comment by bfinn on The Correct Contrarian Cluster · 2019-11-30T22:49:15.961Z · score: 3 (2 votes) · LW · GW

First, thanks for your comments on my comments, which I thought no-one would read on such an old article!

Re your quantum physics point, with unusual topics like this that overlap with philosophy (specifically metaphysics), it is true that physicists can be out of their depth on that part of it, and so someone with a strong understanding of metaphysics (even if not a professional philosopher as such) can point out errors in the physicists' metaphysics. That said, saying X is clearly wrong (due to faulty metaphysics) is a weaker claim than that Y is clearly right, particularly if there are many competing views. (As there are AFAIK even in the philosophy of QM.) Just as a professional physicist can't be certain about getting the metaphysics bit of QM right, even a professional philosopher couldn't be certain about the physics bit of it; not certain enough to claim a slam-dunk. So without going into the specifics of the case (which I'm not qualified to do) it still seems like an overreach.

Also, more generally, I assume interdisciplinary topics like this (for which a highly knowledgeable amateur could spot flaws in the reasoning of someone who's a professional in one discipline but not the other) are the exception rather than the rule.

Re the economics case, well, for all I know, EY may well have been right in this case (and for the right reasons), but if so then it's just a rare example of an amateur who has a very high professional-level understanding of a particular topic (though presumably not of various other parts of economics). I.e. this is an exception.

That said, and without going into the fine details of the case, the professionals here presumably include the top macroeconomists in Japan. Is it really plausible that EY understands the relevant economics and knows more relevant information than them? (E.g. they may well have considered all kinds of facts & figures that aren't public or at least known to EY.) Which is presumably where the issue of other biases/influences on them would come in; and while I accept that there could be personal/political biases/reasons for doing the economically wrong thing, this can be too easy a way of dismissing expert opinion.

So I'd still put my money on the professional vs the amateur, however persuasive the latter's arguments might seem to me. And again, the fact that the Bank of Japan's decision turned out badly may just show that economics is an inexact science, in which correct bets can turn out badly and incorrect bets turn out well.

One other exception I'd like to add to my original comment: it is certainly true that a highly expert professional in a field can be very inexpert in topics that are close to, but not within, their own specialism. (I know this in my own case, and have observed it in others, e.g. lawyers: a corporate lawyer may have only a sketchy understanding of IP law - though they are well aware of this.)

Comment by bfinn on The Correct Contrarian Cluster · 2019-11-28T12:49:21.120Z · score: 0 (3 votes) · LW · GW

A decade late to the party, I'd like to join those skeptical of EY's use of many-worlds as a slam-dunk test of contrarian correctness. Without going into the physics (for which I'm unqualified), I have to make the obvious general objection that it is sophomoric for an amateur in an intellectual field - even an extremely intelligent and knowledgeable one - to claim a better understanding than those who have spent years studying it professionally. It is of course possible for an amateur to have an insight professionals have missed, but very rare.

I had a similar feeling on reading EY's Inadequate Equilibria, where I was far from convinced by his example that an amateur can adjudicate between an economics blogger and central bankers and tell who is right. (EY's argument that the central bankers may have perverse incentives to give a dishonest answer is not that strong, since they may give an honest answer anyway, and the fact that with 20-20 hindsight it might look like they were wrong just shows that economics is an inexact science.) The economics blogger may make points that seem highly plausible and convincing to an amateur, but then, one-sided arguments often do.

Back to physics, any amateur who says "many-worlds is just obvious if you understand it, so those who say otherwise are obviously wrong" is claiming a better understanding than many professionals in the field; again backed with allegations of perverse incentives. Though the latter carry some weight, I'd put my money on the amateur just being overconfident, and having missed something.

If anything I'd judge people on the sophistication of their reasons rather than the opinion itself. E.g. I'd take more notice of someone who had a sophisticated reason for denying that 1 + 1 = 2 than someone who said 'it's just obvious, and anyone who says otherwise is an idiot'.

(I for one have doubts that 1 + 1 = 2; the most I'd be prepared to say is that 1 + 1 often equals 2. And I'm in good company here - e.g. Wittgenstein had grave doubts that simple counting and addition (and indeed any following of rules) are determinate processes with definite results, something which he discussed in seminars with Alan Turing among other students of his.)

The one kind of case in which I'd prefer the factual opinion of a sophisticated amateur to a professional is in fields which don't involve enough intellectual rigour. For example I'd rather believe an amateur with an advanced understanding of evolutionary psychology than some gender studies professors to give a correct explanation of certain social phenomena; not just because the professors may well have an ideological axe to grind, but also because they may lack the scientific rigour necessary to understand the subtleties of causation and statistics.

Comment by bfinn on Competent Elites · 2019-11-19T16:04:56.999Z · score: -1 (2 votes) · LW · GW

AFAIK the first two aren't correlated with intelligence. Cf. geeks, who stereotypically lack people skills.

Comment by bfinn on Some disjunctive reasons for urgency on AI risk · 2019-02-18T15:35:11.813Z · score: 3 (2 votes) · LW · GW

FWIW another reason, somewhat similar to the low hanging fruit point, is that because the remaining problems are increasingly specialized, they require more years' training before you can tackle them. I.e. not just harder to solve once you've started, but it takes longer for someone to get to the point where they can even start.

Also, I wonder if the increasing specialization means there are more problems to solve (albeit ever more niche), so people are being spread thinner among them. (Though conversely there are more people in the world, and many more scientists, than a century or two ago.)

Comment by bfinn on Some disjunctive reasons for urgency on AI risk · 2019-02-18T15:14:47.718Z · score: 10 (3 votes) · LW · GW

In software development, a perhaps relevant kind of problem solving, extra resources in the form of more programmers working on the same project don't speed things up much. My guesstimate is output = time x log(programmers). I assume the main reason is that there's a limit to the extent to which you can divide a project into independent parallel programming tasks. (Cf 9 women can't make a baby in 1 month.)

Except that if the people are working in independent smaller teams, each trying to crack the same problem, and *if* the solution requires a single breakthrough (or a few?) which can be made by a small team (e.g. public key encryption, as opposed to landing a man on the moon), then presumably the chance of success is roughly proportional to the number of teams (at least while that chance remains small), because each team has an independent probability of making the breakthrough. And it seems plausible that solving AI threats might be more like this.
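
To make the contrast concrete, here's a toy sketch of the two regimes. The log model is just my guesstimate above, and the per-team breakthrough probability is an assumed illustrative number, not an estimate.

```python
import math

def single_project_output(programmers: int, time: float = 1.0) -> float:
    # Guesstimate above: output = time x log(programmers).
    # (Rough comparison only; the model breaks down at 1 programmer.)
    return time * math.log(programmers)

def p_breakthrough(teams: int, p_per_team: float = 0.05) -> float:
    # Chance at least one of n independent teams makes the breakthrough:
    # 1 - (1 - p)^n, roughly proportional to n while n * p is small.
    return 1 - (1 - p_per_team) ** teams

for n in (1, 2, 10, 100):
    print(n, round(single_project_output(n), 2), round(p_breakthrough(n), 3))
```

Note how the breakthrough probability saturates at 100 teams (1 - 0.95^100 ≈ 0.99), which is why 'proportional' only holds while the overall chance of success is small.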

Comment by bfinn on Some Thoughts on Metaphilosophy · 2019-02-12T17:01:06.922Z · score: 3 (2 votes) · LW · GW

But philosophers are good at proposing answers - they all do that, usually just after identifying a flaw in an existing proposal.

What they're not good at is convincing everyone else that their solution is the right one. (And presumably this is because multiple solutions are plausible. And maybe that's because of the nature of proof - it's impossible to prove something definitively, and disproving typically requires a counterexample, which may be hard to find.)

I'm not convinced philosophy is much less good at finding actual answers than say physics. It's not as if physics is completely solved, or even particularly stable. Perhaps its most promising period of stability was specifically the laws of motion & gravity after Newton - though for less than two centuries. Physics seems better than philosophy at forming a temporary consensus; but that's no use (and indeed is counterproductive) unless the solution is actually right.

Cf a rare example of consensus in philosophy: knowledge was 'solved' for 2300 years with the theory that it's a 'justified true belief' - until Gettier thought of counterexamples.

Comment by bfinn on Urgent & important: How (not) to do your to-do list · 2019-02-10T23:13:08.580Z · score: 2 (2 votes) · LW · GW

Thanks - yes I think there is a case for having a 'Shouldn't' list. As you imply, it should only be for things you know are useless/harmful, not for things you've merely decided not to do because they are low importance (e.g. paint the bathroom). Hence 'shouldn't do' not merely 'don't do'.

Comment by bfinn on Urgent & important: How (not) to do your to-do list · 2019-02-02T20:11:49.571Z · score: 2 (2 votes) · LW · GW

Sometimes you can delegate things to your boss - e.g. by declining work he tries to delegate to you (say "I'm too busy").

Comment by bfinn on Urgent & important: How (not) to do your to-do list · 2019-02-02T11:24:03.802Z · score: 1 (1 votes) · LW · GW

Thanks - glad you like it. I don't know how the Eisenhower Box is usually taught, but from references to it online (e.g. in blogs), people don't seem to question its validity. But in practice they can't be following it that literally: e.g. they won't be doing all the things it tells them to do, or delegating all the things it tells them to delegate, etc. So I suppose they must be treating it as just a rough-and-ready guide.

Comment by bfinn on Editor Mini-Guide · 2019-02-01T16:38:33.362Z · score: 2 (2 votes) · LW · GW

Looks like there is, but they must be LessWrong members.

Comment by bfinn on Jobs Inside the API · 2018-11-30T15:57:57.141Z · score: 1 (1 votes) · LW · GW

"These agents are nothing but a stupid interface layer between me and the flight management system."

I suggest a possible term for this is MUI: Meat User Interface. The customer interacts with the MUI, and the MUI interacts with the GUI.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-19T15:07:12.974Z · score: 1 (1 votes) · LW · GW

Yes I follow your argument, though I'm a bit doubtful about a result that produces a large difference between utility function and moral credit.

Re your Supreme Court example (and I agree this is a clearer way of thinking about it), I don't quite follow the argument. It's true that if the other justices had voted differently, more of them would have had to vote differently ('flip') had you done so, but as it's a given that you knew how everyone else was going to vote, flipping is ruled out - their votes are set in stone.

And re 'still each justice's preference... matters', I wasn't clear if this is the same point or a separate point - i.e. a signalling or similar argument that the size of the majority matters, e.g. politically.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-13T11:23:00.455Z · score: 0 (2 votes) · LW · GW

A little bit of altruism still seems to make it rational even if you care almost entirely about yourself - see the example calculations.

I used to think that making voting mandatory was a good solution, but nowadays I think it's a draconian measure. What if you disapprove, for example, of the particular voting system (First Past the Post in the UK/US)? Then forcing you to comply with it, perhaps only symbolically (as you can still dissent in other ways, like spoiling your ballot paper - unless that will be criminalized too), is a waste of everyone's time.

Similarly if you don't want to vote because you are indifferent between the candidates, or think you don't know enough about the issues to choose a candidate, etc.

Something somewhat similar to, but less draconian than, compulsory voting would be to pay people to vote, e.g. £5 / $5 in cash or vouchers as you exit the polling station. Which would also somewhat correct the current skew in turnout - poorer people are currently less likely to vote.

Comment by bfinn on Productivity: Instrumental Rationality · 2018-11-13T10:26:54.905Z · score: 1 (1 votes) · LW · GW

Having at least a plan for when to work, and being strict about that, works for me. I set alarms on my phone to work in 1 hour focussed bursts, with 15 minute breaks in between, all morning and late afternoon - it seems most people do their best focussed work in the morning; there's also that famous violin/piano student research which indicates that the best students also practice late afternoon. I reserve early/mid afternoon for light work (admin etc.)

In addition, I suggest you have a general plan for which projects to work on during a week & month, and make a daily more specific (though not necessarily detailed) plan first thing in the morning, or (better) at the end of the work day for the next day.

Comment by bfinn on Productivity: Instrumental Rationality · 2018-11-11T20:56:27.583Z · score: 1 (1 votes) · LW · GW

Yes, I've been tracking my productivity daily for over 6 years. I do it using a simple iPhone app called ATracker, which lets you define projects & categories and hit a button whenever you start/stop them.

I use about a dozen categories (for different types of work & broad types of leisure, also broad locations). Every week or two I export the data into a spreadsheet and produce some pretty charts and also many metrics, e.g. about how my time usage matches up to various targets.

It's kind of useful but I'm not that rigorous in keeping to the targets. Nonetheless if I start getting lax, then after a few days or weeks I can't pretend it's not happening, and the data helps nudge me back into being more productive.

By the way, I think you're being overly ambitious aiming at 12 hours of proper deep work per day. I think it's very hard to average more than about 6 hours per day over long periods.

If you start doing a similar kind of tracking, I'd be happy to share with you the kinds of charts, metrics etc. I produce, some of which aren't obvious.

Comment by bfinn on Help Me Refactor Myself I am Lost · 2018-11-09T11:14:33.048Z · score: 2 (2 votes) · LW · GW

Yes, I found my thought processes improved dramatically recently when I stopped listening to the radio after waking in the morning, and in the shower. I now have excellent thinking & ideas at that time of day. Silence and no distractions are golden. (No wonder so many people have good ideas in the shower.)

I also recommend having a notepad by your bed. I've done this for years. Sometimes ideas (or things I forgot to do) occur to me shortly before going to sleep, or occasionally when waking in the night, and I write them down in the dark. It gets them out of your head, which helps you sleep too.

Comment by bfinn on Subsystem Alignment · 2018-11-09T11:05:57.175Z · score: 1 (1 votes) · LW · GW

Thanks, I'll read those with interest.

I didn't think it likely that business has solved any of these problems, but I just wonder if it might include particular failures not found in other fields that help point towards a solution; or even if it has found one or two pieces of a solution.

Business may at least provide some instructive examples of how things go wrong. Because business involves lots of people & motives & cooperation & conflict, things which I suspect humans are particularly good at thinking & reasoning about (as opposed to more abstract things).

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-09T10:43:48.762Z · score: 1 (1 votes) · LW · GW

I.e. you can't tell how effective a president will be from their party's policies, because sometimes their most effective actions are following their opponents' policies.

Yes, could be. It's in line with the Putanumonit arguments that you just can't tell which party will be better for the country.

I can't think of particular instances of this in the UK, so I don't know if this is more of a US thing. What quite often happens in the UK (particularly since Tony Blair) is parties stealing each other's policies, sometimes even in stronger form than the other party. But presumably that's just them trying to tempt voters across from the other side with occasional juicy little morsels - i.e. both parties converging on the median voter. [ADDED] Though this is similar to your point that the other party may implement your party's policies, perhaps more effectively, which makes it harder to predict which party would run the country better.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-09T10:28:02.946Z · score: 2 (2 votes) · LW · GW

Yes, interesting points. I haven't really given any thought to voting as a reward/punishment, but many voters do this. Though of course it's mixed up with forward-looking voting, since (for many people) you vote against a politician who did something bad so that they won't be around to do more bad things.

And politicians anticipate punishment-voting as a deterrent to them doing bad things, since there isn't much other deterrent (except the law).

Also an interesting point re voting as reciprocation to similar voters as a kind of solidarity group. (Parties are themselves solidarity groups, but so of course are special interest groups and other supporters of particular policies.)

I'm not sure whether or how all this affects the calculus. Eliezer wrote an article on voting a while back in which, if I recall, his line was something like 'it's all too complicated to model, so just stick to simple reasoning'.

Re your pentobarbital example, this could be something where the 0.7 cents direct effect on you is bigger - though it would indeed have to be something approaching a $1 billion effect to count (since the expected benefit to you is this / 3 million, in the UK). Though that said almost all issues like this affect quite a few other people too, so altruism makes it worthwhile anyway.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-09T10:07:27.416Z · score: 1 (1 votes) · LW · GW

I suspect that the chances of a 3rd party winning are orders of magnitude lower than a 1st or 2nd, so the expected value from you having the deciding vote would be too small. But in terms of policy influence, if the 3rd party does unusually well (without winning), I agree that can be significant. Indeed I recall an example of this happening in the UK in the 1990s, when in one national election the Green party (then the 4th or 5th party) did unexpectedly well, albeit still only getting a few % of the vote, which immediately made the major parties start saying how important the environment was and announcing new policies.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-08T14:45:26.786Z · score: 2 (2 votes) · LW · GW

Yes, but since on my numbers the benefits of voting are so huge, a tiny difference between parties can still justify it. E.g. near the end of the article I calculate that in a UK general election, if the difference between the two main parties equals 10% of government spending (in benefit to the country, not necessarily actual spend), that equals 7% of Brexit or about $7,000 to a marginal voter.

So even if it's only worth 0.1% of government spending (e.g. a small confidence that one party will make a small execution improvement on a few policies), that's $70 - enough to justify voting.
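
The scaling here is just linear, as a quick sketch makes explicit. The $7,000-at-10% figure is from the paragraph above; everything else follows from proportionality.

```python
# Expected benefit to a marginal voter, scaling the $7,000-at-10%-of-
# government-spending figure from the comment above linearly.
VALUE_AT_10_PCT = 7000  # $, when the party difference = 10% of govt spending

def expected_benefit(party_difference_pct: float) -> float:
    return VALUE_AT_10_PCT * (party_difference_pct / 10)

print(expected_benefit(10))   # 7000.0
print(expected_benefit(0.1))  # 70.0 - still enough to justify voting
```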

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-07T23:04:45.011Z · score: 1 (1 votes) · LW · GW

[Response substantially edited:]

If I understand you right, you're saying that if Remain were to happen then Leavers would incur a large actual loss (relative to the Leave scenario), because they reckon the benefits of leaving in terms of social cohesion, security etc. will not occur.

Perhaps those aren't the best examples, as arguably those are matters of fact, so Leavers could find out they were wrong if it turns out there is no loss in social cohesion & security by remaining; so they wouldn't necessarily lose utils. A better example might be national self-determination, which a Leave supporter would value come what may, and a Remain supporter might put little value on. That is, Leavers aren't merely predicting that leaving the EU would make things better for the UK, they are expressing a (non-falsifiable) preference for being out of the EU.

I hadn't thought of that, and it could be so - or perhaps more likely it's a mixture of prediction and preference. In which case Leavers would only lose some utils, still leaving tens of thousands of $ per extra Remain voter. (And still plenty to justify voting, even after major shrinkage for the uncertainty that policies will turn out/be implemented as expected.)

Complicated by the fact that if Remain happens, Leave supporters would always feel things would have been better if Leave had happened, even if their predictions were unknowingly false, because they never get to try out & compare both scenarios. I.e. Leavers will never be satisfied if the UK remains, and Remainers will never be satisfied if the UK leaves, regardless of how the other possible world would have been. (Maybe that's your main point here.) But I reckon that dissatisfaction is small compared with the economic harm caused by leaving (if the median GDP predictions are true).

By the way, I'm not convinced voting is rational (hence I have never voted in my life), and believed that it wasn't, until the altruism calculation occurred to me a year or so ago. My current suspicion is about the validity of multiplying a very small probability by a very large benefit to get a justification; but I haven't yet read/thought of a strong argument against this.

(PS Ah, you're Jacob F - good to meet you! I enjoy your blog.)

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-07T16:16:04.735Z · score: 1 (1 votes) · LW · GW

I'm inclined to agree with you - the Twitter comparison doesn't make sense.

I think he half has a point when saying you can't tell which party will do a better job, inasmuch as there is an information asymmetry between government and opposition. In the UK anyway, opposition parties don't have access to the civil service who understand the detailed workings of government and have internal information, which means that opposition policies are somewhat speculative, and may have to be modified (or not) when they get into government. Particularly so on matters of national security, where it's almost impossible for a voter to assess whether a government policy (based on secret information) is better/worse than an opposition policy (based only on public information). (Aside from your personal preferences about these policies - e.g. if you have an absolute moral opposition to nuclear weapons, or something.)

But that doesn't apply to all policies, and in any case, the expected benefit of voting is so huge that you only need a minimal amount of information about which party is better (on some objective measure like GDP, or relative to your preferences). E.g. it's fine to be only 1% confident.

Comment by bfinn on Subsystem Alignment · 2018-11-07T13:06:06.721Z · score: 1 (1 votes) · LW · GW

Business practice and management studies might provide some useful examples, and possibly some useful insights, here. E.g. off the top of my head:

Fairly obviously, how shareholders try to align managers with their goals, and managers align their subordinates with hopefully the same or close enough goals (and so on down the management chain to the juniors). Incompetent/scheming middle managers with their own agendas (and who may do things like unintentionally or deliberately alter information passed via them up & down the management chain) are a common problem. As are incorrectly incentivized CEOs (not least because their incentive package is typically devised by a committee of fellow board directors, of whom only some may be shareholders).

Less obviously, recruitment as an example of searching for optimizers: how do shareholders find managers who are best able to optimize shareholders' interests, how do managers recruit subordinates, how do shareholders ensure managers are recruiting subordinates aligned with the shareholders' goals rather than some agenda of their own, how are recruiters themselves incentivized and recruited, are there relevant differences between internal & external recruiters (e.g. HR vs headhunters), etc.

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-07T09:24:06.104Z · score: 1 (1 votes) · LW · GW

Hmm, I see your point; but if each vote is independent, then given how all the other voters voted, my vote really does decide the election. E.g. if I go into the ballot box, what I write on my poll slip does not cause and is not caused by what's written on all the other slips (as I don't see them and they don't see mine).

How about this thought experiment: I am the very last person in the country to vote. Unknown to anyone, all the votes made before mine constitute a tie, so my vote will be the deciding vote. Then it really is the case that if I vote one way, 10,000 lives are saved, and the other way, none are. And it is also the case that, given how I voted, if my neighbour had voted the other way, he would have changed the outcome too. (Incidentally it seems only people who vote the same way as me have the power to decide the outcome, given how I voted.)

I do sense the counterfactual complications. Is your argument that the 10,000 lives saved should be apportioned among all the voters in the case of a tie-break, and hence it still isn't worth anyone's while voting? What is the argument for apportioning?

[ADDED:]

Here's a further hand-wavy argument:

You're saying that in the case of a tie-break, everyone who voted for the winning party each gets to save 10,000 lives (overcounting the benefit). But in a normal outcome with no tie-break, none of them do, even though 10,000 lives are still saved (undercounting the benefit). If we account differently, with only the final voter getting the 10,000-life benefit in the tie-break case, and all voters for the winning party (or all after a majority was reached?) sharing the 10,000 in the normal case, so that in every winning scenario the benefit adds up to exactly 10,000 lives (more intuitively), doesn't it all work out the same in terms of expected benefit per voter? (I wonder, without thinking/calculating further.)
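
One way to settle that would be a quick Monte Carlo sketch like the following, which computes the expected credit per voter under both accounting schemes. The electorate size, vote probability and trial count are illustrative assumptions, not numbers from the original discussion.

```python
import random

N = 101          # odd electorate, so there are no exact ties
LIVES = 10_000   # lives saved if party A wins
TRIALS = 200_000

total_1 = total_2 = 0.0
for _ in range(TRIALS):
    votes_a = sum(random.random() < 0.5 for _ in range(N))
    if votes_a <= N // 2:
        continue  # A loses: no lives saved under either scheme
    decisive = votes_a == N // 2 + 1  # win by one: every A-voter was decisive
    if decisive:
        # Scheme 1: in a one-vote win, every winning voter gets full credit;
        # in a comfortable win, nobody gets any.
        total_1 += LIVES * votes_a
    # Scheme 2: credit always sums to LIVES per win - the final voter gets
    # it all in a one-vote win, the winning voters share it otherwise.
    total_2 += LIVES

print("scheme 1, expected credit per voter:", total_1 / TRIALS / N)
print("scheme 2, expected credit per voter:", total_2 / TRIALS / N)
```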

Comment by bfinn on Rationality of demonstrating & voting · 2018-11-07T09:00:30.430Z · score: 2 (2 votes) · LW · GW

Yes, I didn't get into more detailed arguments about the pros/cons of voting & voting systems; the Put A Number On It post I linked to has a quite good discussion of these, and I didn't mention other reservations of my own (in particular I'm suspicious of multiplying very small probabilities by very large benefits).

But on your particular point, my brief thought is that not participating in a voting system doesn't make it change (though organizing a mass boycott of it could do). And on my estimated numbers, in the same way that even a tiny bit of altruism makes it worth voting, if you have even a tiny preference between the two lead parties, and even if it's very uncertain they will implement their policies, it is probably still worth voting to keep out the worse one.

Comment by bfinn on New Improved Lottery · 2018-07-14T11:56:47.133Z · score: 1 (1 votes) · LW · GW

I think there's a flaw in this reasoning. You're assuming that the harm from lotteries increases monotonically with the time spent dreaming about winning. The form of your argument is: "a huge amount of dreaming is harmful (because it stops you improving your life in more effective ways), therefore a small amount is harmful (i.e. worse than none)".

Non sequitur. A tablespoon of salt in your soup makes it taste terrible, therefore a pinch of salt makes it taste worse than no salt?

It may well be that spending $1 per week to buy 10 minutes of false but pleasant hope is the best use of that 10 minutes and $1, or at least, no worse than any other use you're likely to make of it. E.g. if you're taking that time and money out of your leisure budget, then you may well use it instead on smoking, or beer, or fries.

And if you instead allocate it to thinking about how to get a promotion, sure you could do that, but why not do both? (I.e. spend a different 10 minutes on your promotion.) So this is a false dilemma. People who play the lottery may exaggerate the probability of winning, but I doubt they make plans on the assumption they'll win - and it's not clear the dreaming displaces other attempts at self-improvement.