A Personal Rationality Wishlist 2019-08-27T03:40:00.669Z · score: 42 (25 votes)
Verification and Transparency 2019-08-08T01:50:00.935Z · score: 35 (15 votes)
DanielFilan's Shortform Feed 2019-03-25T23:32:38.314Z · score: 19 (5 votes)
Robin Hanson on Lumpiness of AI Services 2019-02-17T23:08:36.165Z · score: 16 (6 votes)
Test Cases for Impact Regularisation Methods 2019-02-06T21:50:00.760Z · score: 62 (18 votes)
Does freeze-dried mussel powder have good stuff that vegan diets don't? 2019-01-12T03:39:19.047Z · score: 17 (4 votes)
In what ways are holidays good? 2018-12-28T00:42:06.849Z · score: 22 (6 votes)
Kelly bettors 2018-11-13T00:40:01.074Z · score: 23 (7 votes)
Bottle Caps Aren't Optimisers 2018-08-31T18:30:01.108Z · score: 53 (21 votes)
Mechanistic Transparency for Machine Learning 2018-07-11T00:34:46.846Z · score: 55 (21 votes)
Research internship position at CHAI 2018-01-16T06:25:49.922Z · score: 25 (8 votes)
Insights from 'The Strategy of Conflict' 2018-01-04T05:05:43.091Z · score: 73 (27 votes)
Meetup : Canberra: Guilt 2015-07-27T09:39:18.923Z · score: 1 (2 votes)
Meetup : Canberra: The Efficient Market Hypothesis 2015-07-13T04:01:59.618Z · score: 1 (2 votes)
Meetup : Canberra: More Zendo! 2015-05-27T13:13:50.539Z · score: 1 (2 votes)
Meetup : Canberra: Deep Learning 2015-05-17T21:34:09.597Z · score: 1 (2 votes)
Meetup : Canberra: Putting Induction Into Practice 2015-04-28T14:40:55.876Z · score: 1 (2 votes)
Meetup : Canberra: Intro to Solomonoff induction 2015-04-19T10:58:17.933Z · score: 1 (2 votes)
Meetup : Canberra: A Sequence Post You Disagreed With + Discussion 2015-04-06T10:38:21.824Z · score: 1 (2 votes)
Meetup : Canberra HPMOR Wrap Party! 2015-03-08T22:56:53.578Z · score: 1 (2 votes)
Meetup : Canberra: Technology to help achieve goals 2015-02-17T09:37:41.334Z · score: 1 (2 votes)
Meetup : Canberra Less Wrong Meet Up - Favourite Sequence Post + Discussion 2015-02-05T05:49:29.620Z · score: 1 (2 votes)
Meetup : Canberra: the Hedonic Treadmill 2015-01-15T04:02:44.807Z · score: 1 (2 votes)
Meetup : Canberra: End of year party 2014-12-03T11:49:07.022Z · score: 1 (2 votes)
Meetup : Canberra: Liar's Dice! 2014-11-13T12:36:06.912Z · score: 1 (2 votes)
Meetup : Canberra: Econ 101 and its Discontents 2014-10-29T12:11:42.638Z · score: 1 (2 votes)
Meetup : Canberra: Would I Lie To You? 2014-10-15T13:44:23.453Z · score: 1 (2 votes)
Meetup : Canberra: Contrarianism 2014-10-02T11:53:37.350Z · score: 1 (2 votes)
Meetup : Canberra: More rationalist fun and games! 2014-09-15T01:47:58.425Z · score: 1 (2 votes)
Meetup : Canberra: Akrasia-busters! 2014-08-27T02:47:14.264Z · score: 1 (2 votes)
Meetup : Canberra: Cooking for LessWrongers 2014-08-13T14:12:54.548Z · score: 1 (2 votes)
Meetup : Canberra: Effective Altruism 2014-08-01T03:39:53.433Z · score: 1 (2 votes)
Meetup : Canberra: Intro to Anthropic Reasoning 2014-07-16T13:10:40.109Z · score: 1 (2 votes)
Meetup : Canberra: Paranoid Debating 2014-07-01T09:52:26.939Z · score: 1 (2 votes)
Meetup : Canberra: Many Worlds + Paranoid Debating 2014-06-17T13:44:22.361Z · score: 1 (2 votes)
Meetup : Canberra: Decision Theory 2014-05-26T14:44:31.621Z · score: 1 (2 votes)
[LINK] Scott Aaronson on Integrated Information Theory 2014-05-22T08:40:40.065Z · score: 22 (23 votes)
Meetup : Canberra: Rationalist Fun and Games! 2014-05-01T12:44:58.481Z · score: 0 (3 votes)
Meetup : Canberra: Life Hacks Part 2 2014-04-14T01:11:27.419Z · score: 0 (1 votes)
Meetup : Canberra Meetup: Life hacks part 1 2014-03-31T07:28:32.358Z · score: 0 (1 votes)
Meetup : Canberra: Meta-meetup + meditation 2014-03-07T01:04:58.151Z · score: 3 (4 votes)
Meetup : Second Canberra Meetup - Paranoid Debating 2014-02-19T04:00:42.751Z · score: 1 (2 votes)


Comment by danielfilan on Can this model grade a test without knowing the answers? · 2019-09-13T03:57:10.322Z · score: 9 (3 votes) · LW · GW

One example of the ability of the model: in the paper, the model is run on 120 responses to a quiz consisting of 60 Raven's Progressive Matrices questions, each question with 8 possible answers. As it happens, no responder got more than 50 questions right. The model correctly inferred the answers to 46 of the questions.

A key assumption in the model is that errors are random: so, in domains where you're only asking a small number of questions, and for most questions a priori you have reason to expect some wrong answers to be more common than the right one (e.g. "What's the capital of Canada/Australia/New Zealand"), I think this model would not work (although if there were enough other questions such that good estimates of responder ability could be made, that could ameliorate the problem). If I wanted to learn more, I would read this 2016 review paper of the general field.
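
To make the flavour of such models concrete, here's a toy sketch in the spirit of Dawid-Skene-style estimators (an illustrative reconstruction, not the paper's actual algorithm): alternately estimate the answer key by ability-weighted voting, and estimate each responder's ability by agreement with the current key.

```python
from collections import Counter

def infer_answers(responses, n_iter=10):
    """Jointly estimate an answer key and responder abilities from
    responses alone (no ground truth).

    responses: list of responders, each a list of answers (one per question).
    Returns (answer_key, abilities).
    """
    n_q = len(responses[0])
    abilities = [1.0] * len(responses)  # start with uniform abilities
    key = [None] * n_q
    for _ in range(n_iter):
        # Estimate each answer by ability-weighted vote.
        for q in range(n_q):
            votes = Counter()
            for r, resp in enumerate(responses):
                votes[resp[q]] += abilities[r]
            key[q] = votes.most_common(1)[0][0]
        # Estimate each responder's ability as agreement with the key.
        abilities = [
            sum(a == k for a, k in zip(resp, key)) / n_q
            for resp in responses
        ]
    return key, abilities

# Toy data: 4 responders, 3 questions; intended key is ['a', 'b', 'c'].
responses = [
    ['a', 'b', 'c'],   # strong responder
    ['a', 'b', 'c'],   # strong responder
    ['a', 'b', 'x'],   # misses question 3
    ['z', 'y', 'x'],   # answers at random
]
key, abilities = infer_answers(responses)
print(key)  # ['a', 'b', 'c']
```

The random-errors assumption shows up here directly: if wrong answers were correlated across responders (as with the capital-cities example), the weighted vote would lock onto the popular wrong answer.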

Comment by danielfilan on Open & Welcome Thread - September 2019 · 2019-09-10T20:19:19.774Z · score: 2 (1 votes) · LW · GW

I often find myself seeing a cool post, and then thinking that it would take too much time to read it now but that I don't want to forget it. I don't like browser-based solutions for this.

Comment by danielfilan on Open & Welcome Thread - September 2019 · 2019-09-10T20:18:28.008Z · score: 10 (4 votes) · LW · GW

Feature request: let me mark a post as 'to read', which should have it appear in my recommendations until I read it.

Comment by danielfilan on What Programming Language Characteristics Would Allow Provably Safe AI? · 2019-09-05T19:00:05.551Z · score: 6 (3 votes) · LW · GW

Here's a public GitHub repository for coda, the language he's been working on, with a bit written about it.

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-04T00:15:25.460Z · score: 15 (5 votes) · LW · GW

Update: I reread the post (between commenting that and now, as prep for another post currently in draft form). It is better than I remember, and I'm pretty proud of it.

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-03T16:05:44.767Z · score: 3 (2 votes) · LW · GW

If the human knows the logic of the random number generator that was used to initialize the parameters of the original network, they would have no problem manually running the same logic themselves.

Presumably the random seed is going to be big and complicated.

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-03T00:27:01.921Z · score: 5 (3 votes) · LW · GW

Ah, gotcha. I think this is a bit different to compressibility: if you formalise it as Kolmogorov complexity, then you can have a very compressible algorithm that in fact you can't compress given time limits, because it's too hard to figure out how to compress it. This seems more like 'de facto compressibility', which might be formalised using the speed prior or a variant.
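
For reference, the standard way to formalise this distinction is Levin's time-bounded complexity (a sketch of the textbook definitions; Schmidhuber's speed prior proper differs in its details):

```latex
K(x)  = \min_p \{\, |p| \;:\; U(p) = x \,\} \qquad \text{(plain Kolmogorov complexity)}
Kt(x) = \min_p \{\, |p| + \log_2 \mathrm{time}_U(p) \;:\; U(p) = x \,\} \qquad \text{(Levin complexity)}
```

Note that $Kt$ penalises the run time of the program that produces $x$; the difficulty of finding a short program in the first place is a further wrinkle that neither definition captures.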

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-03T00:18:46.443Z · score: 3 (2 votes) · LW · GW

One question remains: are these models simulatable? Strictly speaking, no. A human given the decision tree would still be able to get a rough idea of why the neural network was performing a particular decision. However, without the model weights, a human would still be forced to make an approximate inference rather than follow the decision procedure exactly. That's because after the training procedure, we can only extract a decision tree that approximates the neural network decisions, not extract a tree that perfectly simulates it.

Presumably if the extraction procedure is good enough, then the decision tree gets about as much accuracy as the neural network, and if inference times are similar, then you could just use the decision tree instead, and think of this as a neat way of training decision trees by using neural networks as an intermediate space where gradient descent works nicely.
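
That extraction-then-substitution idea can be illustrated with a minimal toy (a hypothetical sketch, not any published procedure): treat a fixed network as a black-box labeller, fit the best depth-1 "decision tree" (a single threshold) to its labels, and measure the fidelity of the match.

```python
import math

# Toy "network": a fixed 1-D two-layer net with hand-picked weights,
# used only as a black-box label source.
def net(x):
    # hidden layer: two ReLU units; output: sigmoid over their combination
    h1 = max(0.0, 2.0 * x - 1.0)
    h2 = max(0.0, 0.2 - x)
    z = 3.0 * h1 - 3.0 * h2 - 1.5
    return 1 / (1 + math.exp(-z))  # probability of class 1

def net_label(x):
    return int(net(x) >= 0.5)

# "Extraction": search for the single threshold (a depth-1 decision
# tree) that best matches the network's labels on a grid of inputs.
xs = [i / 100 for i in range(101)]
labels = [net_label(x) for x in xs]

def fidelity(threshold):
    return sum(int(x >= threshold) == y for x, y in zip(xs, labels)) / len(xs)

best = max(xs, key=fidelity)
print(best, fidelity(best))  # 0.75 1.0
```

Here the stump reaches fidelity 1.0 on the grid, so (as in the comment above) you could deploy the stump instead of the network; in realistic settings the fidelity gap is exactly the quantity to watch.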

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-03T00:08:54.423Z · score: 8 (5 votes) · LW · GW

In a more complex ad-hoc approach, we could instead design a way to extract a theory-simulatable algorithm that our model is implementing. In other words, given a neural network, we run some type of meta-algorithm that analyzes the neural network and spits out pseudocode which describes what the neural network uses to make decisions. As I understand, this is roughly what Daniel Filan writes about in Mechanistic Transparency for Machine Learning.

I endorse this as a description of how I currently think about mechanistic transparency, although I haven't reread the post (and imagine finding it somewhat painful to), so can't fully endorse your claim.

Comment by danielfilan on One Way to Think About ML Transparency · 2019-09-03T00:05:59.657Z · score: 3 (2 votes) · LW · GW

In theory simulatability, the human would not necessarily be able to simulate the algorithm perfectly, but they would still say that the algorithm is simulatable in their head, "given enough empty scratch paper and time." Therefore, MCTS is interpretable because a human could in theory sit down and work through an entire example on a piece of paper. It may take ages, but the human would eventually get it done; at least, that's the idea. However, we would not say that some black box ANN is interpretable, because even if the human had several hours to stare at the weight matrices, once they were no longer acquainted with the exact parameters of the model, they would have no clue as to why the ANN was making decisions.

I'm not sure what distinction you're drawing here - in both cases, you can simulate the algorithm in your head given enough scratch paper and time. To me, the natural distinction between the two is compressibility, not simulability, since all algorithms that can be written in standard programming languages can be simulated by a Turing machine, which can be simulated by a human with time and scratch paper.

Comment by danielfilan on [AN #62] Are adversarial examples caused by real but imperceptible features? · 2019-08-30T22:04:20.124Z · score: 2 (1 votes) · LW · GW

I'm sort of confused by the main point of that post. Is the idea that the robot can't stack blocks because of a physical limitation? If so, it seems like this is addressed by the first initial objection. Is it rather that the model space might not have the capacity to correctly imitate the human? I'd be somewhat surprised by this being a big issue, and at any rate it seems like you could use the Wasserstein metric as a cost function and get a desirable outcome. I guess we're instead imagining a problem where there's no great metric (e.g. text answers to questions)?

Comment by danielfilan on Open & Welcome Thread - August 2019 · 2019-08-30T21:41:17.799Z · score: 7 (3 votes) · LW · GW

A colleague notes:

  • an intro deep learning course will be useful even once you've taken the Coursera course
  • this textbook is mathematically oriented and good (although I can't vouch for that personally)
  • depth-first search from research agendas seems infeasible for someone without machine learning experience, with the exception of MIRI's agent foundations agenda

Comment by danielfilan on How to Make Billions of Dollars Reducing Loneliness · 2019-08-30T20:39:36.520Z · score: 8 (5 votes) · LW · GW

It takes less effort to rinse a dish before putting it in a dishwasher than it does to clean it by hand (in fact often you don't need to rinse it), and the machine beeps once your dishes are dry. These factors, plus the batch processing, make dishwashers less effortful per dish for me.

Comment by danielfilan on Test Cases for Impact Regularisation Methods · 2019-08-30T20:33:13.215Z · score: 2 (1 votes) · LW · GW

Another example is described in Stuart Armstrong's post about a bucket of water. Unlike the test cases in this post, it doesn't have an unambiguous answer independent of the task specification.

Comment by danielfilan on Open & Welcome Thread - August 2019 · 2019-08-30T03:17:16.311Z · score: 5 (3 votes) · LW · GW

My guess is that taking an ML Coursera course is the best next step (or perhaps an ML course taught at your university, if that's a viable option).

More speculatively, it might be a good idea to read a research agenda (e.g. Concrete Problems in AI Safety, the Embedded Agency Sequence), dig into sections that seem interesting, and figure out what you need to know to understand the content and the cited papers. But this probably won't work until you understand the basics of ML (for things like CPAIS) or mathematical logic and Bayesian decision theory (for things like the embedded agency sequence).

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-29T21:51:01.537Z · score: 3 (2 votes) · LW · GW

Sensible advice, although I'm more interested in the metaphorical case where this isn't possible (which is natural to me, since my actual room has curtains but no doors, partially because I'm not actually worried about housemate snooping).

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-29T21:49:44.402Z · score: 6 (3 votes) · LW · GW

A counterpoint to your first sentence:

The quality of roads is relevant, but not really the answer. Bicycles can be ridden on dirt roads or sidewalks (although the latter led to run-ins with pedestrians and made bicycles unpopular among the public at first). And historically, roads didn’t improve until after bicycles became common—indeed it seems that it was in part the cyclists who called for the improvement of roads.

From this post about why humanity waited so long for the bicycle. I particularly recommend the discussion of how long it took to invent pedals and gears.

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-29T21:46:11.574Z · score: 3 (2 votes) · LW · GW

At this juncture, it seems important to note that all examples I can think of took place on Facebook, where you can just end interactions like this without it being awkward.

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-29T21:45:13.758Z · score: 2 (1 votes) · LW · GW

I assume OP is taking the perspective of his friends, who are annoyed by this behavior, rather than the perspective of the anime-fans, who don't necessarily see anything wrong with the situation.

In the literal world, I'm an anime fan, but the situation seems basically futile: the people recommending anime seem like they're accomplishing nothing but generating frustration. More metaphorically, I'm mostly interested in how to prevent the behaviour either as somebody complaining about anime or as a third party, and secondarily interested in how to restrain myself from recommending anime.

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-27T21:53:20.948Z · score: 8 (7 votes) · LW · GW

I agree that many people do not understand how bicycles work, if that was your point. My claim was that it is possible to look at a bicycle and understand how it works, not that it was inevitable for everybody who interacts with a bicycle to do so. I think the prevalence of misunderstanding of bicycles is not strong evidence against my claim, since my guess is that most people who interact with bicycles don't spend time looking at them and trying to figure out how they work. If people looking at bicycles still couldn't reproduce them, that would be strong evidence against my claim, but as far as I can tell such failures were relatively uncommon.

[ETA: although I see how this undermines the idea that it only requires 'a little' thought, since that brings to mind thought that only takes a few seconds.]

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-27T20:02:20.463Z · score: 5 (3 votes) · LW · GW

Sorry, I meant the feelings that are more prevalent among more neurotic people, like "anxiety, worry, fear, anger, frustration, envy, jealousy, guilt, depressed mood, and loneliness" (list taken from Wikipedia).

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-27T20:00:04.428Z · score: 4 (6 votes) · LW · GW

Note that the amusingly high failure rate in that paper is in the condition where people were not looking at bicycles with their own eyes, and when they were, the vast majority of respondents did fine at the task.

Comment by danielfilan on A Personal Rationality Wishlist · 2019-08-27T05:57:12.545Z · score: 2 (1 votes) · LW · GW

I am pro "go back to sleep if you're tired enough to actually fall asleep again", although I will note that this is quite different from staying curled up under the covers – I enjoy that too but I am more sympathetic to Marcus here.

Huh - in my imagination, curling up under the covers and 'daydreaming' is like 75% of the way between wakeful alertness and sleep, and has many of the same functions.

Comment by danielfilan on Verification and Transparency · 2019-08-09T05:03:53.864Z · score: 3 (2 votes) · LW · GW

[Y]ou say that transparency and verification are the same thing

It's important to me to note that I only claimed that they are "sort of the same".

[Y]our examples only seem to support that transparency enables verification. Is that closer to what you were trying to say?

No, but you've picked up on a weakness in my exposition (or rather something that I just forgot to say). Verification also enables transparency: by verifying a large number of properties of a system, one provides a 'view' for a user to understand the system. Conversely, a transparency method can itself be thought of as verifying some properties of a system: for example, sharing the source code of a binary verifies that that source code compiles into the given binary, that the binary when executed will use such-and-such memory (if the source code is written in a language that makes that explicit), and so on. As such, one can think of both verification and transparency as providing artefacts that prove certain properties of systems, although they 'prove' these properties in somewhat different ways.

There is a tradeoff between verification&transparency and expressiveness.

Indeed, an important point - although in many cases, this is a plus, since sometimes precisely the thing you want to do is to make it impossible to create certain malformed systems (e.g. type systems that ensure that you never attempt to divide one string by another). As such, these methods work better when one has good reasons to rule out the class of objects that cannot be transparified/verified by them.

Comment by danielfilan on The AI Timelines Scam · 2019-07-12T07:07:53.432Z · score: 16 (7 votes) · LW · GW
  • Doesn't engage with the post's arguments.
  • I think that it's wrong to assume that the prior on 'short' vs 'long' timelines should be 50/50.
  • I think that it's wrong to just rely on a prior, when it seems like one could obtain relevant evidence.

Comment by danielfilan on DanielFilan's Shortform Feed · 2019-07-04T22:44:38.428Z · score: 32 (7 votes) · LW · GW

The Indian grammarian Pāṇini wanted to exactly specify what Sanskrit grammar was in the shortest possible length. As a result, he did some crazy stuff:

Pāṇini's theory of morphological analysis was more advanced than any equivalent Western theory before the 20th century. His treatise is generative and descriptive, uses metalanguage and meta-rules, and has been compared to the Turing machine wherein the logical structure of any computing device has been reduced to its essentials using an idealized mathematical model.

There are two surprising facts about this:

  1. His grammar was written in the 4th century BC.
  2. People then failed to build on this machinery to do things like formalise the foundations of mathematics, formalise a bunch of linguistics, or even do the same thing for languages other than Sanskrit, in a way that is preserved in the historical record.

I've been obsessing about this for the last few days.

Comment by danielfilan on steven0461's Shortform Feed · 2019-06-30T18:01:48.080Z · score: 4 (2 votes) · LW · GW

Maybe Good Judgement Open? I don't know how they actually get their probabilities though.

Comment by danielfilan on Is there a guide to 'Problems that are too fast to Google'? · 2019-06-18T07:32:04.565Z · score: 4 (3 votes) · LW · GW

First aid seems very close to this category, consisting of immediate assistance to an injured person. The major differences are that (a) it's specific to physical injuries and (b) it involves things one person can do to help another, rather than things one should do to help oneself.

I've taken first aid training in Berkeley, California, and the guide to CPR was helpful, although the rest seemed to be mostly about meeting legal requirements and not that effective in actually teaching stuff (as evidenced by me not remembering it).

Comment by danielfilan on Is there a guide to 'Problems that are too fast to Google'? · 2019-06-18T07:26:52.043Z · score: 2 (1 votes) · LW · GW

Judo also recommends slapping the ground - see e.g. this link.

Comment by danielfilan on Conditions for Mesa-Optimization · 2019-06-05T20:49:57.630Z · score: 8 (5 votes) · LW · GW

To see this, we can think of optimization power as being measured in terms of the number of times the optimizer is able to divide the search space in half—that is, the number of bits of information provided.

This is pretty confusing for me: If I'm doing gradient descent, how many times am I halving the entire search space? (although I appreciate that it's hard to come up with a better measure of optimisation)
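
For what it's worth, the measure itself is easy to operationalise for a finite search space (a sketch of the standard "bits of optimisation" calculation implied by the quoted definition, nothing specific to gradient descent): count what fraction of the space scores at least as well as the optimiser's output.

```python
import math

def bits_of_optimisation(search_space, utility, result):
    """Bits = log2(|space| / |{x : utility(x) >= utility(result)}|)."""
    at_least_as_good = sum(
        1 for x in search_space if utility(x) >= utility(result)
    )
    return math.log2(len(search_space) / at_least_as_good)

space = range(1024)
utility = lambda x: x  # bigger is better
print(bits_of_optimisation(space, utility, 1023))  # best point: 10.0 bits
print(bits_of_optimisation(space, utility, 512))   # top half: 1.0 bit
```

This makes the confusion concrete: for gradient descent it's unclear both what the base search space is and which level set of the loss to count.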

Comment by danielfilan on Conditions for Mesa-Optimization · 2019-06-05T20:47:47.566Z · score: 5 (3 votes) · LW · GW

AFAICT, algorithmic range isn't the same thing as model capacity: I think that tabular learners have low algorithmic range, as the terms are used in this post, but high model capacity.

Comment by danielfilan on Risks from Learned Optimization: Introduction · 2019-05-31T02:46:15.118Z · score: 17 (9 votes) · LW · GW

Another example of trained optimisers that is imo worth checking out is Value Iteration Networks.

Comment by danielfilan on Totalitarian ethical systems · 2019-05-14T17:59:21.723Z · score: 4 (2 votes) · LW · GW

I guess I'd first like to disagree with the implication that using a single metric implies collapsing everything into a single metric, without getting curious about details and causal chains. The latter seems bad, for the reasons that you've mentioned, but I think there are reasons to like the former. Those reasons:

  • Many comparisons have a large number of different features. Choosing a single metric that's a function of only some features can make the comparison simpler by stopping you from considering features that you consider irrelevant, and inducing you to focus on features that are important for your decision (e.g. "gardening looks strictly better than charter cities because it makes me more productive, and that's the important thing in my metric - can I check if that's actually true, or quantify that?").
  • Many comparisons have a large number of varying features. If you think that by default you have biases or, more generally, unendorsed subroutines that cause you to focus on features you shouldn't, it can be useful to think about them when constructing a metric, and then using the metric in a way that 'crowds out' relevant biases (e.g. you might tie yourself to using QALYs if you're worried that by default you'll tend to favour interventions that help people of your own ethnicity more than you would consciously endorse). See Hanson's recent discussion of simple rules vs the use of discretion.
  • By having your metric be a function of a comparatively small number of features, you give yourself the ability to search the space of things you could possibly do by how those things stack up against those features, focussing the options you consider on things that you're more likely to endorse (e.g. "hmm, if I wanted to maximise QALYs, what jobs would I want to take that I'm not currently considering?" or "hmm, if I wanted to maximise QALYs, what systems in the world would I be interested in affecting, and what instrumental goals would I want to pursue?"). I don't see how to do this without, if not a single metric, then a small number of metrics.
  • Metrics can crystallise tradeoffs. If I'm regularly thinking about different interventions that affect the lives of different farmed animals, then after making several decisions, it's probably computationally easier for me to come up with a rule for how I tend to trade off cow effects vs sheep effects, and/or freedom effects vs pain reduction effects, then to make that tradeoff every time independently.
  • Metrics help with legibility. This is less important in the case of an individual choosing career options to take, but suppose that I want to be GiveWell, and recommend charities I think are high-value, or I want to let other people who I don't know very well invest in my career. In that case, it's useful to have a legible metric that explains what decisions I'm making, so that other people can predict my future actions better, and so that they can clearly see reasons for why they should support me.

Comment by danielfilan on Totalitarian ethical systems · 2019-05-10T02:03:35.250Z · score: 2 (1 votes) · LW · GW

Profit is a helpful unifying decision metric, but it's not actually good to literally just maximize profits, this leads in the long run to destructive rent-seeking, regulatory capture, and trying to maximize negative externalities.

Agreed. That being said, it does seem like the frame in which it's important to evaluate global states of the business using the simple metric of profit is also right: like, maybe you also need strategic vision and ethics, but if you're not assessing expected future profits, it certainly seems to me that you're going to miss some things and go off the rails. [NB: I am more tied to the personal impact example than the business example, so I'd like to focus discussion in that thread, if it continues].

Comment by danielfilan on Tales From the American Medical System · 2019-05-10T01:54:16.962Z · score: 18 (8 votes) · LW · GW

Your friend has a deadly disease that requires regular doctor visits and prescriptions.

I think that this is a sketchy way to phrase this. Presumably, what a disease requires is a cure (or one of several cures). 'Doctor visits' and 'prescriptions' are one system society can have to assign cures to people, but there could also be other systems, like 'you get to walk to a store and buy insulin if you want some without needing anybody's seal of approval, and you can also see somebody to advise you on how much insulin to take'. Saying that the disease requires regular doctor visits and prescriptions seems to me to rhetorically imply that the costs associated with those are due to the disease, not due to the health care system, without doing the work of checking how well the system works (after all, if the system were about as good as we could manage, the costs really would be due to the disease).

Comment by danielfilan on Totalitarian ethical systems · 2019-05-08T03:38:52.377Z · score: 14 (4 votes) · LW · GW

Re: the section on coming up with simple metrics to evaluate global states, which I couldn't quickly figure out how to nicely excerpt:

I tentatively disagree with the claim that "Only if you go all the way to the extreme of total central planning do you really need a single totalizing metric", at least the way I think 'totalizing' is being applied. As a human in the world, I can see a few cool things I could potentially do: I could continue my PhD and try to do some important research in AI alignment, I could try to get involved with projects to build charter cities, I could try to advocate for my city to adopt policies that I think are good for local flourishing, I could try to give people info that makes it easier for them to eat a vegan diet, or I could make a nice garden. Since I can't do all of these, I need some way to pick between them. One important consideration is how productive I would be at each activity (as measured by the extent to which I can get the activity done), but I think that for many of these my productivity is about in the same ballpark. To compare between these different activities, it seems like it's really useful to have a single metric on the future history of the world that can trade off the different bits of the world that these activities affect. Similarly, if I'm running a business, it's hard to understand how I could make do without the single metric of profit to guide my decisions.

Comment by danielfilan on Coordination Surveys: why we should survey to organize responsibilities, not just predictions · 2019-05-08T00:09:56.652Z · score: 12 (7 votes) · LW · GW

Causing people to change their behaviour to your favourite behaviour by means other than adding safe input to people's rational deliberation processes seems questionable. Causing people to learn more about the world and give them the opportunity to change their behaviour if they feel it's warranted by the state of the world seems good. This post seems like it's proposing the latter to me - if you disagree, could you point out why?

Comment by danielfilan on DanielFilan's Shortform Feed · 2019-05-02T19:58:26.356Z · score: 13 (5 votes) · LW · GW

I often see (and sometimes take part in) discussion of Facebook here. I'm not sure whether when I partake in these discussions I should disclaim that my income is largely due to Good Ventures, whose money largely comes from Facebook investments. Nobody else does this, so shrug.

Comment by danielfilan on Habryka's Shortform Feed · 2019-04-30T21:40:06.551Z · score: 2 (1 votes) · LW · GW

In my case, it sure feels like I check my karma often because I often want to know what my karma is, but maybe others differ.

Comment by danielfilan on Habryka's Shortform Feed · 2019-04-30T18:36:08.424Z · score: 5 (3 votes) · LW · GW

I mean, you can definitely check your karma multiple times a day to see where the last two sig digits are at, which is something I sometimes do.

Comment by danielfilan on DanielFilan's Shortform Feed · 2019-04-30T02:35:28.567Z · score: 5 (3 votes) · LW · GW

Often big things are made of smaller things: e.g., the economy is made of humans and machines interacting, and neural networks are made of linear functions and ReLUs composed together. Say that a property P survives composition if knowing that P holds for all the smaller things tells you that P holds for the bigger thing. It's nice if properties survive composition, because it's easier to figure out if they hold for small things than to directly tackle the problem of whether they hold for a big thing. Boundedness doesn't survive composition: people and machines are bounded, but the economy isn't. Interpretability doesn't survive composition: linear functions and ReLUs are interpretable, but neural networks aren't.
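
In symbols, with $\circ$ for composition, the definition above reads:

```latex
P \text{ survives composition} \;\iff\; \forall x, y:\; P(x) \wedge P(y) \Rightarrow P(x \circ y)
```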

Comment by danielfilan on DanielFilan's Shortform Feed · 2019-04-30T00:23:22.069Z · score: 19 (4 votes) · LW · GW

Shower thought[*]: the notion of a task being bounded doesn't survive composition. Specifically, say a task is bounded if the agent doing it is only using bounded resources and only optimising a small bit of the world to a limited extent. The task of 'be a human in the enterprise of doing research' is bounded, but the enterprise of research in general is not bounded. Similarly, being a human with a job vs the entire human economy. I imagine keeping this in mind would be useful when thinking about CAIS.

Similarly, the notion of a function being interpretable doesn't survive composition. Linear functions are interpretable (citation: the field of linear algebra), as is the ReLU function, but the consensus is that neural networks are not, or at least not in the same way.

I basically wish that the concepts that I used survived composition.

[*] Actually I had this on a stroll.

Comment by danielfilan on When is rationality useful? · 2019-04-27T01:26:53.281Z · score: 11 (3 votes) · LW · GW

All else equal, do you think a rationalist mathematician will become more successful in their field than a non-rationalist mathematician?

This post by Jacob Steinhardt seems relevant: it's a sequence of models of research, and describes what good research strategies look like in them. He says, of the final model:

Before implementing this approach, I made little research progress for over a year; afterwards, I completed one project every four months on average. Other changes also contributed, but I expect the ideas here to at least double your productivity if you aren't already employing a similar process.

Comment by danielfilan on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-26T00:29:15.509Z · score: 10 (2 votes) · LW · GW

FWIW, we spend loads of time on belief-communication.

To clarify, I didn't think otherwise (and also, right now, I'm not confident that you thought I did think otherwise).

We still converge on a course of action.

Sure - I now think that my comment overrated how much convergence was necessary for decision-making.

Comment by danielfilan on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-26T00:27:00.228Z · score: 2 (1 votes) · LW · GW

I get the sense that you don't understand me here.

In a system of mutual understanding, I have a model of your model, and you have a model of my model, but nevertheless any prediction about the world is a result of one of our two models (which might have converged, or at the very least include parts of one another).

We can choose to live in a world where the model in my head is the same as the model in your head, and where this is common knowledge. In this world, you could think about a prediction being made by either the model in my head or the model in your head, but it makes more sense to think about it as being made by our model, the one that results from all the information we both have (just like the integer 3 in my head is the same number as the integer 3 in your head, not two numbers that happen to coincide). If I believed that this was possible, I wouldn't talk about how official group models are going to be impoverished 'common denominator' models, or conclude a paragraph with a sentence like "Organizations don’t have models, people do."

Comment by danielfilan on Speaking for myself (re: how the LW2.0 team communicates) · 2019-04-25T23:05:08.753Z · score: 2 (1 votes) · LW · GW

[E]ven if there are collective decisions, there are no collective models. Not real models.

When the team agrees to do something, it is only because enough of the individual team members individually have models which indicate it is the right thing to do.

There's something kind of worrying/sad about this. One would hope that with a small enough group, you'd be able to have discussion and Aumann-magic convergence lead to common models (and perhaps values?) being held by everybody. In this world, the process of making decisions is about gathering information from team members about the relevant considerations, and then a consensus emerges about what the right thing to do is, driven by consensus beliefs about the likely outcomes. When you can't do this, you end up in voting theory land, where even if each individual is rational, methods to aggregate group preferences about plans can lead to self-contradictory results.
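The voting-theory failure mode can be made concrete with the classic Condorcet cycle. A hypothetical sketch (my own numbers, not from the comment): three voters, each with perfectly transitive individual preferences, whose pairwise majority votes nonetheless cycle.

```python
# Three voters, each ranking options A, B, C transitively (best first).
ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "A", "B"]]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return votes > len(ballots) / 2

# Pairwise majorities cycle: A beats B, B beats C, yet C beats A,
# so "what the group prefers" is self-contradictory.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```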

I don't particularly have advice for you here - presumably you've already thought about the cost-benefit analysis of spending marginal time on belief communication - but the downside here felt worth pointing out.

Comment by danielfilan on The Principle of Predicted Improvement · 2019-04-25T18:36:01.140Z · score: 8 (5 votes) · LW · GW

Just to add an additional voice here, I would view that as incorrect in this context, instead referring to the thing that the CEE is saying. The way I'd try to clarify this would be to put the variables varying in the expectation in subscripts after the E, so the CEE equation would look like E_D[P(H=h_i|D)] = P(H=h_i), and the PPI inequality would be E_{H,D}[P(H|D)] ≥ E_H[P(H)].

Comment by danielfilan on The Principle of Predicted Improvement · 2019-04-25T05:28:36.913Z · score: 8 (5 votes) · LW · GW

I should note that when I first saw the PPI inequality, I also didn't get what it was saying, just because I had very low prior probability mass on it saying the thing it actually says. (I can't quite pin down what generalisation or principle led to this situation, but there you go.)

Comment by danielfilan on The Principle of Predicted Improvement · 2019-04-25T05:21:05.995Z · score: 9 (6 votes) · LW · GW

I have a very basic question about notation -- what tells me that H in the equation refers to the true hypothesis?

H stands for hypothesis. We're taking expectations over our distribution over hypotheses: that is, expectations over which hypothesis is true.

Put another way, I don't really understand why that equation has a different interpretation than the conservation-of-expected-evidence equation: E[P(H=hi|D)]=P(H=hi).

In the PPI inequality, the expectations are being taken over H and D jointly; in the CEE equation, the expectation is just being taken over D.

Comment by danielfilan on DanielFilan's Shortform Feed · 2019-04-25T05:18:51.938Z · score: 13 (3 votes) · LW · GW

One result that's related to Aumann's Agreement Theorem is that if you and I alternate saying our posterior probabilities of some event, we converge on the same probability if we have common priors. You might therefore wonder why we ever do anything else. The answer is that describing evidence is strictly more informative than stating one's posterior. For instance, imagine that we've both secretly flipped coins, and want to know whether both coins landed on the same side. If we just state our posteriors, we'll immediately converge to 50%, without actually learning the answer, which we could have learned pretty trivially by just saying how our coins landed. This is related to the original proof of the Aumann agreement theorem in a way that I can't describe shortly.
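The coin example above can be sketched in a few lines (an illustrative simulation of my own): announcing posteriors produces instant agreement at 50% while conveying nothing, whereas describing the evidence answers the question in one exchange.

```python
import random

# Each agent privately flips a coin; both want to know whether the coins match.
random.seed(0)
coin_a = random.choice(["H", "T"])
coin_b = random.choice(["H", "T"])

# Each agent's posterior that the coins match, given only their own flip:
# the other coin is uniform either way, so both posteriors are exactly 0.5.
posterior_a = 0.5
posterior_b = 0.5

# Announcing posteriors: both say 0.5, the posteriors already agree, and
# neither agent learns whether the coins actually match.
assert posterior_a == posterior_b == 0.5

# Describing the evidence instead settles the question immediately.
match = (coin_a == coin_b)
```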