DanielFilan's Shortform Feed

post by DanielFilan · 2019-03-25T23:32:38.314Z · score: 19 (5 votes) · 14 comments

Rationality-related writings that are more comment-shaped than post-shaped. Please don't leave top-level comments here unless they're indistinguishable to me from something I would say here.

14 comments

Comments sorted by top scores.

comment by DanielFilan · 2019-10-11T23:48:53.653Z · score: 41 (14 votes)

Hot take: if you think that we'll have at least 30 more years of future where geopolitics and nations are relevant, I think you should pay at least 50% as much attention to India as to China. Similarly large population, similarly large number of great thinkers and researchers. Currently seems less 'interesting', but that sort of thing changes over 30-year timescales. As such, I think there should probably be some number of 'India specialists' in EA policy positions that isn't dwarfed by the number of 'China specialists'.

comment by G Gordon Worley III (gworley) · 2019-10-14T17:41:22.455Z · score: 15 (5 votes)

For comparison, in a universe where EA existed 30 years ago we would have thought it very important to have many Russia specialists.

comment by Adam Scholl (adam_scholl) · 2019-10-18T06:58:58.876Z · score: 5 (3 votes)

I've been wondering recently whether CFAR should try having some workshops in India for this reason. Far more people speak English than in China, and I expect we'd encounter fewer political impediments.

comment by DanielFilan · 2019-07-04T22:44:38.428Z · score: 35 (9 votes)

The Indian grammarian Pāṇini wanted to exactly specify what Sanskrit grammar was in the shortest possible length. As a result, he did some crazy stuff:

Pāṇini's theory of morphological analysis was more advanced than any equivalent Western theory before the 20th century. His treatise is generative and descriptive, uses metalanguage and meta-rules, and has been compared to the Turing machine wherein the logical structure of any computing device has been reduced to its essentials using an idealized mathematical model.

There are two surprising facts about this:

  1. His grammar was written in the 4th century BC.
  2. People then failed to build on this machinery to do things like formalise the foundations of mathematics, formalise a bunch of linguistics, or even do the same thing for languages other than Sanskrit, in a way that is preserved in the historical record.

I've been obsessing about this for the last few days.

comment by DanielFilan · 2019-04-30T00:23:22.069Z · score: 20 (5 votes)

Shower thought[*]: the notion of a task being bounded doesn't survive composition. Specifically, say a task is bounded if the agent doing it is only using bounded resources and only optimising a small bit of the world to a limited extent. The task of 'be a human in the enterprise of doing research' is bounded, but the enterprise of research in general is not bounded. Similarly, being a human with a job vs the entire human economy. I imagine keeping this in mind would be useful when thinking about CAIS.

Similarly, the notion of a function being interpretable doesn't survive composition. Linear functions are interpretable (citation: the field of linear algebra), as is the ReLU function, but the consensus is that neural networks are not, or at least not in the same way.

I basically wish that the concepts that I used survived composition.

[*] Actually I had this on a stroll.

comment by Raemon · 2019-04-30T02:26:33.585Z · score: 4 (2 votes)

Fwiw, this seems like an interesting thought but I'm not sure I understand it, and I'm curious whether you could say it in different words. (But also, if the prospect of being asked to do that for your shortform comments feels ughy, no worries.)

comment by DanielFilan · 2019-04-30T02:35:28.567Z · score: 5 (3 votes)

Often big things are made of smaller things: e.g., the economy is made of humans and machines interacting, and neural networks are made of linear functions and ReLUs composed together. Say that a property P survives composition if knowing that P holds for all the smaller things tells you that P holds for the bigger thing. It's nice if properties survive composition, because it's easier to figure out if they hold for small things than to directly tackle the problem of whether they hold for a big thing. Boundedness doesn't survive composition: people and machines are bounded, but the economy isn't. Interpretability doesn't survive composition: linear functions and ReLUs are interpretable, but neural networks aren't.
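
To make the "survives composition" idea concrete, here is a minimal Python sketch. It uses a stand-in property of my own choosing ("stretches distances by at most a factor of 2", a crude resource-style bound), since boundedness and interpretability themselves aren't checkable in a few lines; the function and variable names are just illustrative.

```python
# A toy check of the "survives composition" idea, with a stand-in property:
# P(f) = "f never stretches distances by more than a factor of 2"
# (a crude resource-style bound). Each piece satisfies P; their composite does not.

def stretch_factor(f, xs):
    """Largest observed ratio |f(x) - f(y)| / |x - y| over the sample points."""
    return max(abs(f(x) - f(y)) / abs(x - y) for x in xs for y in xs if x != y)

def satisfies_p(f, xs, bound=2.0):
    return stretch_factor(f, xs) <= bound

def double(x):
    return 2.0 * x

def composed(x):
    return double(double(x))  # the "big thing" built out of two "small things"

xs = [i / 10 for i in range(-20, 21)]
print(satisfies_p(double, xs))    # True:  each piece is within the bound
print(satisfies_p(composed, xs))  # False: the composite stretches by a factor of 4
```

The failure has the same shape as the one pointed at above for boundedness and interpretability: checking every part gives no guarantee about the whole.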

comment by DanielFilan · 2019-09-26T18:25:14.942Z · score: 15 (4 votes)

I get to nuke LW today AMA.

comment by DanielFilan · 2019-05-02T19:58:26.356Z · score: 13 (5 votes)

I often see (and sometimes take part in) discussion of Facebook here. I'm not sure whether, when I partake in these discussions, I should disclose that my income is largely due to Good Ventures, whose money largely comes from Facebook investments. Nobody else does this, so shrug.

comment by Raemon · 2019-05-02T21:49:43.324Z · score: 5 (2 votes)

Huh. Indeed seems good to at least have talked about talking about it.

comment by DanielFilan · 2019-04-25T05:18:51.938Z · score: 13 (3 votes)

One result that's related to Aumann's Agreement Theorem is that if you and I alternate saying our posterior probabilities of some event, we converge on the same probability if we have common priors. You might therefore wonder why we ever do anything else. The answer is that describing evidence is strictly more informative than stating one's posterior. For instance, imagine that we've both secretly flipped coins, and want to know whether both coins landed on the same side. Since each of us knows only our own flip, which says nothing about whether the two flips match, both our posteriors are 50% from the start. So if we just state our posteriors, we'll immediately converge to 50%, without actually learning the answer, which we could have learned pretty trivially by just saying how our coins landed. This is related to the original proof of the Aumann agreement theorem in a way that I can't describe shortly.
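
As a quick check on the coin example, here is a minimal simulation sketch (the trial count and variable names are just illustrative): my posterior that our coins match, conditioned only on my own flip, comes out around 50% whether I saw heads or tails, so announcing it can never move either of us, while announcing the flip itself settles the question at once.

```python
import random

def coin_example(trials=200_000):
    """Posterior exchange vs. evidence exchange for the event 'our coins match'."""
    match_given_h, match_given_t = [], []
    for _ in range(trials):
        my_coin = random.choice("HT")
        your_coin = random.choice("HT")
        match = my_coin == your_coin
        (match_given_h if my_coin == "H" else match_given_t).append(match)

    # My posterior for "match", given only my own flip, is ~0.5 either way,
    # so announcing it tells you nothing; announcing the flip itself settles it.
    print("P(match | I saw H) ~", sum(match_given_h) / len(match_given_h))
    print("P(match | I saw T) ~", sum(match_given_t) / len(match_given_t))

coin_example()
```

Both conditional frequencies come out near 0.5, so repeated posterior announcements stall at 50% forever, whereas a single exchange of the raw flips determines the answer.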

comment by DanielFilan · 2019-03-25T23:41:11.929Z · score: 7 (4 votes)

I made this post with the intent to write a comment, but the process of writing the comment out made it less persuasive to me. The planning fallacy?

comment by Raemon · 2019-03-25T23:42:42.216Z · score: 7 (4 votes)

If this is all that Shortform Feed posts ever do it still seems net positive. :P

[edit: conditional on, you know, you endorsing it being less persuasive]

comment by Raemon · 2019-03-26T00:04:30.435Z · score: 4 (3 votes)

Similarly, I sometimes start a shortform post and then realize "you know what, this is actually a long post". And I think that's also shortform doing an important job of lowering the barrier to getting started even if it doesn't directly get used.