Posts

Bets and updating 2019-10-07T23:06:18.778Z · score: 30 (12 votes)
Joy in Discovery: Galois theory 2019-09-02T19:16:46.542Z · score: 31 (10 votes)
Eigil Rischel's Shortform 2019-08-30T20:37:38.828Z · score: 1 (1 votes)

Comments

Comment by eigil-rischel on Category Theory Without The Baggage · 2020-02-03T23:01:08.503Z · score: 11 (7 votes) · LW · GW

I think, rather than "category theory is about paths in graphs", it would be more reasonable to say that category theory is about paths in graphs up to equivalence, and in particular about properties of paths which depend on their relations to other paths (more than on their relationship to the vertices)*. If your problem is most usefully conceptualized as a question about paths (finding the shortest path between two vertices, or counting paths, or something in that genre), you should definitely look to the graph theory literature instead.

* I realize this is totally incomprehensible, and doesn't make the case that there are any interesting problems like this. I'm not trying to argue that category theory is useful, just clarifying that your intuition that it's not useful for problems that look like these examples is right.
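
To gesture at what "paths up to equivalence" means concretely, here is a minimal Python sketch (the graph and the identification are invented for illustration). The graph-theoretic question counts paths; the category-theoretic question only sees them up to the chosen identifications:

```python
# A toy graph: three vertices and three named edges (source, target).
edges = {
    "f": ("A", "B"),
    "g": ("B", "C"),
    "h": ("A", "C"),
}

def compose(p, q):
    """Concatenate two paths (tuples of edge names) when their endpoints match."""
    assert edges[p[-1]][1] == edges[q[0]][0], "paths are not composable"
    return p + q

# The extra data a category carries: an equivalence relation on parallel paths.
# Here we (arbitrarily) declare that A -f-> B -g-> C is the same as A -h-> C.
identified = {(("f", "g"), ("h",))}

def equivalent(p, q):
    return p == q or (p, q) in identified or (q, p) in identified

paths_A_to_C = [compose(("f",), ("g",)), ("h",)]

# Graph-theory-flavored question: how many paths from A to C? Two.
print(len(paths_A_to_C))                             # 2

# Category-theory-flavored question: how many paths up to equivalence? One.
print(equivalent(paths_A_to_C[0], paths_A_to_C[1]))  # True
```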

Comment by eigil-rischel on Category Theory Without The Baggage · 2020-02-03T22:05:10.767Z · score: 19 (10 votes) · LW · GW

As an algebraic abstractologist, let me just say this is an absolutely great post. My comments:

Category theorists don't distinguish between a category with two objects and an edge between them, and a category with two objects and two identified edges between them (the latter doesn't really even make sense in the usual account). In general, the extra equivalence relation that you have to carry around makes certain things more complicated in this version.

I do tend to agree with you that thinking of categories as objects, edges, and an equivalence relation on paths is a more intuitive perspective, but let me defend the traditional presentation. By far the most essential/prototypical examples are the categories of sets and functions, or of types and functions. Here, it's more natural to speak of functions from x to y than to speak of "composable sequences of functions beginning at x and ending at y, up to the equivalence relation which identifies two sequences if they have the same composite".
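
To illustrate that equivalence relation with a trivial, made-up example: two different sequences of functions are identified exactly when they have the same composite.

```python
# Two different composable sequences of functions with the same composite.
def double(x): return 2 * x
def add_three(x): return x + 3
def double_then_add_three(x): return 2 * x + 3

seq1 = [double, add_three]          # x -> 2x -> 2x + 3
seq2 = [double_then_add_three]      # x -> 2x + 3

def composite(seq):
    """Compose a sequence of functions left-to-right."""
    def composed(x):
        for f in seq:
            x = f(x)
        return x
    return composed

# As lists of functions the sequences differ, but the category only remembers
# the composite, and on every input the two composites agree.
print(all(composite(seq1)(x) == composite(seq2)(x) for x in range(100)))  # True
```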

Again, I absolutely love this post. I am frankly a bit shocked that nobody seems to have written an introduction using this language - I think everyone is too enamored with sets as an example.

Comment by eigil-rischel on Underappreciated points about utility functions (of both sorts) · 2020-01-04T23:25:03.705Z · score: 1 (1 votes) · LW · GW

This is a reasonable way to resolve the paradox, but note that you're required to fix the max number of people ahead of time - and it can't change as you receive evidence (it must be a maximum across all possible worlds, and evidence just restricts the set of possible worlds). This essentially resolves Pascal's mugging by fixing some large number X and assigning probability 0 to claims about more than X people.
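
A toy sketch of that resolution, with entirely made-up numbers: once claims about more than X people get probability 0, the mugger's astronomical claims contribute nothing to the expected value.

```python
X = 10**12   # the fixed maximum number of people across all possible worlds (made up)

def credence(claimed_people):
    """Toy prior: claims involving more than X people get probability exactly 0."""
    return 0.0 if claimed_people > X else 1e-9   # 1e-9: made-up prior that the claim is honest

def expected_people_helped(claimed_people):
    return credence(claimed_people) * claimed_people

print(expected_people_helped(10**9))    # 1.0 -- modest claims still move the calculation
print(expected_people_helped(10**100))  # 0.0 -- "3^^^3"-sized claims are simply ignored
```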

Comment by eigil-rischel on Underappreciated points about utility functions (of both sorts) · 2020-01-04T11:20:23.539Z · score: 5 (3 votes) · LW · GW

Just to sketch out the contradiction between unbounded utilities and gambles involving infinitely many outcomes a bit more explicitly.

If your utility function is unbounded, we can consider the following wager: you win 2 utils with probability 1/2, 4 utils with probability 1/4, and so on. The expected utility of this wager is infinite. (If there are no outcomes with utility exactly 2, 4, etc., we can award more - this is possible because utility is unbounded.)

Now consider these wagers on a (fair) coinflip:

  • A: Play the above game if heads, pay out 0 utils if tails
  • B: Play the above game if heads, pay out 100000 utils if tails

(0 and 100000 can be any two non-equal numbers).

Both of these wagers have infinite expected utility, so we must be indifferent between them. But since they agree on heads, and B is strictly preferred to A on tails, we must prefer B (since tails occurs with positive probability).
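
A quick numerical illustration (the truncation levels are arbitrary): each level of the game contributes one util in expectation, so truncated versions of both wagers grow without bound, and "compare expected utilities" never registers that B is strictly better on tails.

```python
def truncated_game_eu(levels):
    """Expected utility of the game (2^k utils with probability 2^-k), truncated at `levels` outcomes."""
    return sum((0.5 ** k) * (2 ** k) for k in range(1, levels + 1))

def wager_eu(tails_payout, levels):
    """Fair coin: heads -> play the truncated game, tails -> receive a fixed payout."""
    return 0.5 * truncated_game_eu(levels) + 0.5 * tails_payout

for levels in (10, 100, 1000):
    a = wager_eu(0, levels)        # wager A
    b = wager_eu(100000, levels)   # wager B
    print(levels, a, b, b - a)     # the gap stays at 50000, but both diverge
```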

Comment by eigil-rischel on We run the Center for Applied Rationality, AMA · 2019-12-20T20:59:41.955Z · score: 6 (5 votes) · LW · GW

Information about people behaving erratically/violently is better at grabbing your brain's "important" sensor? (Noting that I had exactly the same instinctual reaction). This seems to be roughly what you'd expect from naive evopsych (which doesn't mean it's a good explanation, of course)

Comment by eigil-rischel on We run the Center for Applied Rationality, AMA · 2019-12-19T23:04:40.477Z · score: 20 (10 votes) · LW · GW

CFAR must have a lot of information about the efficacy of various rationality techniques and training methods (compared to any other org, at least). Is this information, or recommendations based on it, available somewhere? Say, as a list of techniques currently taught at CFAR - which are presumably the best ones in this sense. Or does one have to attend a workshop to find out?

Comment by eigil-rischel on Examples of Causal Abstraction · 2019-12-16T10:51:13.026Z · score: 3 (2 votes) · LW · GW

There's some recent work in the statistics literature exploring similar ideas. I don't know if you're aware of this, or if it's really relevant to what you're doing (I haven't thought a lot about the comparisons yet), but here are some papers.

Comment by eigil-rischel on Eigil Rischel's Shortform · 2019-11-10T12:33:54.168Z · score: 1 (1 votes) · LW · GW

A thought about productivity systems/workflow optimization:

One principle of good design is "make the thing you want people to do, the easy thing to do". However, this idea is susceptible to the following form of Goodhart: often a lot of the value in some desirable action comes from the things that make it difficult.

For instance, sometimes I decide to migrate some notes from one note-taking system to another. This is usually extremely useful, because it forces me to review the notes and think about how they relate to each other and to the new system. If I make this easier for myself by writing a script to do the work (as I have sometimes done), this important value is lost.

Or think about spaced repetition cards: You can save a ton of time by reusing cards made by other people covering the same material - but the mental work of breaking the material down into chunks that can go into the spaced-repetition system, which is usually very important, is lost.

Comment by eigil-rischel on The best of the www, in my opinion · 2019-10-17T18:38:26.734Z · score: 2 (2 votes) · LW · GW

This is a great list.

The main criticism I have is that this list overlaps way too much with my own internal list of high-quality sites, making it not very useful.

Comment by eigil-rischel on Examples of Categories · 2019-10-10T10:55:37.363Z · score: 11 (4 votes) · LW · GW

The example of associativity seems a little strange - I'm not sure what's going on there. What are the three functions that are being composed?

Comment by eigil-rischel on Computational Model: Causal Diagrams with Symmetry · 2019-10-08T08:19:00.022Z · score: 1 (1 votes) · LW · GW

Should there be an arrow going from n*f(n-1) to f (around n==0)? The output of the system also depends on n*f(n-1), not just on whether or not n is zero.
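
For reference, the computation I'm picturing (a minimal sketch; I'm assuming the diagram is for the usual recursive factorial, with base value 1):

```python
def f(n):
    # The output node depends on two upstream quantities:
    #   1. the result of the n == 0 check (which branch is taken), and
    #   2. in the recursive branch, the value of n * f(n - 1).
    # Hence the question above: shouldn't the diagram have an arrow into f's
    # output from the n * f(n - 1) node as well, not only from the n == 0 node?
    if n == 0:
        return 1
    return n * f(n - 1)

print(f(5))  # 120
```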

Comment by eigil-rischel on Occam's Razor May Be Sufficient to Infer the Preferences of Irrational Agents: A reply to Armstrong & Mindermann · 2019-10-07T22:52:46.855Z · score: 2 (2 votes) · LW · GW

A simple remark: we don't have access to all of the human's behaviour, only the part up until the current time. So we have to make sure that we don't get a degenerate pair which diverges wildly from the actual universe at some point in the future.

Maybe this is similar to the fact that we don't want AIs to diverge from human values once we go off-distribution? But you're definitely right that there's a difference: we do want AIs to diverge from human behaviour (even in common situations).

Comment by eigil-rischel on Two Dark Side Statistics Papers · 2019-10-02T18:14:04.667Z · score: 3 (3 votes) · LW · GW

I'm curious about the remaining 3% of people in the 97% program, who apparently both managed to smuggle some booze into rehab, and then admitted this to the staff while they were checking out. Lizardman's constant?

Comment by eigil-rischel on Eigil Rischel's Shortform · 2019-10-02T17:52:38.999Z · score: 10 (4 votes) · LW · GW

I've noticed a sort of tradeoff in how I use planning/todo systems (having experimented with several such systems recently). This mainly applies to planning things with no immediate deadline, where it's more about how to split a large amount of available time between a large number of tasks, rather than about remembering which things to do when. For instance, think of a personal reading list - there is no hurry to read any particular things on it, but you do want to be spending your reading time effectively.

On one extreme, I make a commitment to myself to do all the things on the list eventually. At first, this has the desired effect of making me get things done. But eventually, things that I don't want to do start to accumulate. I procrastinate on these things by working on more attractive items on the list. This makes the list much less useful from a planning perspective, since it's cluttered with a bunch of old things I no longer want to spend time on (which make me feel bad about not doing them whenever I'm looking at the list).

On the other extreme, I make no commitment like that, and remove things from the list whenever I feel like it. This avoids the problem of accumulating things I don't want to do, but makes the list completely useless as a tool for getting me to do boring tasks.

I have a hard time balancing these issues. I'm currently trying an approach to my academic reading list where I keep a mostly unsorted list, and whenever I look at it to find something to read, I have to work on the top item, or remove it from the list. This is hardly ideal, but it mitigates the "stale items" problem, and still manages to provide some motivation, since it feels bad to take items off the list.

Comment by eigil-rischel on What are your recommendations on books to listen to when doing, e.g., chores? · 2019-09-28T11:35:27.413Z · score: 1 (1 votes) · LW · GW

I found Predictably Irrational, Superforecasting, and Influence to be good.

Comment by eigil-rischel on Don't clean your glasses · 2019-09-24T08:55:39.056Z · score: 1 (1 votes) · LW · GW

I've managed to implement this for computer monitors, but not for glasses. But my glasses seem to get smudged frequently enough that I need to wipe them about every day anyways. I guess I fidget with them much more than you?

Comment by eigil-rischel on The Zettelkasten Method · 2019-09-23T19:17:14.862Z · score: 1 (1 votes) · LW · GW

If "such techniques usually give a boost for some time before dropping back towards baseline", the obvious way to use this information would seem to be starting a new note-taking system every so often. That way you can keep on taking advantage of the boost, at least as long as you can keep finding new systems (which may eventually become a problem, but even so doesn't leave you worse off than before). Of course, this does suggest a bound on how many resources you should invest in these new systems.

Comment by eigil-rischel on Non-anthropically, what makes us think human-level intelligence is possible? · 2019-09-16T09:27:19.081Z · score: 2 (2 votes) · LW · GW

This still leaves the question of why the chemical reactions on other planets haven't begun colonizing the galaxy, since it seems likely that the chemical reactions on Earth will (eventually) do so.

Comment by eigil-rischel on Proving Too Much (w/ exercises) · 2019-09-15T12:00:01.095Z · score: 2 (2 votes) · LW · GW

"If a tree falls in the woods, but no one is around to hear it, does it make a sound?" doesn't sound like an argument, but a question. "Yes, because the presence of a person with ears doesn't affect the physical behavior of the air" or "No, because air waves shouldn't be considered sound until they interact with a mind" are arguments.

Or do you mean "argument" in the sense of a debate or discussion (as in "we're having an argument about X")?

Comment by eigil-rischel on Humans can be assigned any values whatsoever… · 2019-09-11T21:44:01.056Z · score: 1 (1 votes) · LW · GW

Could one approach to detecting biases be to look for "dominated strategies"? For instance, suppose the human model is observed making various trades, exchanging sets of tokens for other sets of tokens, and the objective of the machine is to infer "intrinsic values" for each type of token.

(Maybe conditional on certain factors, e.g. "an A is valuable, but only if you have a B", or "a C is only valuable on Tuesday".)

Then if the human trades an A and an E for a B, a B for a C, and a C for an A, but then trades an A for ten Es, we can infer that the human has some form of bias, maybe neglecting tokens with small value (not realizing that the value of an E matters until you have ten of them), or maybe an "eagerness" to make trades.

This clearly relies on some strong assumptions (for instance, that tokens are only valuable in themselves - that executing a trade has no inherent value).
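
A minimal sketch of the kind of check I mean, using the trades above (the brute-force grid search over candidate valuations is just for illustration; a real implementation would presumably do an LP feasibility check):

```python
from itertools import product

# Observed trades: (tokens given up, tokens received).
trades = [
    (("A", "E"), ("B",)),        # an A and an E for a B
    (("B",),     ("C",)),        # a B for a C
    (("C",),     ("A",)),        # a C for an A
    (("A",),     ("E",) * 10),   # an A for ten Es
]

def rationalizes(values, trades):
    """Does this assignment of per-token values make every trade weakly value-increasing?"""
    return all(
        sum(values[t] for t in received) >= sum(values[t] for t in given)
        for given, received in trades
    )

grid = [0.5 * k for k in range(21)]   # candidate non-negative values 0.0, 0.5, ..., 10.0
found = any(
    rationalizes(dict(zip("ABCE", vals)), trades)
    for vals in product(grid, repeat=4)
    if vals[3] > 0                    # insist that an E has strictly positive value
)
print(found)  # False: no valuation with a positive value for E explains these trades
```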

Comment by eigil-rischel on Mistakes with Conservation of Expected Evidence · 2019-09-09T22:19:35.244Z · score: 1 (1 votes) · LW · GW

This is great. A point which helped me understand number 6: if you ask someone "why do you believe X", you're presumably going to update your probability of X upwards if they give a reason, so you should update downwards if they don't give a reason. But you probably already updated upwards as soon as they said "I believe X", and there is no theorem which says the downward update has to be larger than that earlier upward one. So you can still end up with a probability of X that is higher than (or equal to) where you were at the beginning of the conversation.
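
A toy numerical version of this, with invented numbers:

```python
prior = 0.5                # P(X) before the conversation (made up)
p_after_belief = 0.8       # P(X | they say "I believe X") -- the first, upward update

p_reason = 0.7             # P(they can give a reason | they say they believe X)
p_x_given_reason = 0.85    # P(X | belief stated and a reason given)

# Conservation of expected evidence:
#   p_after_belief = p_reason * p_x_given_reason + (1 - p_reason) * p_x_given_no_reason
p_x_given_no_reason = (p_after_belief - p_reason * p_x_given_reason) / (1 - p_reason)

print(round(p_x_given_no_reason, 3))   # 0.683 -- a real downward update from 0.8 ...
print(p_x_given_no_reason > prior)     # True  -- ... yet still above the 0.5 you started from
```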

Comment by eigil-rischel on Do you have algorithms for passing time productively with only your own mind? · 2019-09-08T19:40:23.321Z · score: 2 (2 votes) · LW · GW

I tend to favor your own approach - think about whatever I'm working on. The solution to not having enough questions is to always keep a question around which is A: hard enough that you're unlikely to solve it during a brief wait, and B: in a state where you can work on it without something to write on. Combining these two is not always easy, so you sometimes need to plan ahead.

Departing a bit from the question as stated (adding a phone and headphones), I've also found that listening to audiobooks is a good way to use e.g. a commute.

Comment by eigil-rischel on Joy in Discovery: Galois theory · 2019-09-03T14:42:32.938Z · score: 2 (2 votes) · LW · GW

I added some clarification, but you are right.

(Since, e.g., x^5 - 1 has the root 1, it's clearly not true that all fifth-degree polynomials have this property.)

Comment by eigil-rischel on Say Wrong Things · 2019-09-02T08:11:01.097Z · score: 12 (3 votes) · LW · GW

"If you've never missed a flight, you spend too much time hanging around in airports" ~ "If you've never been publicly proven wrong, you don't state your beliefs enough" ?

Comment by eigil-rischel on Decision Theory · 2019-08-31T20:36:49.883Z · score: 2 (2 votes) · LW · GW

(There was a LaTeX error in my comment, which made it totally illegible. But I think you managed to resolve my confusion anyway).

I see! It's not provable that Provable(P) implies P. It seems like it should be provable, but the obvious argument relies on the assumption that, if P is provable, then ¬P is not also provable - in other words, that the proof system is consistent! Which may be true, but is not provable.

The asymmetry between 5 and 10 is that, to choose 5, we only need a proof that 5 is optimal, but to choose 10, we need to not find a proof that 5 is optimal. Which seems easier than finding a proof that 10 is optimal, but is not provably easier.
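
For reference, the version of Löb's theorem being used here (□ meaning "provable in the system"), which is exactly why the soundness statement □P → P isn't available for unprovable P:

```latex
% Löb's theorem: for any sentence $P$,
%   if the system proves $\Box P \to P$, then it already proves $P$.
\vdash (\Box P \to P) \;\Longrightarrow\; \vdash P
% Contrapositive: if $P$ is not provable, then neither is $\Box P \to P$.
```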

Comment by eigil-rischel on Decision Theory · 2019-08-31T19:34:40.270Z · score: 1 (1 votes) · LW · GW

I think I don't understand the Löb's theorem example.

If "A()=5 → U()=5 and A()=10 → U()=0" is provable, then the agent takes the 5, so it is true (because the statement about A()=10 is vacuously true). Hence by Löb's theorem, it's provable, so we get A()=5.

If "A()=10 → U()=10 and A()=5 → U()=0" is provable, then it's true, for the dual reason. So by Löb, it's provable, so A()=10.

The broader point about being unable to reason yourself out of a bad decision if your prior for your own decisions doesn't contain a "grain of truth" makes sense, but it's not clear we can show that the agent in this example will definitely get stuck on the bad decision - if anything, the above argument seems to show that the system has to be inconsistent! If that's true, I would guess that the source of this inconsistency is assuming the agent has sufficient reflective capacity to prove "If I can prove P, then P". Which would suggest learning the lesson that it's hard for agents to reason about their own behaviour with logical consistency.

Comment by eigil-rischel on Noticing the Taste of Lotus · 2019-08-31T18:25:56.519Z · score: 1 (1 votes) · LW · GW

I think I managed to avoid the Inbox Zero thing by not reading my emails, if the little bit of text that Gmail displays is enough for me to be confident that I don't need to read or respond to the mail. This means that I have a huge, constantly growing number of unread mails in my inbox, so the idea of getting it down to zero isn't really attractive.

I still check my email unnecessarily often, but I don't feel a compulsion to read any new mails immediately.

Comment by eigil-rischel on Eigil Rischel's Shortform · 2019-08-30T20:37:38.842Z · score: 3 (2 votes) · LW · GW

Belief: There is no amount of computing power which would make AlphaGo Zero(AGZ) turn the world into computronium in order to make the best possible Go moves (even if we assume there is some strategy which would let the system achieve this, like manipulating humans with cleverly chosen Go moves).

My reasoning is that AGZ is trained by recursively approximating a Monte Carlo Tree Search guided by its current model (very rough explanation which is probably missing something important). And it seems the "attractor" in this system is "perfect Go play", not "whatever Go play leads to better Go play in the future". There is no way for a system like this to learn that humans exist, or that it's running on a computer of a certain type, or even to conceptualize that certain moves may alter certain parameters of the system, because these things aren't captured in the MCTS, only the rules of Go.
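
A heavily simplified, runnable caricature of the kind of training loop I mean - none of this is the real AGZ code, and the stand-in "game" and the rollout-based stand-in for MCTS are invented - just to make the structural point: every object the update loop can mention is a state or move defined by the game's rules, so there is nothing in the hypothesis space that refers to the outside world.

```python
import random
from collections import defaultdict

# Stand-in "game" (not Go): states are integers, moves add 1 or 2, the game ends
# at >= 10, and the player wins by landing exactly on 10.
def legal_moves(state): return [1, 2]
def play(state, move):  return state + move
def game_over(state):   return state >= 10
def result(state):      return 1.0 if state == 10 else 0.0

def search(state, value_table, rollouts=50):
    """Toy stand-in for MCTS: score each move by random rollouts plus the current model."""
    scores = {}
    for move in legal_moves(state):
        total = value_table[(state, move)]
        for _ in range(rollouts):
            s = play(state, move)
            while not game_over(s):
                s = play(s, random.choice(legal_moves(s)))
            total += result(s)
        scores[move] = total
    return scores

def train(iterations=20):
    value_table = defaultdict(float)   # the "network": just a table of (state, move) scores
    for _ in range(iterations):
        state = 0
        while not game_over(state):
            scores = search(state, value_table)
            best = max(scores, key=scores.get)
            # Distill the search result back into the model. Note that every name in
            # this loop refers to a game state or a move defined by the rules above;
            # nothing here can represent "the computer I run on" or "the operator".
            value_table[(state, best)] += 1.0
            state = play(state, best)
    return dict(value_table)

print(train())
```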

This isn't an argument against dangerous AGI in general - I'm trying to clarify my thinking about the whole "Tool AI vs Agent AI" thing before I read Reframing Superintelligence.

Am I right? And is this a sound argument?