Comment by cousin_it on Coercive Formats · 2019-06-10T06:02:29.385Z · score: 8 (2 votes) · LW · GW

How about sequential vs random-access?

Comment by cousin_it on Coercive Formats · 2019-06-09T20:59:23.262Z · score: 6 (3 votes) · LW · GW

Or linear vs open-world, as in video games.

Comment by cousin_it on Quotes from Moral Mazes · 2019-05-31T12:36:09.874Z · score: 3 (1 votes) · LW · GW

Well, he was talking about marketing vs choice of market, and my comment was riffing on that :-) The book uses the quote to make a point about individual credit, but I'm not sure it fits - even if success depends only on choice of market, individuals can still deserve credit for choosing a great market.

Comment by cousin_it on Quotes from Moral Mazes · 2019-05-31T10:17:24.519Z · score: 3 (1 votes) · LW · GW

One of the top executives in Weft Corporation echoes this sentiment: I always say that there is no such thing as a marketing genius; there are only great markets.

Some markets are mainly propped up by marketing though, like diamonds. But I agree that it's more virtuous to fulfill needs that already exist.

Comment by cousin_it on Infinity is an adjective like positive rather than an amount · 2019-05-30T18:06:54.140Z · score: 5 (2 votes) · LW · GW

Another possible metaphor is to think of infinities as second class citizens. For example, in our world dragons don't exist, but if they existed they wouldn't be able to ride the subway as easily as humans, because that would pose practical problems for both dragons and humans. Same for infinities - in the world of numbers they don't really exist, but if they existed, it's not clear how we could extend them equal rights of addition and so on. It's up to politicians/mathematicians to imagine a world where dragons/infinities can live on equal terms with humans/numbers, and maybe such a world just can't be imagined in a way that makes sense.

Comment by cousin_it on Say Wrong Things · 2019-05-28T12:52:03.179Z · score: 9 (4 votes) · LW · GW

I think both overconfidence and underconfidence are widespread, so it's hard to tell which advice would do more good. Maybe we can agree that people tend to over-share their conclusions and under-share their evidence? That seems plausible overall; advising people to shift toward sharing evidence might help address both underconfidence (because evidence feels safer to share) and overconfidence (because people will notice if the evidence doesn't warrant the conclusion); and it might help with other problems as well, like double-counting evidence due to many people stating the same conclusion.

Comment by cousin_it on Micro feedback loops and learning · 2019-05-28T12:18:30.390Z · score: 5 (2 votes) · LW · GW

It's in equal temperament, right? Have you thought about using just intonation?

I had always bought the story that the two are close enough for most musical purposes, but a few weeks ago I unfretted my guitar and started playing music in JI, and it's night and day. For example, in ET the major third from C to E sounds kind of restless, while in JI it's peaceful with all harmonics overlapping as they should (4:5 frequency ratio). Same for the minor third (5:6), hearing the JI interval makes me feel like "ah, so that's what the ET interval was trying to hint at". And then there's the harmonic seventh chord (4:5:6:7), which sounds very musical but can't be imitated on an ET guitar or piano at all.
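Here's a rough Python sketch (illustrative, no external libraries) of how far the ET intervals sit from the just ratios, measured in cents:

```python
import math

def cents(ratio):
    # 1200 cents per octave, so cents = 1200 * log2(ratio)
    return 1200 * math.log2(ratio)

# Just-intonation ratios vs the nearest equal-tempered interval (in semitones).
intervals = [
    ("major third", 5 / 4, 4),
    ("minor third", 6 / 5, 3),
    ("harmonic seventh", 7 / 4, 10),  # the "7" in the 4:5:6:7 chord
]

for name, ji_ratio, et_semitones in intervals:
    ji = cents(ji_ratio)
    et = et_semitones * 100  # each ET semitone is exactly 100 cents
    print(f"{name}: JI {ji:.1f} cents, ET {et} cents, off by {et - ji:+.1f}")
```

The thirds are off by roughly 14-16 cents, and the harmonic seventh misses the ET minor seventh by about 31 cents, which is why the 4:5:6:7 chord can't be imitated on an ET instrument.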

Comment by cousin_it on Von Neumann’s critique of automata theory and logic in computer science · 2019-05-28T11:56:18.370Z · score: 14 (7 votes) · LW · GW

I'm confused. It seems to me that if we already have a discrete/combinatorial mess on our hands, sprinkling some chance of failure on each little gear won't summon the analysis fairy and make the mess easier to reason about. Or at least we need to be clever about the way randomness is added, to make the mess settle into something analytical. But von Neumann's article sounds more optimistic about this approach. Does anyone understand why?

Comment by cousin_it on What is your personal experience with "having a meaningful life"? · 2019-05-24T11:20:58.636Z · score: 4 (2 votes) · LW · GW

I guess I simply noticed that when I spend a lot of time and attention on something, it becomes important to me.

For example, as a Russian person living abroad, I can choose every day to either read Russian domestic news or abstain. If I spend a few days reading news, they start feeling important to my life and kinda unpleasant. Then I stop and it fades away again.

So more generally, when something "meaningful" is making me miserable, I can spend time on something else, knowing that soon enough it will be the new "meaningful" thing. Sometimes it's hard, if I'm very locked in to the old thing, but I've been in that situation many times (miserable about something that feels super important in the moment) and it does get a bit easier with experience.

Comment by cousin_it on What is your personal experience with "having a meaningful life"? · 2019-05-23T09:59:08.680Z · score: 4 (2 votes) · LW · GW

Well, I can't help having some story about myself, and I do feel happier when it's a positive story. I'd be surprised if there are people who don't feel that way. But at the same time, I know from experience that the stories I tell myself are mostly made up, and whenever they become counterproductive, I can just rewrite them with no hard feelings. Does that make sense?

Comment by cousin_it on Open Thread April 2019 · 2019-05-13T20:02:13.389Z · score: 3 (1 votes) · LW · GW

Yeah, I think that works. Nice!

Comment by cousin_it on What are some "Communities and Cultures Different From Our Own?" · 2019-05-13T13:33:41.103Z · score: 12 (6 votes) · LW · GW

Check out "The Culture Map" by Erin Meyer. It's a book about how to work with people from different cultures - different expectations of bosses, different ways to give positive and negative feedback, different balance of work and personal relationships, etc.

Comment by cousin_it on Disincentives for participating on LW/AF · 2019-05-11T21:38:18.044Z · score: 6 (2 votes) · LW · GW

Thanks, great post! And good comments too. Not sure how I missed it at the time.

Comment by cousin_it on Disincentives for participating on LW/AF · 2019-05-11T11:44:33.678Z · score: 18 (6 votes) · LW · GW

There's a common view that a researcher's output should look like a bunch of results: "A, B, C therefore X, Y, Z". But we also feel, subconsciously and correctly, that such output has an air of finality and won't attract many comments. Look at musical subreddits for example - people posting their music get few comments, people asking for help get more. So when I post a finished result on LW and get few comments, the problem is on me. There must be a better way to write posts, less focused on answering questions and more on making questions as interesting as possible. But that's easier said than done - I don't think I have that skill.

Comment by cousin_it on Nash equilibriums can be arbitrarily bad · 2019-05-01T21:33:17.470Z · score: 19 (6 votes) · LW · GW

Yeah. Usually the centipede game is used to teach this lesson, but your game is also very nice :-)
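For anyone who hasn't seen it, here's a backward-induction sketch of a centipede game (payoffs are my own illustrative choice, but the structure is standard): the unique subgame-perfect outcome is to take the pot at the very first node, even though both players would do far better by passing to the end, and the gap grows without bound as you add nodes.

```python
# A linear centipede game: players 0 and 1 alternate moves. The mover can
# "take" (ending the game) or "pass" (the pot keeps growing). Taking at
# node t gives the mover 4 * 2**t and the other player 2**t, so taking now
# always beats letting the opponent take at the next node.
N = 6
final = (2 ** N, 2 ** N)  # payoffs if both players pass all the way

def solve(t):
    """Backward induction; returns (payoff_p0, payoff_p1, action at node t)."""
    if t == N:
        return (*final, None)
    mover = t % 2
    take = [0, 0]
    take[mover], take[1 - mover] = 4 * 2 ** t, 2 ** t
    cont = solve(t + 1)
    if take[mover] >= cont[mover]:
        return (take[0], take[1], "take")
    return (cont[0], cont[1], "pass")

p0, p1, action = solve(0)
print(f"subgame-perfect play: '{action}' at the first node, payoffs ({p0}, {p1})")
print(f"both players would prefer {final}, but it isn't an equilibrium")
```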

Comment by cousin_it on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T11:41:59.388Z · score: 4 (2 votes) · LW · GW

Can you give some examples of where human cooperation is mainly being stopped by difficulty with bargaining?

Two kids fighting over a toy; a married couple arguing about who should do the dishes; war.

But now I think I can answer my own question. War only happens if two agents don't have common knowledge about who would win (otherwise they'd agree to skip the costs of war). So if AIs are better than humans at establishing that kind of common knowledge, that makes bargaining failure less likely.

Comment by cousin_it on Strategic implications of AIs' ability to coordinate at low cost, for example by merging · 2019-04-26T09:25:04.921Z · score: 8 (4 votes) · LW · GW

A big obstacle to human cooperation is bargaining: deciding how to split the benefit from cooperation. If it didn't exist, I think humans would cooperate more. But the same obstacle also applies to AIs. Sure, there's a general argument that AIs will outperform humans at all tasks and bargaining too, but I'd like to understand it more specifically. Do you have any mechanisms in mind that would make bargaining easier for AIs?

Comment by cousin_it on Against Street Epistemology · 2019-04-25T20:21:32.788Z · score: 9 (5 votes) · LW · GW

I didn't know "street epistemology" was a thing, but over the years I've run into a few people who thought asking incisive questions was a good way to make conversation with me. I usually start ignoring them after the second or third question, because to me that's just not what conversation is about! If someone consistently fails to tell me anything new or surprising (even when directly prompted), what's the point of talking? Thankfully, LW has many people who, upon joining a conversation, openly state their beliefs and give interesting arguments for them instead of trying to snipe others. Scott is maybe the brightest example; I try to imitate him whenever I can.

Comment by cousin_it on Open Thread March 2019 · 2019-04-25T19:57:45.511Z · score: 3 (1 votes) · LW · GW

Yeah, fair enough. I'm just thinking that individualism has already done quite a lot in that direction. We're much more isolated than people in past societies: many of us barely know our neighbors, can go months without talking to parents, and have few friends outside of work. So if we're discussing further individualist proposals, like basic income, maybe it's worth spending some time thinking about the consequences for the social fabric (never thought I'd use that phrase...).

Comment by cousin_it on Why does category theory exist? · 2019-04-25T14:12:20.601Z · score: 16 (8 votes) · LW · GW

This mathoverflow question seems to have many nice answers.

Comment by cousin_it on Moral Weight Doesn't Accrue Linearly · 2019-04-24T09:58:44.287Z · score: 8 (4 votes) · LW · GW

Yeah, there are tons of associative functions. For example, f(x,y)=(x^k+y^k)^(1/k) is associative for any k ≠ 0, but linear only for k=1.
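A quick numerical check, in case anyone wants to play with it (test values picked arbitrarily):

```python
# Check that f(x, y) = (x**k + y**k) ** (1/k) is associative on positive
# reals for several values of k; it's additive only at k = 1.
def f(x, y, k):
    return (x ** k + y ** k) ** (1 / k)

x, y, z = 1.7, 2.3, 0.9
for k in (0.5, 1, 2, 3):
    left = f(f(x, y, k), z, k)
    right = f(x, f(y, z, k), k)
    assert abs(left - right) < 1e-9  # associativity holds up to rounding
    print(f"k={k}: f(f(x,y),z) = f(x,f(y,z)) = {left:.6f}")
```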

Comment by cousin_it on Counterfactuals about Social Media · 2019-04-23T22:36:03.602Z · score: 3 (1 votes) · LW · GW

Yeah, that'd probably work.

Comment by cousin_it on Counterfactuals about Social Media · 2019-04-23T22:06:56.062Z · score: 3 (1 votes) · LW · GW

Ah, I see. I misread your comment. Yeah, if my plan were implemented, people wouldn't be able to make a living on YouTube. But that seems like a small effect, because very few people make a living on YouTube.

Comment by cousin_it on Counterfactuals about Social Media · 2019-04-23T19:32:05.367Z · score: 5 (2 votes) · LW · GW

Is the idea here that the internet becomes a place only for people who are intrinsically motivated to do whatever they’re doing?

No, why? I think it's fine if people waste time online. The harm from that is self-limiting, as long as there are no companies trying to escalate, manipulate & profit from that harm.

Comment by cousin_it on Counterfactuals about Social Media · 2019-04-23T12:42:31.740Z · score: 7 (5 votes) · LW · GW

Here's a radical solution: we could try outlawing or taxing away most kinds of profit-making on the internet, except for online stores (with clear criteria on what counts as a store). That could deal with most "attention economy" websites that profit from making users spend more time online, compete with each other on popularity, etc.

Comment by cousin_it on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T22:51:52.313Z · score: 6 (3 votes) · LW · GW

It seems to me that removing privacy would mostly help religions, political movements and other groups that feed on the conformity of their members. That doesn't seem like a small thing - I'm not sure what benefit could counterbalance that.

Comment by cousin_it on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T14:48:57.496Z · score: 11 (2 votes) · LW · GW

In our simplest models of information, people are better off when they have more information

In what simple model of information are people never worse off when others have more information about them?

Comment by cousin_it on Where to Draw the Boundaries? · 2019-04-15T18:02:32.310Z · score: 6 (3 votes) · LW · GW

This makes me curious - have you found that terminological debates often lead to interesting ideas? Can you give an example?

Comment by cousin_it on The Simple Solow Model of Software Engineering · 2019-04-10T21:54:02.280Z · score: 15 (4 votes) · LW · GW

Ah, I see. Your post drew a distinction between "repairing buildings" vs "creating new roads, machines and buildings", but you meant something more subtle - do we build the new building to replace the old one, or because we need two buildings? That makes sense and I see that my comment was a bit off base.

Comment by cousin_it on The Simple Solow Model of Software Engineering · 2019-04-10T16:12:48.293Z · score: 9 (4 votes) · LW · GW

In most macroeconomic models, new capital accumulates until it reaches an equilibrium level, where all investment goes toward repairing/replacing depreciated capital—resurfacing roads, replacing machines, repairing buildings rather than creating new roads, machines and buildings.

Must be something wrong with the models then, because that doesn't sound like any place I've ever been. People don't prioritize maintenance above any and all new stuff; it's human nature to invest in new stuff even while old stuff crumbles.

The same is true for software. I wish there were a natural law limiting software bloat, but look around - do you think there's such a law? Can you name any large project that reached equilibrium and stopped growing? I can't. Sure, as the project grows it gets harder to maintain at the same quality, but people don't let that stop them! They just relax the standard of quality, let older and less important features go unmaintained, and keep piling on new features anyway.

Comment by cousin_it on Open Thread April 2019 · 2019-04-09T11:05:16.761Z · score: 6 (3 votes) · LW · GW

Yes, I think you can. If there's a bunch of linear functions F_i defined on a simplex, and for any point P in the simplex there's at least one i such that F_i(P) > 0, then some linear combination of F_i with non-negative coefficients will be positive everywhere on the simplex.

Unfortunately, I haven't come up with a simple proof yet. Here's how a not-so-simple proof could work: consider the function G(P) = max_i F_i(P), and let Q be a point where G reaches its minimum. Q exists because the simplex is compact, and G(Q) > 0 by assumption. Then you can take a linear combination of those F_i whose value at Q coincides with G. There are two cases: 1) Q is in the interior of the simplex, in which case you can make the linear combination come out as a positive constant; 2) Q is on one of the faces (or edges, etc.), in which case you can recurse to that face, which is itself a simplex. Eventually you get a function that's a positive constant on that face and greater everywhere else.

Does that make sense?
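If it helps, here's a numerical illustration (a sketch assuming scipy; I represent each linear F_i by its values at the simplex vertices, so A[i][j] = F_i(vertex j), and positivity at the vertices gives positivity everywhere by linearity). The claim is then exactly the minimax theorem for the zero-sum game with payoff matrix A, so a linear program finds the coefficients:

```python
import numpy as np
from scipy.optimize import linprog

# Each linear F_i on the simplex is determined by its values at the vertices:
# A[i, j] = F_i(vertex j).  Hypothesis: for every P in the simplex some
# F_i(P) > 0, i.e. the value of the zero-sum game with matrix A is positive.
A = np.array([
    [ 3.0, -1.0, -1.0],
    [-1.0,  3.0, -1.0],
    [-1.0, -1.0,  3.0],
])
m, n = A.shape

# LP: maximize v subject to (lam @ A)[j] >= v for every vertex j,
# lam >= 0, sum(lam) = 1.  Variables are (lam_1..lam_m, v); linprog minimizes.
c = np.zeros(m + 1)
c[-1] = -1.0                                  # maximize v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (lam @ A)[j] <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
lam, v = res.x[:m], res.x[-1]
print("coefficients:", lam.round(4), "minimum of the combination:", round(v, 4))
assert v > 0  # sum_i lam_i F_i is positive on the whole simplex
```

For this A the coefficients come out uniform and the guaranteed minimum is 1/3.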

Comment by cousin_it on Open Thread April 2019 · 2019-04-09T01:32:40.894Z · score: 6 (3 votes) · LW · GW

Yeah, Venetian masks are amazing, very hard to resist buying. We bought several when visiting Venice, gave some of them away as gifts, painted them, etc.

If you can't buy one, the next best thing is to make one yourself. No 3D printing; just learn papier-mâché, which is easy enough that 4-year-olds can do it. Painting it is harder, but I'm sure you have acquaintances who would love to paint a Venetian mask or two. It's also a fun thing to do at parties.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-06T07:45:03.197Z · score: 3 (1 votes) · LW · GW

I don't know your definition.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-05T23:30:18.628Z · score: 3 (1 votes) · LW · GW

Yeah, looks like a definitional disagreement.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-05T14:24:10.473Z · score: 5 (2 votes) · LW · GW

No. If there's a coinflip that determines whether an identical copy of me is created tomorrow, my ability to perfectly coordinate the actions of all copies (logical counterfactuals) doesn't help me at all with figuring out if I should value the well-being of these copies with SIA, SSA or some other rule.
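For concreteness, here's the textbook calculation for exactly this coinflip case (a sketch in Python with exact fractions; the point is just that the two rules genuinely disagree, so something beyond coordination has to pick between them):

```python
from fractions import Fraction

# A fair coinflip decides whether a copy of me is created tomorrow.
# Each world: (prior probability, observers whose experiences match mine).
worlds = {
    "heads": (Fraction(1, 2), ["original"]),
    "tails": (Fraction(1, 2), ["original", "copy"]),
}

# SSA: keep the world priors, split each world's probability equally
# among the observers in it.
ssa = {(w, o): p / len(obs) for w, (p, obs) in worlds.items() for o in obs}

# SIA: weight each observer by the world's prior, then renormalize
# over all observers.
raw = {(w, o): p for w, (p, obs) in worlds.items() for o in obs}
total = sum(raw.values())
sia = {k: v / total for k, v in raw.items()}

for name, dist in (("SSA", ssa), ("SIA", sia)):
    p_copies = sum(v for (w, _), v in dist.items() if w == "tails")
    print(f"{name}: P(the copy was created) = {p_copies}")
```

SSA says 1/2 and SIA says 2/3, and nothing about coordinating my copies' actions tells me which rule to adopt.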

Comment by cousin_it on Degrees of Freedom · 2019-04-03T14:58:21.683Z · score: 8 (4 votes) · LW · GW

That's not how I read Venkat's post. To me he seems to be talking about freedom as a certain style or manner which comes across to others as "free". It's not about your circumstances - famously, even in concentration camps there have been people behaving visibly free and inspiring others. I find this view fruitful: it encourages you to do something free, not just be happy with how many options you have.

Comment by cousin_it on Open Thread March 2019 · 2019-03-27T23:36:06.377Z · score: 9 (4 votes) · LW · GW

When I see people write things like “with unconditional basic income, people would not need to work, but without work their lives would lose meaning”, I wonder whether they considered the following:

There are meaningful things besides work, such as spending time with your friends, or with your family.

Friendship arises from a common stressor and withers without it. Family arises from a need to support each other and kids. If you remove all stressors and all need of support, but assert that friendships and families will continue exactly as they are, you aren't thinking seriously.

Comment by cousin_it on Dependability · 2019-03-27T23:28:07.863Z · score: 3 (1 votes) · LW · GW

The kid who skips a day of school every month on a whim is only “years behind” in the sense that he hasn’t allowed his natural judgment of what’s a good use of his time to be hammered away and subjected to arbitrary rules which need to be followed for the sake of being followed.

Yeah, I know a kid like that. Follows his natural judgment all the way. Mostly it tells him to play Fortnite.

Comment by cousin_it on Dependability · 2019-03-27T08:11:46.673Z · score: 5 (2 votes) · LW · GW

When Caplan's "case against education" came out, I thought for a while about why we need schools, and came to the conclusion that schools train reliability: the ability to sit there and do the task even if you don't like it. It's not a natural human skill: kids are flaky and hate learning to be less flaky. School has to change the kid's personality by force and make it stick. That's why it takes ten years.

If you want to learn that skill as an adult, the bad news is it will take a while and it will feel bad, like any forced personality change. Like training a dog, but the dog is you. The only way is to commit to doing specific things on a daily or weekly basis, and stick with them for a long time (years). And if you fall off the wagon, you must start over, not tell yourself you've made "progress". The kid who skips a day of school every month on a whim isn't 5% less reliable than his classmates; he's years behind.

Comment by cousin_it on The Game Theory of Blackmail · 2019-03-26T15:31:03.449Z · score: 4 (2 votes) · LW · GW

Threatening to crash your car unless the passenger gives you a dollar is also not credible in the common meaning of the word...

Comment by cousin_it on The Game Theory of Blackmail · 2019-03-23T07:36:57.222Z · score: 6 (4 votes) · LW · GW

As Dagon said, blackmail is a sequential game. And the chicken payoff matrix is a poor fit: if the blackmailer faces a large penalty for revealing their information to the world, then the blackmailer's threat is not credible.
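A minimal sequential model makes the point concrete (my payoffs are made up for illustration): under backward induction, a blackmailer who would pay a cost c > 0 to reveal prefers to back down, so the victim refuses and the threat never gets carried out.

```python
# Tiny sequential blackmail game solved by backward induction.
# d = demanded payment, c = blackmailer's cost of revealing,
# h = harm to the victim if the information is revealed.
d, c, h = 1.0, 0.5, 10.0

# Last move: if the victim refused, the blackmailer picks reveal vs drop.
reveal = (-c, -h)      # payoffs: (blackmailer, victim)
drop = (0.0, 0.0)
after_refusal = reveal if reveal[0] > drop[0] else drop

# First move: the victim compares paying with the anticipated response.
pay = (d, -d)
outcome = pay if pay[1] > after_refusal[1] else after_refusal

print("if refused, the blackmailer would:",
      "reveal" if after_refusal is reveal else "drop")
print("so the victim:", "pays" if outcome is pay else "refuses",
      "-> payoffs", outcome)
```

Flip the sign of c (a blackmailer who positively enjoys revealing) and the same code has the victim paying; the threat's credibility does all the work.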

Comment by cousin_it on Has "politics is the mind-killer" been a mind-killer? · 2019-03-22T08:52:20.021Z · score: 17 (5 votes) · LW · GW

So far no one has managed to make political discussion healthy. There's certainly room for experiment, e.g. I'd like to see a space for political discussion with a norm like "please stick to easily checkable claims". But norms like "please speak in good faith" or "please try to be rational in a vague general way" have already been tried and found not to work.

Comment by cousin_it on Rest Days vs Recovery Days · 2019-03-20T14:43:26.914Z · score: 9 (4 votes) · LW · GW

Wow, you've laid out the benefits very nicely. I'd like to try it this Sunday. But I have some personal projects that are very fun and take up most of my free time. Should I stay away from those too?

Comment by cousin_it on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T23:49:39.334Z · score: 4 (2 votes) · LW · GW

I'm normalizing on my effort - eventually, on my pleasure and pain as a common currency. That's not quite the same as normalizing on chickens, because the number of dead chickens in the world isn't directly qualia.

Comment by cousin_it on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T22:25:27.405Z · score: 5 (3 votes) · LW · GW

But if you can answer questions like "how much money would I pay to save a human life under the first hypothesis" and "under the second hypothesis", which seem like questions you should be able to answer, then the conversion stops being a problem.

Comment by cousin_it on Humans aren't agents - what then for value learning? · 2019-03-18T12:00:58.821Z · score: 3 (1 votes) · LW · GW

Sure. Though learning from verbal descriptions of hypothetical behavior doesn't seem much harder than learning from actual behavior - they're both about equally far from "utility function on states of the universe" :-)

Comment by cousin_it on Privacy · 2019-03-18T08:08:44.087Z · score: 11 (6 votes) · LW · GW

Yes, less privacy leads to more conformity. But I don't think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity - ideologies and religions.

Comment by cousin_it on Privacy · 2019-03-17T13:57:12.956Z · score: 9 (4 votes) · LW · GW

Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)

Maybe I'm misreading and you're arguing that it will help us and our enemies equally? But even that seems impossible. If the Big Bad Wolf can run faster than Little Red Riding Hood, mutual visibility ensures that Little Red Riding Hood gets eaten.

Comment by cousin_it on Privacy · 2019-03-17T09:09:34.859Z · score: 12 (4 votes) · LW · GW

Let's get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?

Comment by cousin_it on Humans aren't agents - what then for value learning? · 2019-03-17T08:59:10.589Z · score: 5 (2 votes) · LW · GW

We've been over this:

steven0461: Who cares about the question what the robot “actually wants”? Certainly not the robot. Humans care about the question what they “actually want”, but that’s because they have additional structure that this robot lacks.

Wei_Dai: In other words, our “actual values” come from our being philosophers, not our being consequentialists.

That's the right answer as far as I can tell. Humans do have a part that "actually wants" something - we can introspect on our own desires - and the thermostat analogy discards it. Yes, that means any good model of our desires must also be a model of our introspective abilities, which makes the problem much harder.

Announcement: AI alignment prize round 4 winners

2019-01-20T14:46:47.912Z · score: 80 (19 votes)

Announcement: AI alignment prize round 3 winners and next round

2018-07-15T07:40:20.507Z · score: 102 (29 votes)

How to formalize predictors

2018-06-28T13:08:11.549Z · score: 16 (5 votes)

UDT can learn anthropic probabilities

2018-06-24T18:04:37.262Z · score: 63 (19 votes)

Using the universal prior for logical uncertainty

2018-06-16T14:11:27.000Z · score: 0 (0 votes)

Understanding is translation

2018-05-28T13:56:11.903Z · score: 132 (43 votes)

Announcement: AI alignment prize round 2 winners and next round

2018-04-16T03:08:20.412Z · score: 153 (45 votes)

Using the universal prior for logical uncertainty (retracted)

2018-02-28T13:07:23.644Z · score: 39 (10 votes)

UDT as a Nash Equilibrium

2018-02-06T14:08:30.211Z · score: 34 (11 votes)

Beware arguments from possibility

2018-02-03T10:21:12.914Z · score: 13 (9 votes)

An experiment

2018-01-31T12:20:25.248Z · score: 32 (11 votes)

Biological humans and the rising tide of AI

2018-01-29T16:04:54.749Z · score: 55 (18 votes)

A simpler way to think about positive test bias

2018-01-22T09:38:03.535Z · score: 34 (13 votes)

How the LW2.0 front page could be better at incentivizing good content

2018-01-21T16:11:17.092Z · score: 38 (19 votes)

Beware of black boxes in AI alignment research

2018-01-18T15:07:08.461Z · score: 70 (29 votes)

Announcement: AI alignment prize winners and next round

2018-01-15T14:33:59.892Z · score: 166 (63 votes)

Announcing the AI Alignment Prize

2017-11-04T11:44:19.000Z · score: 1 (1 votes)

Announcing the AI Alignment Prize

2017-11-03T15:47:00.092Z · score: 154 (66 votes)

Announcing the AI Alignment Prize

2017-11-03T15:45:14.810Z · score: 7 (7 votes)

The Limits of Correctness, by Bryan Cantwell Smith [pdf]

2017-08-25T11:36:38.585Z · score: 3 (3 votes)

Using modal fixed points to formalize logical causality

2017-08-24T14:33:09.000Z · score: 3 (3 votes)

Against lone wolf self-improvement

2017-07-07T15:31:46.908Z · score: 30 (28 votes)

Steelmanning the Chinese Room Argument

2017-07-06T09:37:06.760Z · score: 5 (5 votes)

A cheating approach to the tiling agents problem

2017-06-30T13:56:46.000Z · score: 3 (3 votes)

What useless things did you understand recently?

2017-06-28T19:32:20.513Z · score: 7 (7 votes)

Self-modification as a game theory problem

2017-06-26T20:47:54.080Z · score: 10 (10 votes)

Loebian cooperation in the tiling agents problem

2017-06-26T14:52:54.000Z · score: 5 (5 votes)

Thought experiment: coarse-grained VR utopia

2017-06-14T08:03:20.276Z · score: 16 (16 votes)

Bet or update: fixing the will-to-wager assumption

2017-06-07T15:03:23.923Z · score: 26 (26 votes)

Overpaying for happiness?

2015-01-01T12:22:31.833Z · score: 32 (33 votes)

A proof of Löb's theorem in Haskell

2014-09-19T13:01:41.032Z · score: 29 (30 votes)

Consistent extrapolated beliefs about math?

2014-09-04T11:32:06.282Z · score: 6 (7 votes)

Hal Finney has just died.

2014-08-28T19:39:51.866Z · score: 33 (35 votes)

"Follow your dreams" as a case study in incorrect thinking

2014-08-20T13:18:02.863Z · score: 29 (31 votes)

Three questions about source code uncertainty

2014-07-24T13:18:01.363Z · score: 9 (10 votes)

Single player extensive-form games as a model of UDT

2014-02-25T10:43:12.746Z · score: 12 (11 votes)

True numbers and fake numbers

2014-02-06T12:29:08.136Z · score: 19 (29 votes)

Rationality, competitiveness and akrasia

2013-10-02T13:45:31.589Z · score: 14 (15 votes)

Bayesian probability as an approximate theory of uncertainty?

2013-09-26T09:16:04.448Z · score: 16 (18 votes)

Notes on logical priors from the MIRI workshop

2013-09-15T22:43:35.864Z · score: 18 (19 votes)

An argument against indirect normativity

2013-07-24T18:35:04.130Z · score: 1 (14 votes)

"Epiphany addiction"

2012-08-03T17:52:47.311Z · score: 52 (56 votes)

AI cooperation is already studied in academia as "program equilibrium"

2012-07-30T15:22:32.031Z · score: 36 (37 votes)

Should you try to do good work on LW?

2012-07-05T12:36:41.277Z · score: 36 (41 votes)

Bounded versions of Gödel's and Löb's theorems

2012-06-27T18:28:04.744Z · score: 32 (33 votes)

Loebian cooperation, version 2

2012-05-31T18:41:52.131Z · score: 13 (14 votes)

Should logical probabilities be updateless too?

2012-03-28T10:02:09.575Z · score: 9 (14 votes)

Common mistakes people make when thinking about decision theory

2012-03-27T20:03:08.340Z · score: 51 (46 votes)

An example of self-fulfilling spurious proofs in UDT

2012-03-25T11:47:16.343Z · score: 20 (21 votes)

The limited predictor problem

2012-03-21T00:15:26.176Z · score: 10 (11 votes)