Comment by cousin_it on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T22:51:52.313Z · score: 6 (3 votes) · LW · GW

It seems to me that removing privacy would mostly help religions, political movements and other movements that feed on conformity of their members. That doesn't seem like a small thing - I'm not sure what benefit could counterbalance that.

Comment by cousin_it on Robin Hanson on Simple, Evidence Backed Models · 2019-04-17T14:48:57.496Z · score: 11 (2 votes) · LW · GW

In our simplest models of information, people are better off when they have more information

In what simple model of information are people never worse off when others have more information about them?

Comment by cousin_it on Where to Draw the Boundaries? · 2019-04-15T18:02:32.310Z · score: 3 (1 votes) · LW · GW

This makes me curious - have you found that terminological debates often lead to interesting ideas? Can you give an example?

Comment by cousin_it on The Simple Solow Model of Software Engineering · 2019-04-10T21:54:02.280Z · score: 15 (4 votes) · LW · GW

Ah, I see. Your post drew a distinction between "repairing buildings" vs "creating new roads, machines and buildings", but you meant something more subtle - do we build the new building to replace the old one, or because we need two buildings? That makes sense and I see that my comment was a bit off base.

Comment by cousin_it on The Simple Solow Model of Software Engineering · 2019-04-10T16:12:48.293Z · score: 9 (4 votes) · LW · GW

In most macroeconomic models, new capital accumulates until it reaches an equilibrium level, where all investment goes toward repairing/replacing depreciated capital—resurfacing roads, replacing machines, repairing buildings rather than creating new roads, machines and buildings.
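In symbols (a standard Solow setup, sketched here for reference; the notation is mine, not from the post): with output $Y = AK^{\alpha}$, savings rate $s$, and depreciation rate $\delta$, capital stops growing exactly when investment just covers depreciation:

```latex
\dot{K} = sAK^{\alpha} - \delta K = 0
\quad\Longrightarrow\quad
K^{*} = \left(\frac{sA}{\delta}\right)^{\frac{1}{1-\alpha}}
```

At $K^{*}$, all of $sY$ goes to replacing worn-out capital, which is the "equilibrium level" the quote describes.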

Must be something wrong with the models then, because that doesn't sound like any place I've ever been. People don't prioritize maintenance above any and all new stuff; it's human nature to invest in new stuff even while old stuff crumbles.

The same is true for software. I wish there was a natural law limiting software bloat, but look around - do you think there's such a law? Can you name any large project that reached equilibrium and stopped growing? I can't. Sure, as the project grows it gets harder to maintain at the same quality, but people don't let that stop them! They just relax the standard of quality, let older and less important features go unmaintained, and keep piling on new features anyway.

Comment by cousin_it on Open Thread April 2019 · 2019-04-09T11:05:16.761Z · score: 6 (3 votes) · LW · GW

Yes, I think you can. If there's a bunch of linear functions F_i defined on a simplex, and for any point P in the simplex there's at least one i such that F_i(P) > 0, then some linear combination of F_i with non-negative coefficients will be positive everywhere on the simplex.

Unfortunately I haven't come up with a simple proof yet. Here's how a not so simple proof could work: consider the function G(P) = max F_i(P). Let Q be the point where G reaches its minimum. Q exists because the simplex is compact, and G(Q) > 0 by assumption. Then you can take a linear combination of those F_i whose value at Q coincides with G. There are two cases: 1) Q is in the interior of the simplex; in this case you can make the linear combination come out as a positive constant. 2) Q is on one of the faces (or edges, etc.); in this case you can recurse to that face, which is itself a simplex. Eventually you get a function that's a positive constant on that face and greater everywhere else.

Does that make sense?
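The claim can be checked numerically on a small instance. This is a minimal sketch with example functions and coefficients I chose by hand (none of them come from the comment): since an affine function attains its minimum over a simplex at a vertex, it's enough to verify the combination is positive at the vertices.

```python
# Three affine functions on the 2D simplex {x >= 0, y >= 0, x + y <= 1}.
# At every point at least one is positive: if x <= 0.2 and y <= 0.2,
# then x + y <= 0.4 < 0.5, so the third function is positive there.
F = [
    lambda x, y: x - 0.2,
    lambda x, y: y - 0.2,
    lambda x, y: 0.5 - x - y,
]

# Nonnegative coefficients, found by solving the constraints at the vertices.
c = [10.0, 10.0, 10.0]

def combo(x, y):
    """The linear combination sum_i c_i * F_i at a point."""
    return sum(ci * fi(x, y) for ci, fi in zip(c, F))

# Positivity at the three vertices implies positivity on the whole simplex,
# because the combination is itself affine.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print([round(combo(x, y), 6) for x, y in vertices])  # [1.0, 1.0, 1.0]
```

Here the combination actually comes out as the constant 1 on the whole simplex, matching case 1 of the proof sketch.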

Comment by cousin_it on Open Thread April 2019 · 2019-04-09T01:32:40.894Z · score: 6 (3 votes) · LW · GW

Yeah, Venetian masks are amazing, very hard to resist buying. We bought several when visiting Venice, gave some of them away as gifts, painted them, etc.

If you can't buy one, the next best thing is to make one yourself. No 3D printing, just learn papier mache, it's easy enough that 4 year olds can do it. Painting it is harder, but I'm sure you have acquaintances who would love to paint a Venetian mask or two. It's also a fun thing to do at parties.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-06T07:45:03.197Z · score: 3 (1 votes) · LW · GW

I don't know your definition.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-05T23:30:18.628Z · score: 3 (1 votes) · LW · GW

Yeah, looks like a definitional disagreement.

Comment by cousin_it on Would solving logical counterfactuals solve anthropics? · 2019-04-05T14:24:10.473Z · score: 5 (2 votes) · LW · GW

No. If there's a coinflip that determines whether an identical copy of me is created tomorrow, my ability to perfectly coordinate the actions of all copies (logical counterfactuals) doesn't help me at all with figuring out if I should value the well-being of these copies with SIA, SSA or some other rule.

Comment by cousin_it on Degrees of Freedom · 2019-04-03T14:58:21.683Z · score: 7 (3 votes) · LW · GW

That's not how I read Venkat's post. To me he seems to be talking about freedom as a way of behaving - a certain style or manner that comes across as "free". It's not about your circumstances: famously, even in concentration camps some people behaved visibly free and inspired others. I find this view fruitful: it encourages you to do something free, not just be happy with how many options you have.

Comment by cousin_it on Open Thread March 2019 · 2019-03-27T23:36:06.377Z · score: 9 (4 votes) · LW · GW

When I see people write things like “with unconditional basic income, people would not need to work, but without work their lives would lose meaning”, I wonder whether they considered the following:

There are meaningful things besides work, such as spending time with your friends, or with your family.

Friendship arises from a common stressor and withers without it. Family arises from a need to support each other and kids. If you remove all stressors and all need of support, but assert that friendships and families will continue exactly as they are, you aren't thinking seriously.

Comment by cousin_it on Dependability · 2019-03-27T23:28:07.863Z · score: 3 (1 votes) · LW · GW

The kid who skips a day of school every month on a whim is only “years behind” in the sense that he hasn’t allowed his natural judgment of what’s a good use of his time to be hammered away and subjected to arbitrary rules which need to be followed for the sake of being followed.

Yeah, I know a kid like that. Follows his natural judgment all the way. Mostly it tells him to play Fortnite.

Comment by cousin_it on Dependability · 2019-03-27T08:11:46.673Z · score: 5 (2 votes) · LW · GW

When Caplan's "case against education" came out, I thought for a while about why we need schools, and came to the conclusion that schools train reliability. The ability to sit there and do the task even if you don't like it. It's not a natural human skill: kids are flaky and hate learning to be less flaky. School has to change the kid's personality by force and make it stick. That's why it takes ten years.

If you want to learn that skill as an adult, the bad news is it will take a while and it will feel bad, like any forced personality change. Like training a dog, but the dog is you. The only way is to commit to doing specific things on a daily or weekly basis, and stick with them for a long time (years). And if you fall off the wagon, you must start over, not tell yourself you've made "progress". The kid who skips a day of school every month on a whim isn't 5% less reliable than his classmates, he's years behind.

Comment by cousin_it on The Game Theory of Blackmail · 2019-03-26T15:31:03.449Z · score: 3 (1 votes) · LW · GW

Threatening to crash your car unless the passenger gives you a dollar is also not credible in the common meaning of the word...

Comment by cousin_it on The Game Theory of Blackmail · 2019-03-23T07:36:57.222Z · score: 5 (3 votes) · LW · GW

As Dagon said, blackmail is a sequential game. And the chicken payoff matrix is a poor fit: if the blackmailer faces a large penalty for revealing their information to the world, then the blackmailer's threat is not credible.
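The credibility point can be made concrete with backward induction on the sequential game. This is a hedged sketch with illustrative payoffs of my own choosing (the comment doesn't give numbers): the blackmailer moves *after* the victim's refusal, so only the blackmailer's ex-post incentives matter.

```python
# Hypothetical payoffs, in arbitrary units.
DEMAND = 10        # what the blackmailer demands from the victim
REVEAL_COST = 50   # the blackmailer's own penalty for revealing the secret
HARM = 100         # the victim's loss if the secret is revealed

def blackmailer_best_response_if_refused():
    # After a refusal, the blackmailer picks whatever is best *at that point*:
    # revealing costs them REVEAL_COST, staying quiet costs nothing.
    return "reveal" if -REVEAL_COST > 0 else "stay quiet"

def victim_choice():
    # The victim reasons backward from the blackmailer's ex-post incentives.
    if blackmailer_best_response_if_refused() == "reveal":
        return "pay" if DEMAND < HARM else "refuse"
    return "refuse"

print(blackmailer_best_response_if_refused())  # stay quiet
print(victim_choice())                         # refuse
```

With any positive penalty for revealing, backward induction says the blackmailer stays quiet after a refusal, so the threat carries no weight - unlike chicken, where both players move simultaneously.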

Comment by cousin_it on Has "politics is the mind-killer" been a mind-killer? · 2019-03-22T08:52:20.021Z · score: 17 (5 votes) · LW · GW

So far no one has managed to make political discussion healthy. There's certainly room for experiment, e.g. I'd like to see a space for political discussion with a norm like "please stick to easily checkable claims". But norms like "please speak in good faith" or "please try to be rational in a vague general way" have already been tried and found not to work.

Comment by cousin_it on Rest Days vs Recovery Days · 2019-03-20T14:43:26.914Z · score: 9 (4 votes) · LW · GW

Wow, you've laid out the benefits very nicely. I'd like to try it this Sunday. But I have some personal projects that are very fun and take up most of my free time. Should I stay away from those too?

Comment by cousin_it on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T23:49:39.334Z · score: 4 (2 votes) · LW · GW

I'm normalizing on my effort - eventually, on my pleasure and pain as a common currency. That's not quite the same as normalizing on chickens, because the number of dead chickens in the world isn't directly qualia.

Comment by cousin_it on Is there a difference between uncertainty over your utility function and uncertainty over outcomes? · 2019-03-18T22:25:27.405Z · score: 5 (3 votes) · LW · GW

But if you can answer questions like "how much money would I pay to save a human life under the first hypothesis" and "under the second hypothesis", which seem like questions you should be able to answer, then the conversion stops being a problem.
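As a hedged illustration of this conversion trick, with entirely made-up numbers: once each moral hypothesis is expressed in a common currency (dollars you'd pay), comparing actions across hypotheses is ordinary expected-value arithmetic.

```python
# Credence in each moral hypothesis (hypothetical values).
p = {"h1": 0.6, "h2": 0.4}

# Dollar value of outcomes under each hypothesis (also hypothetical).
pay_per_life = {"h1": 10_000, "h2": 2_000}    # $ to save one human life
pay_per_chicken = {"h1": 0.1, "h2": 20.0}     # $ to save one chicken

def expected_dollar_value(amounts):
    """Credence-weighted dollar value of an action.

    amounts: {hypothesis: dollar value of the action under that hypothesis}
    """
    return sum(p[h] * amounts[h] for h in p)

save_a_life = expected_dollar_value(pay_per_life)
save_500_chickens = expected_dollar_value(
    {h: 500 * pay_per_chicken[h] for h in p})

print(save_a_life)        # 6800.0
print(save_500_chickens)  # 4030.0
```

Under these made-up numbers, saving the life wins; the point is only that no cross-hypothesis "utility conversion" step remains once both answers are in dollars.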

Comment by cousin_it on Humans aren't agents - what then for value learning? · 2019-03-18T12:00:58.821Z · score: 3 (1 votes) · LW · GW

Sure. Though learning from verbal descriptions of hypothetical behavior doesn't seem much harder than learning from actual behavior - they're both about equally far from "utility function on states of the universe" :-)

Comment by cousin_it on Privacy · 2019-03-18T08:08:44.087Z · score: 11 (6 votes) · LW · GW

Yes, less privacy leads to more conformity. But I don't think that will disproportionately help small projects that you like. Mostly it will help big projects that feed on conformity - ideologies and religions.

Comment by cousin_it on Privacy · 2019-03-17T13:57:12.956Z · score: 7 (3 votes) · LW · GW

Implication: more generally available information about what strategies people are using helps “our” enemies more than it helps “us”. (This seems false to me, for notions of “us” that I usually use in strategy)

Maybe I'm misreading and you're arguing that it will help us and our enemies equally? But even that seems impossible. If the Big Bad Wolf can run faster than Little Red Riding Hood, mutual visibility ensures that Little Red Riding Hood gets eaten.

Comment by cousin_it on Privacy · 2019-03-17T09:09:34.859Z · score: 10 (3 votes) · LW · GW

Let's get back to the world of angels problem. You do seem to be saying that removing privacy would get us closer to a world of angels. Why?

Comment by cousin_it on Humans aren't agents - what then for value learning? · 2019-03-17T08:59:10.589Z · score: 5 (2 votes) · LW · GW

We've been over this:

steven0461: Who cares about the question what the robot “actually wants”? Certainly not the robot. Humans care about the question what they “actually want”, but that’s because they have additional structure that this robot lacks.

Wei_Dai: In other words, our “actual values” come from our being philosophers, not our being consequentialists.

That's the right answer as far as I can tell. Humans do have a part that "actually wants" something - we can introspect on our own desires - and the thermostat analogy discards it. Yes, that means any good model of our desires must also be a model of our introspective abilities, which makes the problem much harder.

Comment by cousin_it on Privacy · 2019-03-17T08:00:01.007Z · score: 13 (5 votes) · LW · GW

I think you have been. In every comment you try to cast doubt on justifications for privacy.

Comment by cousin_it on Privacy · 2019-03-17T07:38:20.800Z · score: 14 (6 votes) · LW · GW

I agree that privacy would be less necessary in a hypothetical world of angels. But I don't find it convincing that removing privacy would bring about such a world, and arguments of this type (let's discard a human right like property / free speech / privacy, and a world of angels will result) have a very poor track record.

Comment by cousin_it on Active Curiosity vs Open Curiosity · 2019-03-15T13:01:05.527Z · score: 12 (4 votes) · LW · GW

I'd call it concentrated vs. diffuse curiosity instead. Strong desire to know one particular thing vs. many weaker desires to know many things.

  1. You receive a big wrapped gift but can't unwrap it until tomorrow. This is concentrated curiosity. (Less pleasant variant: the results of your health test are due tomorrow.)

  2. You unwrap the gift and it's a strange device with a big green button. You wonder what will happen if you press it. This is still concentrated curiosity.

  3. You press the button and a panel opens up, showing fifty more buttons. Now your curiosity has become more diffuse and exploratory, spread out across all the buttons.

Comment by cousin_it on How dangerous is it to ride a bicycle without a helmet? · 2019-03-13T18:37:25.415Z · score: 6 (3 votes) · LW · GW

I think if you try weightlifting twice a week for a month and pay a lot of attention to form, there's a >20% chance that by the end of the month you'll find the fun in it and keep going. Though I agree that going to a gym can be a hassle. Luckily we have a gym at the office, so I just go every day.

Comment by cousin_it on How dangerous is it to ride a bicycle without a helmet? · 2019-03-13T14:07:42.135Z · score: 3 (1 votes) · LW · GW

Exercise can also make you stronger, more flexible, improve your posture, give you muscle mass that burns fat just by existing, teach your mind that you can overcome difficulties, etc. Biking to work every morning has almost none of these benefits. Why not spend the same 30min/day on something that gives you all the benefits?

Comment by cousin_it on Muqaata'a by Fahad Himsi (I.) · 2019-03-11T18:31:34.665Z · score: 8 (4 votes) · LW · GW

Wow, if this is you writing metafiction a la Northern Caves, count me in.

Comment by cousin_it on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T23:00:23.093Z · score: 3 (1 votes) · LW · GW

Compare photos of sprinters (anaerobic) to marathon runners (aerobic) and ask yourself who you'd rather look like.

Comment by cousin_it on [NeedAdvice]How to stay Focused on a long-term goal? · 2019-03-09T13:07:17.536Z · score: 5 (5 votes) · LW · GW

It's great that you've seen past your philosophy and noticed another living person. That realization has blown your mind and rewritten your values. Good job! But here's a hint: there are many other living people in the world. Imagine how full of value your life would become if you removed some more of your philosophy and noticed a few more of them!

Comment by cousin_it on How dangerous is it to ride a bicycle without a helmet? · 2019-03-09T12:55:55.179Z · score: 4 (2 votes) · LW · GW

I think taking risks is okay if it's fun, but biking on car roads isn't very fun, so why? And I'm not sure about the exercise benefits: breathing heavily while next to cars is probably more harm than help, and having aerobic activity as your only form of exercise is overrated anyway. So I guess in your place I'd choose the most pleasant way to commute, without regard for risk or exercise or whatever. If it's still biking, then fine!

Comment by cousin_it on Asking for help teaching a critical thinking class. · 2019-03-07T19:19:01.162Z · score: 3 (1 votes) · LW · GW

Yeah, it came across.

Comment by cousin_it on Asking for help teaching a critical thinking class. · 2019-03-07T11:25:16.458Z · score: 6 (4 votes) · LW · GW

the kind of cognition Malfoy used in hpmor to think about whether magic is heritable

Magic is heritable in hpmor. The only question was whether it's binary (one gene) or continuous (many genes, leading to a spectrum of blood purity). If you want to teach that kind of critical thinking, get ready for the fireworks when your students start asking which abilities in our world are heritable, binary or continuous.

Comment by cousin_it on Three ways that "Sufficiently optimized agents appear coherent" can be false · 2019-03-06T13:57:50.772Z · score: 5 (2 votes) · LW · GW

I think there's another important reason: a powerful agent might sometimes need to "burn" its own utility to get a bargaining edge, and we don't have a full theory for that yet.

Comment by cousin_it on Asymptotically Benign AGI · 2019-03-06T10:37:56.487Z · score: 4 (2 votes) · LW · GW

Are there UDT-ish concerns with breaking isolation of episodes? For example, if the AI receives a low reward at the beginning of episode 117, does it have an incentive to manipulate the external world to make episode 117 happen many times somehow, with most of these times giving it a higher reward? For another example, can the AI at episode 117 realize that it's in a game theory situation with the AI at episodes 116 and 118 and trade rewards with them acausally, leading to long-term goal directed behavior?

Comment by cousin_it on Asymptotically Benign AGI · 2019-03-06T10:30:09.509Z · score: 5 (2 votes) · LW · GW

How does your AI know to avoid running internal simulations containing lots of suffering?

Comment by cousin_it on Personalized Medicine For Real · 2019-03-05T14:45:00.805Z · score: 5 (2 votes) · LW · GW

Have you looked at any startups for mass market healthcare? Do any of them seem especially promising?

Comment by cousin_it on IRL 1/8: Inverse Reinforcement Learning and the problem of degeneracy · 2019-03-04T15:13:15.039Z · score: 7 (6 votes) · LW · GW

Behind a login wall.

Comment by cousin_it on Rule Thinkers In, Not Out · 2019-02-28T09:13:19.047Z · score: 20 (8 votes) · LW · GW

I decided to ignore Michael after our first in-person conversation, where he said I shouldn't praise the Swiss healthcare system which I have lots of experience with, because MetaMed is the only working healthcare system in the world (and a roomful of rationalists nodded along to that, suggesting that I bet money against him or something).

This isn't to single out Michael or the LW community. The world is full of people who spout nonsense confidently. Their ideas can deserve close attention from a few "angel investors", but that doesn't mean they deserve everyone's attention by default, as Scott seems to say.

Comment by cousin_it on So You Want to Colonize The Universe Part 4: Velocity Changes and Energy · 2019-02-28T09:02:06.835Z · score: 3 (1 votes) · LW · GW

Interesting idea about magnetic braking, I didn't know about it.

Not sure you need a very powerful laser to accelerate. Stuart recently pointed out here that you can add another mirror at the source, so the light keeps bouncing between the ship and the source, improving efficiency by a huge factor.

Comment by cousin_it on Rule Thinkers In, Not Out · 2019-02-27T07:43:34.872Z · score: 11 (6 votes) · LW · GW

In science or art bad ideas have no downside, so we judge talent at its best. But in policy bad ideas have disproportionate downside.

Comment by cousin_it on Is LessWrong a "classic style intellectual world"? · 2019-02-27T07:35:13.273Z · score: 5 (2 votes) · LW · GW

I would guess that working on a big church painting felt like more responsibility before people and God, not more license for creativity.

Comment by cousin_it on Tiles: Report on Programmatic Code Generation · 2019-02-23T11:28:33.533Z · score: 3 (1 votes) · LW · GW

"Blocks of code" is a leaky, untyped abstraction. It's better to have something like a DOM API for the target language, where each node in the parse tree is represented as a typed object with properties and children. (For example, if you're generating C, an #include directive would be represented as a typed object that's part of a larger object.) Then you can use the same API to generate, analyze, or transform code. It looks like overkill in toy examples, but can be impressively concise in longer examples.
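A minimal sketch of the idea, assuming Python and C as the target language; the node names and structure here are my own invention, not an existing library. Each construct is a typed object, and rendering to text is just one of several operations the same tree supports.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Include:
    """A C #include directive as a typed node."""
    header: str
    def render(self) -> str:
        return f"#include <{self.header}>"

@dataclass
class Statement:
    """A single statement inside a function body."""
    text: str
    def render(self) -> str:
        return f"    {self.text};"

@dataclass
class Function:
    """A C function definition with a typed list of child statements."""
    return_type: str
    name: str
    body: List[Statement] = field(default_factory=list)
    def render(self) -> str:
        lines = [f"{self.return_type} {self.name}(void) {{"]
        lines += [s.render() for s in self.body]
        lines.append("}")
        return "\n".join(lines)

@dataclass
class TranslationUnit:
    """The root node: a whole C source file."""
    includes: List[Include] = field(default_factory=list)
    functions: List[Function] = field(default_factory=list)
    def render(self) -> str:
        parts = [i.render() for i in self.includes]
        parts += [f.render() for f in self.functions]
        return "\n\n".join(parts)

unit = TranslationUnit(
    includes=[Include("stdio.h")],
    functions=[Function("int", "main", [
        Statement('printf("hello\\n")'),
        Statement("return 0"),
    ])],
)
print(unit.render())
```

Because the tree is typed, analysis and transformation passes (e.g. collecting every Include, or renaming a Function) walk the same objects that rendering does, which is where this approach pays off over string concatenation.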

Comment by cousin_it on "Other people are wrong" vs "I am right" · 2019-02-23T11:08:50.493Z · score: 13 (9 votes) · LW · GW

In the last year or so, I've noticed people getting tired of staring at their phones all the time and of holding many strong opinions prompted by the internet. It looks like the internet is settling into its niche alongside books and TV - nice things, but everyone knows they shouldn't be the center of your being.

Comment by cousin_it on Avoiding Jargon Confusion · 2019-02-20T23:15:14.965Z · score: 3 (1 votes) · LW · GW

"Escalation spiral" is mixing two spatial metaphors, both far removed from the thing we're talking about. That's too abstract for me: being in a bad online argument doesn't feel like walking up a spiral staircase. I prefer words that say how I feel about the thing - something like "quarreling", "petty disagreement", or "argumentative black hole".

Comment by cousin_it on Avoiding Jargon Confusion · 2019-02-20T13:08:57.400Z · score: 5 (2 votes) · LW · GW

Not objecting to the concept - having more concepts is good. But I think if you want to contribute to language, concepts are less than half of the work. Most of the work is finding the right words and making them work well with other words. Here's a programming analogy: if you come up with a cool new algorithm and want to add it to a system that already has a billion lines of code, most of your effort should be spent on integrating with the system. Otherwise the whole system becomes crap over time. That's how I think about these things: coining an ugly new word is affixing an ugly shed to the cathedral of language.

Comment by cousin_it on Announcement: AI alignment prize round 4 winners · 2019-02-20T12:50:36.102Z · score: 13 (3 votes) · LW · GW

Sorry for taking so long to reply.

I don't think the money incentive is strong enough. Nobody will do good AI safety work just for a chance at 5K dollars. The prestige incentive is stronger, but if we get fewer entries over time, the prestige incentive falls and we get even fewer entries next time etc.

Canceling was my suggestion, and others agreed to it. I can't speak for them, but my main reason was that it's not fun to work on a project without growth, even for an important cause. The choice was between canceling the prize and tweaking it to achieve growth, and I didn't have good ideas for tweaks.

Announcement: AI alignment prize round 4 winners

2019-01-20T14:46:47.912Z · score: 80 (19 votes)

Announcement: AI alignment prize round 3 winners and next round

2018-07-15T07:40:20.507Z · score: 102 (29 votes)

How to formalize predictors

2018-06-28T13:08:11.549Z · score: 16 (5 votes)

UDT can learn anthropic probabilities

2018-06-24T18:04:37.262Z · score: 63 (19 votes)

Using the universal prior for logical uncertainty

2018-06-16T14:11:27.000Z · score: 0 (0 votes)

Understanding is translation

2018-05-28T13:56:11.903Z · score: 132 (43 votes)

Announcement: AI alignment prize round 2 winners and next round

2018-04-16T03:08:20.412Z · score: 153 (45 votes)

Using the universal prior for logical uncertainty (retracted)

2018-02-28T13:07:23.644Z · score: 39 (10 votes)

UDT as a Nash Equilibrium

2018-02-06T14:08:30.211Z · score: 34 (11 votes)

Beware arguments from possibility

2018-02-03T10:21:12.914Z · score: 13 (9 votes)

An experiment

2018-01-31T12:20:25.248Z · score: 32 (11 votes)

Biological humans and the rising tide of AI

2018-01-29T16:04:54.749Z · score: 55 (18 votes)

A simpler way to think about positive test bias

2018-01-22T09:38:03.535Z · score: 34 (13 votes)

How the LW2.0 front page could be better at incentivizing good content

2018-01-21T16:11:17.092Z · score: 38 (19 votes)

Beware of black boxes in AI alignment research

2018-01-18T15:07:08.461Z · score: 70 (29 votes)

Announcement: AI alignment prize winners and next round

2018-01-15T14:33:59.892Z · score: 166 (63 votes)

Announcing the AI Alignment Prize

2017-11-04T11:44:19.000Z · score: 1 (1 votes)

Announcing the AI Alignment Prize

2017-11-03T15:47:00.092Z · score: 154 (66 votes)

Announcing the AI Alignment Prize

2017-11-03T15:45:14.810Z · score: 7 (7 votes)

The Limits of Correctness, by Bryan Cantwell Smith [pdf]

2017-08-25T11:36:38.585Z · score: 3 (3 votes)

Using modal fixed points to formalize logical causality

2017-08-24T14:33:09.000Z · score: 3 (3 votes)

Against lone wolf self-improvement

2017-07-07T15:31:46.908Z · score: 30 (28 votes)

Steelmanning the Chinese Room Argument

2017-07-06T09:37:06.760Z · score: 5 (5 votes)

A cheating approach to the tiling agents problem

2017-06-30T13:56:46.000Z · score: 3 (3 votes)

What useless things did you understand recently?

2017-06-28T19:32:20.513Z · score: 7 (7 votes)

Self-modification as a game theory problem

2017-06-26T20:47:54.080Z · score: 10 (10 votes)

Loebian cooperation in the tiling agents problem

2017-06-26T14:52:54.000Z · score: 5 (5 votes)

Thought experiment: coarse-grained VR utopia

2017-06-14T08:03:20.276Z · score: 16 (16 votes)

Bet or update: fixing the will-to-wager assumption

2017-06-07T15:03:23.923Z · score: 26 (26 votes)

Overpaying for happiness?

2015-01-01T12:22:31.833Z · score: 32 (33 votes)

A proof of Löb's theorem in Haskell

2014-09-19T13:01:41.032Z · score: 29 (30 votes)

Consistent extrapolated beliefs about math?

2014-09-04T11:32:06.282Z · score: 6 (7 votes)

Hal Finney has just died.

2014-08-28T19:39:51.866Z · score: 33 (35 votes)

"Follow your dreams" as a case study in incorrect thinking

2014-08-20T13:18:02.863Z · score: 29 (31 votes)

Three questions about source code uncertainty

2014-07-24T13:18:01.363Z · score: 9 (10 votes)

Single player extensive-form games as a model of UDT

2014-02-25T10:43:12.746Z · score: 12 (11 votes)

True numbers and fake numbers

2014-02-06T12:29:08.136Z · score: 19 (29 votes)

Rationality, competitiveness and akrasia

2013-10-02T13:45:31.589Z · score: 14 (15 votes)

Bayesian probability as an approximate theory of uncertainty?

2013-09-26T09:16:04.448Z · score: 16 (18 votes)

Notes on logical priors from the MIRI workshop

2013-09-15T22:43:35.864Z · score: 18 (19 votes)

An argument against indirect normativity

2013-07-24T18:35:04.130Z · score: 1 (14 votes)

"Epiphany addiction"

2012-08-03T17:52:47.311Z · score: 52 (56 votes)

AI cooperation is already studied in academia as "program equilibrium"

2012-07-30T15:22:32.031Z · score: 36 (37 votes)

Should you try to do good work on LW?

2012-07-05T12:36:41.277Z · score: 36 (41 votes)

Bounded versions of Gödel's and Löb's theorems

2012-06-27T18:28:04.744Z · score: 32 (33 votes)

Loebian cooperation, version 2

2012-05-31T18:41:52.131Z · score: 13 (14 votes)

Should logical probabilities be updateless too?

2012-03-28T10:02:09.575Z · score: 9 (14 votes)

Common mistakes people make when thinking about decision theory

2012-03-27T20:03:08.340Z · score: 51 (46 votes)

An example of self-fulfilling spurious proofs in UDT

2012-03-25T11:47:16.343Z · score: 20 (21 votes)

The limited predictor problem

2012-03-21T00:15:26.176Z · score: 10 (11 votes)