cousin_it's Shortform 2019-10-26T17:37:44.390Z · score: 3 (1 votes)
Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z · score: 80 (19 votes)
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z · score: 102 (29 votes)
How to formalize predictors 2018-06-28T13:08:11.549Z · score: 16 (5 votes)
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z · score: 63 (19 votes)
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z · score: 0 (0 votes)
Understanding is translation 2018-05-28T13:56:11.903Z · score: 139 (47 votes)
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z · score: 155 (46 votes)
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z · score: 39 (10 votes)
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z · score: 36 (12 votes)
Beware arguments from possibility 2018-02-03T10:21:12.914Z · score: 13 (9 votes)
An experiment 2018-01-31T12:20:25.248Z · score: 32 (11 votes)
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z · score: 55 (18 votes)
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z · score: 34 (13 votes)
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z · score: 38 (19 votes)
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z · score: 71 (30 votes)
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z · score: 167 (64 votes)
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z · score: 1 (1 votes)
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z · score: 156 (68 votes)
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z · score: 7 (7 votes)
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z · score: 3 (3 votes)
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z · score: 3 (3 votes)
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z · score: 30 (28 votes)
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z · score: 5 (5 votes)
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z · score: 3 (3 votes)
What useless things did you understand recently? 2017-06-28T19:32:20.513Z · score: 7 (7 votes)
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z · score: 10 (10 votes)
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z · score: 5 (5 votes)
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z · score: 16 (16 votes)
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z · score: 30 (27 votes)
Overpaying for happiness? 2015-01-01T12:22:31.833Z · score: 32 (33 votes)
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z · score: 29 (30 votes)
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z · score: 6 (7 votes)
Hal Finney has just died. 2014-08-28T19:39:51.866Z · score: 33 (35 votes)
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z · score: 29 (31 votes)
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z · score: 9 (10 votes)
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z · score: 21 (12 votes)
True numbers and fake numbers 2014-02-06T12:29:08.136Z · score: 19 (29 votes)
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z · score: 14 (15 votes)
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z · score: 16 (18 votes)
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z · score: 18 (19 votes)
An argument against indirect normativity 2013-07-24T18:35:04.130Z · score: 1 (14 votes)
"Epiphany addiction" 2012-08-03T17:52:47.311Z · score: 52 (56 votes)
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z · score: 36 (37 votes)
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z · score: 36 (41 votes)
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z · score: 32 (33 votes)
Loebian cooperation, version 2 2012-05-31T18:41:52.131Z · score: 13 (14 votes)
Should logical probabilities be updateless too? 2012-03-28T10:02:09.575Z · score: 12 (15 votes)
Common mistakes people make when thinking about decision theory 2012-03-27T20:03:08.340Z · score: 54 (47 votes)
An example of self-fulfilling spurious proofs in UDT 2012-03-25T11:47:16.343Z · score: 20 (21 votes)


Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-05T16:34:10.580Z · score: 2 (1 votes) · LW · GW

Yes, if the potential effect size is large, you can get away with imprecise answers to some questions. But if there are many questions, at some point your "imprecision budget" will be spent. For example, will you be able to detect whether your dosing leads to later hospitalization instead of no hospitalization? Or whether it weakens immunity instead of strengthening it?

Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-04T17:35:13.279Z · score: 7 (3 votes) · LW · GW

Let's say X% get hospitalized within 2 weeks. What's the highest value of X that would say variolation is a good idea? Keep in mind that:

  • The demographics of your sample aren't the same as the general population's; hopefully you didn't include many 60+ folks.

  • You don't know how many botched the protocol, and a botch could go in any direction (dose too high, too low, or no dose at all).

  • You don't know the hospitalization rate after contracting corona in normal ways, which can also involve a low dose. Many people don't get tested now, and the epidemic is spreading.

  • Etc.

Comment by cousin_it on How About a Remote Variolation Study? · 2020-04-04T13:38:20.848Z · score: 7 (3 votes) · LW · GW
  1. Spain has stabilized at 7K new cases/day, Italy at 5K new cases/day. At this rate it will take many months to reach a significant percentage of the population. The same will probably happen in the US. Most people won't get infected, so trying amateur vaccination is more dangerous than doing nothing.

  2. How will you send doses to volunteers? If I were a delivery company, I would refuse to deliver this and would call the cops.

  3. How will you measure the results? People have trouble measuring the death rate from corona; sometimes they can't even agree on the order of magnitude. It's really low and depends on demographic factors, environment, treatment and other things that aren't well understood. If you want to measure a change in that rate by looking at 10k remote volunteers in reasonable time, I'd like to see your methodology and error bounds.
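
A quick back-of-the-envelope sketch of those error bounds (the 0.5% hospitalization rate here is a made-up number, purely for illustration):

```python
import math

def binomial_ci_halfwidth(p, n, z=1.96):
    """Half-width of a normal-approximation 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical numbers: a 0.5% hospitalization rate among 10,000 volunteers.
p, n = 0.005, 10_000
hw = binomial_ci_halfwidth(p, n)
print(f"estimate: {p:.3%} +/- {hw:.3%}")  # roughly a 28% relative error on the estimate
```

Even with 10k volunteers and perfect compliance, the raw rate comes with error bars of a quarter of its own size, before any of the confounders above.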

Comment by cousin_it on mind viruses about body viruses · 2020-03-28T08:43:09.077Z · score: 12 (7 votes) · LW · GW

Counterpoint: most people who will read your post are already better than average at vetting-memes-before-spreading. If you succeed at making these folks even more cautious, everyone else in the world will still keep spreading unvetted memes, so worse memes will win.

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-27T07:29:11.912Z · score: 6 (3 votes) · LW · GW

Wait, so your graph shows the number of people having their 2-day "infectious period" at any given time, which could be much lower than the number of people infected at a given time? That doesn't seem to be explained on the page.

Anyway, I think the really important number is how many people are having their "required hospitalization period" at any given time (which is longer than 2 days). Maybe you could show that too, since you're already showing the "care capacity" line?

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-26T18:09:39.790Z · score: 2 (1 votes) · LW · GW

It still looks weird to me. For example, in Switzerland with no mitigation it estimates 1% of people infected now and 3% at the peak on Apr 14, which is 2.5 weeks from now. Since each infection lasts a couple weeks or more, and there have been few deaths and recoveries so far, that means <5% of the population will have been infected by that point. And then it says active infections will start falling. Why?

Comment by cousin_it on March 24th: Daily Coronavirus Link Updates · 2020-03-26T16:26:25.317Z · score: 3 (2 votes) · LW · GW

Does anyone know why the dashboard says infections will peak at 3% if no mitigation is done?

Comment by cousin_it on Occam's Guillotine · 2020-03-23T11:26:43.809Z · score: 4 (2 votes) · LW · GW

I think there are two issues here: 1) what are the right beliefs to have about life, and 2) what's the right emotional attitude to life. You paint a picture of truth as a harsh destroyer of illusions, but why not describe it as a source of wonder / beauty / power / progress instead?

Comment by cousin_it on Robin Hanson on whether governments can squash COVID-19 · 2020-03-19T23:05:29.377Z · score: 2 (1 votes) · LW · GW

Out of the four "obvious considerations" at the start of the post, two seem questionable to me.

> you have to do a lot more to squash than to flatten

Afaik to get worthwhile flattening (not much overloading of hospital beds) we need to get R0 pretty close to 1 anyway, so the extra effort to get it below 1 (squash) could be relatively small.

> while flattening policies need be maintained only for a few months, squashing policies must be maintained until a strong treatment is available, probably years

Afaik flattening over a few months = almost as many deaths as no flattening at all. The hump is too big, and the number of hospital beds too small, to safely "process" half of the population in a few months.
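
To sketch why the hump is too big, here's a toy SIR simulation. All the numbers (an R0 of 1.4 after "flattening", 14 infectious days, beds for 0.3% of the population) are assumptions picked for illustration, not estimates:

```python
# Toy SIR model: even with R0 pushed down to ~1.4, the peak of simultaneous
# infections can dwarf a plausible hospital-bed capacity.
def sir_peak(r0, infectious_days=14, days=730, dt=1.0):
    """Peak fraction of the population infected at once, by Euler-stepping SIR."""
    beta, gamma = r0 / infectious_days, 1 / infectious_days
    s, i = 0.999, 0.001  # susceptible and infected fractions
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - recoveries
        peak = max(peak, i)
    return peak

hospital_capacity = 0.003        # assumed: beds for ~0.3% of the population
print(sir_peak(1.4))             # peak fraction infected at once, ~4-5%
print(sir_peak(1.4) > hospital_capacity)  # True
```

The peak is an order of magnitude above capacity, which is the "almost as many deaths" scenario; getting the peak under capacity requires R0 so close to 1 that squashing is nearly in reach anyway.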

Comment by cousin_it on Rationalists, Post-Rationalists, And Rationalist-Adjacents · 2020-03-15T08:04:32.229Z · score: 13 (5 votes) · LW · GW

In terms of conversation style, I'd define a "rationalist" as someone who's against non-factual objections to factual claims: "you're not an expert", "you're motivated to say this", "your claim has bad consequences" and so on. An intermediate stage would be "grudging rationalist": someone who can refrain from using such objections if asked, but still listens to them, and relapses to using them when among non-rationalists.

Comment by cousin_it on The absurdity of un-referenceable entities · 2020-03-14T22:13:04.086Z · score: 4 (2 votes) · LW · GW

I think Jessica is right on this point. Within a system like ZFC, you can't define the system's own definability predicate, so the sentence "there are numbers undefinable in ZFC" can't even be said, let alone proved. (Which is just as well, since ZFC has a countable model, and even a model whose every member is definable.) The same applies to the system of everything you believe about math, as long as it's consistent and at least as strong as ZFC.

Comment by cousin_it on Puzzles for Physicalists · 2020-03-13T15:52:09.162Z · score: 2 (1 votes) · LW · GW

I think counterfactuals only make sense when talking about a part of a system from the perspective of another part. Maybe probabilities as well. Similar to how in quantum mechanics, a system of two qubits can be in a pure state, but from the perspective of the first qubit, the second is in a mixed state.

In this view, causality/counterfactuals don't have to be physically fundamental. For example, you can have a Game of Life world where "all causal claims reduce to claims about state" as you say: "if X then Y" where X and Y are successive states. Yet it makes perfect sense for an AI in that world to use probabilities or counterfactuals over another, demarcated part of the world.

There is of course a tension between that and logical decision theories, but maybe that's ok?

Comment by cousin_it on Name of Problem? · 2020-03-11T08:12:30.546Z · score: 2 (1 votes) · LW · GW

I'm actually not sure it's a regular grammar. Consider this program:

f(n) := n+f(n-1)

Which gives a tree whose root + has children n and the expansion of f(n-1); the argument n-1 is itself a - node with children n and 1, and so on down the levels.

The path from any 1 to the root contains a bunch of minuses, then at least as many pluses. That's not regular.

So it's probably some other kind of grammar, and I don't know if it has decidable equivalence.

Comment by cousin_it on Name of Problem? · 2020-03-10T22:44:09.400Z · score: 10 (2 votes) · LW · GW

Ok, if we disallow cycles of outermost function calls, then it seems the trees are indeed infinite only in one direction. Here's a half-baked idea then: 1) interpret every path from node to root as a finite word 2) interpret the tree as a grammar for recognizing these words 3) figure out if equivalence of two such grammars is decidable. For example, if each tree corresponds to a regular grammar, then you're in luck because equivalence of regular grammars is decidable. Does that make sense?
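
For illustration, here's a sketch of step 3 in the regular case, with DFAs standing in for regular grammars (the example automata are made up; both are assumed total over the shared alphabet):

```python
from collections import deque

# Equivalence of two DFAs (and hence of the regular grammars they encode),
# checked by BFS over the product automaton: the languages differ iff some
# reachable pair of states disagrees on acceptance.
def dfa_equivalent(d1, d2):
    # each DFA: (start_state, accepting_set, transitions: (state, symbol) -> state)
    (s1, acc1, t1), (s2, acc2, t2) = d1, d2
    symbols = {sym for (_, sym) in t1} | {sym for (_, sym) in t2}
    seen, queue = {(s1, s2)}, deque([(s1, s2)])
    while queue:
        a, b = queue.popleft()
        if (a in acc1) != (b in acc2):
            return False
        for sym in symbols:
            nxt = (t1[(a, sym)], t2[(b, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two DFAs over {'0'} accepting words of even length, with different state names.
even1 = (0, {0}, {(0, '0'): 1, (1, '0'): 0})
even2 = ('e', {'e'}, {('e', '0'): 'o', ('o', '0'): 'e'})
print(dfa_equivalent(even1, even2))  # True
```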

Comment by cousin_it on Name of Problem? · 2020-03-10T07:57:30.120Z · score: 4 (2 votes) · LW · GW

Then isn't it possible to also have infinite expansions "in the middle", not only "inside" and "outside"? Something like this:

f(n) := f(g(n))
g(n) := g(n+1)

Maybe there's even some way to have infinite towers of infinite expansions. I'm having trouble wrapping my head around this.

Comment by cousin_it on Name of Problem? · 2020-03-09T21:53:15.020Z · score: 3 (2 votes) · LW · GW

I don't understand why the second looks like that. Can you explain?

Comment by cousin_it on March Coronavirus Open Thread · 2020-03-09T21:45:10.932Z · score: 6 (3 votes) · LW · GW

> In their baseline scenario (for a small Swiss city with good infection control)

Sad laugh. I'm in Switzerland, we have exponential growth and there's no infection control to speak of. They just told people with non-severe symptoms to not bother getting tested. Schools are open. Haven't seen even one person wearing a mask.

Comment by cousin_it on Name of Problem? · 2020-03-09T21:40:28.768Z · score: 4 (2 votes) · LW · GW

Not sure I understand the question. Consider these two programs:

  1. f(n) := f(n)

  2. f(n) := f(n+1)

Which expression trees do they correspond to? Are these trees equivalent?

Comment by cousin_it on Credibility of the CDC on SARS-CoV-2 · 2020-03-07T22:37:14.116Z · score: 37 (15 votes) · LW · GW

Yes, it's important that the public should trust authorities. Pushing authorities to be more truthful helps, not harms, that goal.

Comment by cousin_it on Coronavirus: Justified Practical Advice Thread · 2020-03-06T10:12:44.267Z · score: 17 (9 votes) · LW · GW

I just thought of this idea: carry a box of disinfectant wipes everywhere, and whenever you want to touch a surface that's touched by many people (handrail, door handle, elevator button, touchscreen kiosk...) give it a quick wipe-down first. That protects you, helps protect other people, shows them how to do the same, and also disinfects your hands many times a day. Any problems with this?

Comment by cousin_it on Covid-19: Things I'm Doing Differently · 2020-03-06T09:49:13.123Z · score: 8 (2 votes) · LW · GW

I just thought of this idea: carry a box of disinfectant wipes everywhere, and whenever you want to touch a surface that's touched by many people (handrail, door handle, elevator button, touchscreen kiosk...) give it a quick wipe-down first. That protects you, helps protect other people, shows them how to do the same, and also disinfects your hands many times a day. Any problems with this? If it's a good idea, how do we make it catch on?

Comment by cousin_it on Open & Welcome Thread - February 2020 · 2020-03-03T19:40:20.003Z · score: 7 (3 votes) · LW · GW

Look at Table 3: most people in the study received some kind of treatment; in particular, 40% received oxygen. You can't figure out the untreated fatality rate from this.

Comment by cousin_it on I don't understand Rice's Theorem and it's killing me · 2020-03-02T11:04:16.272Z · score: 3 (1 votes) · LW · GW

I think the first catch is that "semantic property" is a bit tricky. For example, whether the program halts before 10 steps doesn't count as a "semantic property", because the programs "halt immediately" and "wait 10 steps then halt" are considered semantically equivalent. So most questions about what a real computer or physical object will do in 10 seconds are exempt from the theorem and you can sleep easy.

The second catch is that questions like "does this program ever halt", which do count as semantic, are all individually answerable. That's right - for any program A, there exists a program B that answers the halting problem for A. (Namely, if A halts, we know B exists - it's just the program "print 1". If A doesn't halt, we also know B exists - it's just "print 0". So B exists in any case.) The halting problem and Rice's theorem are more about having one B that works for all A, which is a much bigger ask.
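
To illustrate the first catch: "halts within 10 steps" is decidable because you can just run the program for 10 steps and see. A toy sketch, with programs modeled as Python generators that yield once per step (my own encoding, nothing standard):

```python
# "Does this program halt within k steps?" is decidable: simulate for k steps.
def halts_within(program, k):
    gen = program()
    for _ in range(k):
        try:
            next(gen)
        except StopIteration:
            return True  # the program finished within k steps
    return False         # still running after k steps; says nothing about later

def halt_immediately():
    return
    yield  # unreachable, but makes this function a generator

def loop_forever():
    while True:
        yield

print(halts_within(halt_immediately, 10))  # True
print(halts_within(loop_forever, 10))      # False
```

Rice's theorem doesn't apply here precisely because "halts within 10 steps" can distinguish two programs that compute the same function.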

Comment by cousin_it on cousin_it's Shortform · 2020-02-29T12:35:59.920Z · score: 4 (2 votes) · LW · GW

I just thought of a simple way to explain tensors. Imagine a linear function that accepts two numbers and returns a number, let's call it f(x,y). Except there are two ways to imagine it:

  1. Linear in both arguments combined: f(1,2)+f(1,3)=f(2,5). Every such function has the form f(x,y)=ax+by for some a and b, so the space of such functions is 2-dimensional. We say that the Cartesian product of R^1 and R^1 is R^2, because 1+1=2.

  2. Linear in each argument when the other is fixed: f(1,2)+f(1,3)=f(1,5). Every such function has the form f(x,y)=axy for some a, so the space of such functions is 1-dimensional. We say that the tensor product of R^1 and R^1 is R^1, because 1*1=1.

In this case the tensor product is lower dimensional than the Cartesian product. But if we take say R^3 and R^3, then the Cartesian product will be R^6 and the tensor product will be R^9, because it will have separate coefficients a_(ij)*x_i*y_j.
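
A numeric sketch of the two notions, using plain Python lists (the coefficients are random and the test vectors arbitrary):

```python
import random

# Bilinear functions on R^3 x R^3 have the form f(x,y) = sum_ij a[i][j]*x[i]*y[j],
# so that space is 3*3 = 9 dimensional; jointly linear maps only get 3+3 = 6.
random.seed(0)
a = [[random.random() for _ in range(3)] for _ in range(3)]

def f(x, y):
    return sum(a[i][j] * x[i] * y[j] for i in range(3) for j in range(3))

x1, x2, y = [1, 2, 3], [4, 5, 6], [7, 8, 9]

# Linear in the first argument while the second is held fixed:
print(abs(f([u + v for u, v in zip(x1, x2)], y) - (f(x1, y) + f(x2, y))) < 1e-9)  # True
# But quadratic, not linear, when both arguments are scaled together:
print(abs(f([2 * u for u in x1], [2 * w for w in y]) - 4 * f(x1, y)) < 1e-9)      # True
```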

Comment by cousin_it on [HPMOR] Harry - example or anti-example? · 2020-02-27T10:26:06.145Z · score: 3 (1 votes) · LW · GW

Considering that 1) Harry is born special, a wizard 2) double special, a clone of a super smart guy 3) triple special, considered a savior of a country 4) quadruple special, guaranteed by prophecy to survive the events of the story - and this is all set up before the story even begins - his rationality doesn't matter, he could just get drunk every day and skip classes. It's the Paul Atreides situation all over again (outcome of super genetic program, unbeatable fighter, prophesied messiah of a planet-wide religion, also the lawful heir to that planet, all set up before the story begins). For all the nods to Tolkien, I wish HPMOR had taken more cues from LOTR, where Aragorn (supersoldier, prophesied heir to a kingdom, engaged to an immortal princess) ends up providing a distraction, while Sam (short guy, servant class, gets on critical missions by repeatedly cajoling people to include him) ends up the hero.

Comment by cousin_it on How does electricity work literally? · 2020-02-25T16:10:52.680Z · score: 6 (4 votes) · LW · GW

I think Bill Beaty's page on electricity might be what you're looking for. Here's a joking teaser which shows the kinds of questions he's trying to answer:

> Electricity is quite simple: "electricity" is just the flowing motion of electricity! Electricity is a mysterious incomprehensible entity which is invisible and visible, both at the same time. Also, electricity is both a form of energy and a type of matter. Both. Electricity is a kind of low-frequency radio wave which is made of protons. It's a mysterious force which cannot be seen, and yet it looks like blue-white fire that arcs across the clouds. It moves forward at the speed of light... yet it sits and vibrates inside your AC cord without flowing forwards at all. It's totally weightless, yet it has a small weight. When electricity flows through a light bulb's filament, it gets changed entirely into light. Yet not one bit of electricity is ever used up by the light bulb, and all the electricity flows out of the filament and back down the other wire. College textbooks are full of electricity, yet they have no electric charge! Electricity is like sound waves, no no, it's just like wind, no, the electricity is like the air molecules. Electricity is like cars on a highway, no, the electricity is the speed of the cars, no, electricity is just like "traffic waves." Electricity is a class of phenomena ...a class of phenomena which can be stored in batteries! If you want to measure a quantity of electricity, what units should you use? Why Volts of electricity, of course. And also Coulombs of electricity. And Amperes of electricity. Watts of electricity and Joules, all at the same time. Yet "electricity" is definitely a class of phenomena; merely a type of event. Since we can't have an amount of an event, we can't really measure the quantity of electricity at all... right? Right?

And then he goes on to answer all the questions one by one, in a very straightforward way.

Comment by cousin_it on Will AI undergo discontinuous progress? · 2020-02-24T17:36:02.529Z · score: 4 (2 votes) · LW · GW

Yeah, if gray goo is impossible, the AI can't use that particular insta-win move. Though I think if the AI is smarter than humans, it can find other moves that will let it win slower but pretty much as surely.

Comment by cousin_it on Will AI undergo discontinuous progress? · 2020-02-24T16:16:21.857Z · score: 5 (3 votes) · LW · GW

Foom is more about growth in intelligence, which could be possible with existing computing resources and research into faster computers. Even if gray goo is impossible, once AI is much smarter than humans, it can manipulate humans so that most of the world's productive capacity ends up under the AI's control.

Comment by cousin_it on Babble · 2020-02-24T15:14:57.834Z · score: 9 (4 votes) · LW · GW

Update after two years: trying to think thoughts I never thought before (or words, images, sounds, etc) still works amazingly every time I try it. It's pretty much the key to creativity in every area. I find myself not doing it as often as I could, but when I know someone else will be interested (like when some friends want me to write lyrics for a song), it comes easily and naturally. So everyone, please demand more creativity from me and from each other!

Comment by cousin_it on You are an optimizer. Act like it! · 2020-02-22T01:32:19.796Z · score: 3 (1 votes) · LW · GW

So right now I'm X, and you say I should act like Y (an optimizer which is nearest to X according to some metric). But which metric to use? For example, if the metric is "design purpose", I'll act like an optimizer of the design purpose that evolution had for me, and have kids with lots of people and abandon them. That's awful. Or if the metric is "reward feelings", I'll act like an optimizer of the reward feelings that evolution gave me, and use lots of morphine every day. That's awful too. Or if the metric is "values", I'll act like an optimizer of X's values, but figuring out how to ascribe values to X is the very question we're trying to solve, so this doesn't help. Any thoughts?

Comment by cousin_it on How do you survive in the humanities? · 2020-02-21T10:21:35.870Z · score: 21 (7 votes) · LW · GW

If OP was taking a scientology class, you could say the exact same thing: the bullshit didn't hinder the teacher's ability to "succeed" (become a teacher of bullshit), therefore OP should learn empathy, etc. Presumably you wouldn't make that argument, so it's better to argue based on the qualities of the actual teaching.

Comment by cousin_it on On the falsifiability of hypercomputation, part 2: finite input streams · 2020-02-17T23:34:34.588Z · score: 7 (3 votes) · LW · GW

Your arbitration oracle seems equivalent to the consistent guessing problem described by Scott Aaronson here. Also see the comment from Andy D proving that it's indeed strictly simpler than the halting problem.

Comment by cousin_it on A 'Practice of Rationality' Sequence? · 2020-02-15T21:28:23.605Z · score: 5 (3 votes) · LW · GW

I'm wary of such mind hacks, because they teach you to treat a person (yourself) as a machine. Most people have an instinct for human connection that refuses to be satisfied by machines, so gradually teaching yourself that you live in a world of machines can lead to isolation and emptiness. That might have contributed to SquirrelInHell's suicide, though I didn't know them in person.

Comment by cousin_it on Confirmation Bias As Misfire Of Normal Bayesian Reasoning · 2020-02-13T15:57:01.005Z · score: 28 (13 votes) · LW · GW

> When someone says something that confirms my preexisting beliefs (eg ‘coyotes live in this area, but not polar bears’), I believe it. If that same person provides the same evidence for something that challenges my preexisting beliefs, I reject it.
>
> Is this confirmation bias?

Not as far as I know. Wikipedia gives three aspects of confirmation bias:

  1. Biased search: seeking out stories about coyotes but not polar bears.

  2. Biased interpretation: hearing an unknown animal rustle in the bushes, and treating that as additional evidence that coyotes outnumber polar bears.

  3. Biased recall: remembering coyote encounters more readily than polar bear encounters.

All of those seem different from your example, and none are valid Bayesian reasoning.

Comment by cousin_it on On the falsifiability of hypercomputation · 2020-02-07T08:40:52.833Z · score: 5 (2 votes) · LW · GW

I think your argument will also work for PA and many other theories. It's known as game semantics:

> The simplest application of game semantics is to propositional logic. Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions. Each move of the game consists of allowing the owner of the dominant connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its dominant connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy, while it will be false whenever the Falsifier has the winning strategy.
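
The quoted game fits in a few lines: for negation-free propositional formulas, "the Verifier has a winning strategy" reduces to ordinary truth evaluation (the tuple encoding of formulas is my own, for the sketch):

```python
# Game semantics for propositional logic: the Verifier owns disjunctions,
# the Falsifier owns conjunctions, and a formula is true iff the Verifier
# has a winning strategy.
def verifier_wins(formula):
    op = formula[0]
    if op == 'atom':
        return formula[1]  # play ends: Verifier wins iff the primitive proposition is true
    if op == 'or':         # Verifier picks whichever branch lets her win
        return verifier_wins(formula[1]) or verifier_wins(formula[2])
    if op == 'and':        # Falsifier picks whichever branch lets him win
        return verifier_wins(formula[1]) and verifier_wins(formula[2])
    raise ValueError(op)

# (True or False) and (False or True)
phi = ('and', ('or', ('atom', True), ('atom', False)),
              ('or', ('atom', False), ('atom', True)))
print(verifier_wins(phi))  # True
```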

Comment by cousin_it on The Adventure: a new Utopia story · 2020-02-06T13:40:58.180Z · score: 3 (1 votes) · LW · GW

I think this kind of utopian story often feels a bit shallow, because it's so focused on circumstances. To me, the interesting core of a story is usually about personalities. Circumstances matter too, but more as a proving ground for personalities. Maybe you can write a utopian story with interesting personalities, but that task needs to be tackled directly.

Comment by cousin_it on Category Theory Without The Baggage · 2020-02-05T14:55:26.735Z · score: 7 (3 votes) · LW · GW

In group theory, a group can be defined abstractly as a set with a binary operation obeying certain axioms, or concretely as a bunch of permutations on some set (which doesn't need to include all permutations, but must be closed under composition and inverse). The two views are equivalent by Cayley's theorem, and I think the second view is more helpful, at least for beginners.
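
Here's a sketch of the second view for a concrete group (Z/4 under addition, chosen arbitrarily): each element becomes the permutation "add g", and the group axioms turn into closure properties of that set of permutations.

```python
# Cayley's theorem, concretely: map each element g of Z/4 to the permutation
# h -> g + h (mod 4), represented as a tuple of images.
n = 4

def to_perm(g):
    return tuple((g + h) % n for h in range(n))

def compose(p, q):  # (p . q)(h) = p(q(h))
    return tuple(p[q[h]] for h in range(n))

perms = {to_perm(g) for g in range(n)}

# Homomorphism: the permutation of g1+g2 is the composition of the permutations.
print(all(to_perm((g1 + g2) % n) == compose(to_perm(g1), to_perm(g2))
          for g1 in range(n) for g2 in range(n)))  # True
# The set of permutations is closed under composition.
print(all(compose(p, q) in perms for p in perms for q in perms))  # True
```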

I don't know very much about category theory, but maybe we could do something similar there? Since every small category has a faithful functor into Set, it can be defined as a bunch of sets and functions between them. It doesn't need to include all sets or functions, but must be closed under composition and include each set's identity function to itself.

For example, the divisibility category from the post can be seen as a category of sets like {1,...,n} and functions that are unital ring homomorphisms from Z/mZ to Z/nZ (of which there's exactly one if n divides m, and zero otherwise). And the category of types and functions in some programming language can be seen as a category containing some sets of things-with-bottoms and monotone functions between them. So in both of these cases, going to sets leads to some nice math.

I've heard that the set intuition starts to break down once you study more category theory, but haven't gotten to that point yet.

Comment by cousin_it on Meta-Preference Utilitarianism · 2020-02-05T12:05:44.832Z · score: 5 (2 votes) · LW · GW

My understanding of average vs total utilitarianism doesn't yet tell me which one to vote for. You could ask me to vote anyway, but is there a reason why voting would give high quality answers to such questions?

Comment by cousin_it on DanielFilan's Shortform Feed · 2020-02-04T08:18:31.960Z · score: 3 (1 votes) · LW · GW

> I tend to view poor people and developing countries having higher reproduction rates as a consequence of less economic slack. That is to say, people who are poorer have more kids because those kids are decent long-term investments overall (ie old-age support, help-around-the-house). In contrast, wealthy people can make way more money by doing things that don’t involve kids.

Kids will grow up and move away whether you're rich or poor, though, so I'm not sure the investment explanation makes sense. But your last sentence rings true to me. If someone cares more about career than family, they will always have "no time" for a family. I've heard it from well-paid professionals many times: "I'd like to have kids... eventually..."

Comment by cousin_it on Philosophical self-ratification · 2020-02-04T07:44:28.927Z · score: 4 (2 votes) · LW · GW

Sure, but I don't see why such self-trust is a good sign. All inconsistent theories have proofs of finitistic self-consistency up to n that are shorter than n (for some n), but only some consistent theories do. So seeing such a proof is Bayesian evidence in favor of inconsistency.

Comment by cousin_it on Philosophical self-ratification · 2020-02-03T23:07:26.654Z · score: 14 (4 votes) · LW · GW

I think it's more subtle. In mathematical logic, there's a few things that can happen to a theory:

  1. It can prove a falsehood. That's bad: the theory is busted.

  2. It can prove itself consistent. That's bad too: it implies the theory is inconsistent, by the second incompleteness theorem.

  3. It can prove itself inconsistent. That's not necessarily bad: the silly theory PA+¬Con(PA), which asserts its own inconsistency, is actually equiconsistent with PA. But it suggests that the theory has a funny relationship with reality (in this case, that any model of it must include some nonstandard integers).

Overall it seems we should prefer theories that don't say anything much about their own justifications one way or the other. I suspect the right approach in philosophy is the same.

Comment by cousin_it on What Money Cannot Buy · 2020-02-03T10:40:31.254Z · score: 5 (2 votes) · LW · GW

> would create clear incentives for adversarial action, where currently there are none

Well, citation farms already exist, so we know roughly how many people are willing to do stuff like that. I still think the personalized PageRank algorithm (aka PageRank with priors, maybe initialized with a bunch of trustworthy researchers) is a good fit for solving this problem.

Comment by cousin_it on What Money Cannot Buy · 2020-02-03T08:21:24.170Z · score: 5 (2 votes) · LW · GW

Yeah, the reference to web of trust wasn't really important, I've edited it out. As for PageRank, AFAIK it works fine for recommendation systems. You do need another layer to prevent link farms and other abuse, but since research is a gated community with ethical standards, that should be easier than on the web.

Comment by cousin_it on What Money Cannot Buy · 2020-02-02T20:45:57.650Z · score: 18 (10 votes) · LW · GW

This problem crops up in many places.

I think the most promising solution is something like the personalized PageRank algorithm. It's a formalization of the idea "here's some sources I already trust, so let's walk the graph starting from them and find the best sources to answer my questions". It doesn't absolve you from figuring stuff out, but acts as a force multiplier on top of that.

One important use case is funding of research. Today prestigious journals judge the importance of research by accepting or rejecting papers, and funders make decisions based on that (and citations, but those come later). A system without journals at all, only PageRank-like endorsements between researchers, could be cheaper and just as reliable.
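The graph-walk idea can be sketched as a plain power iteration. This is only an illustrative sketch: the endorsement graph, the trusted seed set, and all names below are made up, and a real system would need the anti-abuse layer mentioned above.

```python
def personalized_pagerank(graph, trusted, damping=0.85, iterations=100):
    """Personalized PageRank by power iteration.

    graph: dict mapping each node to the list of nodes it endorses.
    trusted: seed nodes; random jumps land here instead of on a uniform node.
    """
    nodes = list(graph)
    # Teleport vector: uniform over the trusted seeds, zero elsewhere.
    teleport = {n: (1.0 / len(trusted) if n in trusted else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iterations):
        # Everyone gets a (1 - damping) share of the teleport mass.
        new_rank = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                # Dangling node: send its rank back to the trusted seeds.
                for m in nodes:
                    new_rank[m] += damping * rank[n] * teleport[m]
            else:
                share = damping * rank[n] / len(out)
                for m in out:
                    new_rank[m] += share
        rank = new_rank
    return rank

# Tiny made-up example: trusted researcher A endorses B, who endorses C.
graph = {"A": ["B"], "B": ["C"], "C": []}
scores = personalized_pagerank(graph, trusted={"A"})
```

Nodes closer to the trusted seeds score higher, which is exactly the "force multiplier on top of your own judgment" behavior: the output depends on whom you chose to trust initially.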

Comment by cousin_it on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-02-01T08:29:58.664Z · score: 3 (1 votes) · LW · GW

Yeah, algorithms using floating point are deterministic in practice (they might return rubbish, but at least it's the same rubbish each time). If that's all the post is asking for, then I'm happy with that.

Comment by cousin_it on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T09:10:35.472Z · score: 3 (1 votes) · LW · GW

I still don't think the programming example supports your point.

For example, in C and C++, signed integer overflow is undefined behavior. The compiler is allowed to break your program if it happens. Undefined behavior is useful for optimizations - for example, for signed x you can optimize x<x+1 to true, which helps eliminate branches - and there have been popular programs that quietly broke when a new compiler release got better at such optimizations. John Regehr's blog is a great source on this.

Almost nothing in programming is 100% reliable, most things just kinda seem to work. Maybe it would be better to use an example from math.

Comment by cousin_it on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T08:41:41.042Z · score: 5 (2 votes) · LW · GW

You did also say that

The claim that a given algorithm or circuit really adds two numbers is very precise. Even a single pair of numbers that it adds incorrectly refutes the claim, and very much risks making this algorithm/circuit useless.

For almost every arithmetic operation in actual computers, on every type of numbers, there are many inputs for which that operation returns the wrong result. (Yeah, arbitrary size integers are an exception, but most programs don't use those, and even they can fail if you try making a number that doesn't fit in memory.) But still, lots of algorithms are useful.
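Two concrete cases in Python (IEEE 754 double precision), showing that even a single addition can return a rounded, strictly-speaking wrong result:

```python
# Both sides round to different doubles, so the comparison fails.
print(0.1 + 0.2 == 0.3)    # False

# Above 2**53 the spacing between doubles exceeds 1, so the 1.0 vanishes.
print(1e16 + 1.0 == 1e16)  # True
```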

Comment by cousin_it on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T08:29:48.666Z · score: 3 (1 votes) · LW · GW

Well, your broader claim was that computer algorithms shouldn't kinda sorta work, they need to work 100%. And floating point arithmetic belies that claim. For that matter, so does integer arithmetic - practically no programs come with a rigorous analysis of when integer overflow or division by zero can or can't happen. For example, binary search in Java was buggy for many years, because the (high+low)/2 operation overflows once the array gets large enough (around a billion elements), yielding a negative index.
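The Java bug can be reproduced by simulating Java's wrapping 32-bit int arithmetic; the to_int32 helper and the concrete bounds below are just for illustration:

```python
def to_int32(x):
    """Wrap x into a signed 32-bit integer, like Java's int arithmetic."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

low, high = 1_500_000_000, 2_000_000_000
mid = to_int32(low + high) // 2     # Java's buggy (low + high) / 2
print(mid)                          # negative "index" due to overflow

safe_mid = low + (high - low) // 2  # the standard fix: never overflows
print(safe_mid)
```

The fix eventually adopted in the JDK was equivalent to the safe_mid form (written there as an unsigned shift, (low + high) >>> 1).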

Comment by cousin_it on High-precision claims may be refuted without being replaced with other high-precision claims · 2020-01-31T08:18:17.934Z · score: 3 (1 votes) · LW · GW

If a computer makes random small errors 0.01% of the time in e.g. arithmetic operations, it’s not an almost-working computer, it’s a completely non-functioning computer, that will crash almost immediately.

Floating point arithmetic in computers is usually not precise, and has many failure modes that are hard to understand even for experts. Here's a simple one: when calculating the sum of many numbers, adding them from smallest to biggest or from biggest to smallest will often give different results, and the former will usually be more accurate. Here's a more complex one: a twenty page paper about computing the average of two numbers. But there are programs that do trillions of floating point operations and don't crash.
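The summation-order effect is easy to demonstrate; the particular numbers below are one illustrative choice:

```python
# One huge value and many small ones. The exact sum is 1e16 + 1000.
nums = [1e16] + [1.0] * 1000

big_first = 0.0
for x in nums:             # biggest first: each +1.0 is rounded away
    big_first += x

small_first = 0.0
for x in reversed(nums):   # smallest first: the 1.0s accumulate to 1000.0
    small_first += x

print(big_first)           # loses all the small terms
print(small_first)         # equals the exact sum here
```

Summing small-to-large lets the small terms accumulate before they meet the huge one, so less information is lost to rounding.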

Comment by cousin_it on Using vector fields to visualise preferences and make them consistent · 2020-01-30T17:09:32.093Z · score: 3 (1 votes) · LW · GW

It makes sense to value other agents having power, but are you sure that value can't be encoded consistently?