Posts

SSC Zürich February Meetup 2020-01-25T17:21:46.229Z

Comments

Comment by Vitor on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-17T16:41:19.747Z · LW · GW

On re-reading this, I see I messed up something with the direction of the signs. I don't have time to fix it now, but the idea is hopefully clear.

Comment by Vitor on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-17T12:18:27.311Z · LW · GW

Quick sketch of an idea (written before deeply digesting others' proposals):

Intuition: Just like player 1 has a best response (starting from a strategy profile s, improve her own utility as much as possible), she also has an altruistic best response (which maximally improves the other player's utility).

Example: stag hunt. If we're at (rabbit, rabbit), then both players are perfectly aligned. Even if player 1 were infinitely altruistic, she couldn't unilaterally cause a better outcome for player 2.

Definition: given a strategy profile s, a c-altruistic better response is any strategy of one player that gives the other player at least c extra utility for each point of utility that this player sacrifices.

Definition: player 1 is r-aligned with player 2 if player 1 doesn't have a c-altruistic better response for any c ≥ 1/r.

0-aligned: non-spiteful player. They'll give "free" utility to other players if possible, but they won't sacrifice any amount of their own utility for the sake of others.

r-aligned for 0 < r < 1: slightly altruistic. Your happiness matters a little bit to them, but not as much as their own.

1-aligned: positive-sum maximizer. They'll yield their own utility as long as the total sum of utility increases.

r-aligned for r > 1: subservient player. They'll optimize your utility with higher priority than their own.

∞-aligned: slave. They maximize others' utility, completely disregarding their own.

Obvious extension from players to strategy profiles: How altruistic would a player need to be before they would switch strategies?

Comment by Vitor on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-15T00:42:22.626Z · LW · GW

1st day: soreness in arm

2nd-3rd day: strong flu-like symptoms

4th day: feeling completely normal

5th day: start of what I described.


I'll be happy to give more detail about my medical history in private.

Comment by Vitor on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-14T23:57:46.650Z · LW · GW

I've been having similar long-term symptoms. Palpitations, frequent headaches, chest aches / trouble breathing. Also constantly slightly congested.

Started around 5 days after the first Pfizer shot, and it's been going on for roughly 3 months now. It's hard to attribute definitively to the vaccine, as I have other health issues that could explain most of these symptoms, and I'm also very out of shape right now. But the timing is very suspicious: it was really a very sudden worsening of my health, literally from one day to the next.

Comment by Vitor on Covid 5/13: Moving On · 2021-05-14T10:27:01.517Z · LW · GW

Regarding the positivity rate in India stalling out, I would be hesitant to over-interpret this. There is a natural limit to the positivity rate somewhere below 100% (it might even be roughly the 25% we're currently seeing).

A simple model as an intuition pump: there are two subgroups. Group 1 would like a test, but will respond elastically to any extra hassle of getting one. Group 2 absolutely needs a test and will not desist, e.g., they strongly believe they have Covid, and without a test they cannot get access to medical treatment. As tests get scarce, group 1 will gradually drop away, and test-takers converge to being drawn 100% from group 2. But those test-takers will obviously have a lower than 100% positivity rate: if that weren't the case, we wouldn't need tests at all! There are other diseases and circumstances that cause people to join group 2.

In a nutshell, positivity rate is only a proxy for undiagnosed Covid cases within some unknown operating envelope (I'm guessing 5-20%?).
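The intuition pump above can be made concrete with a tiny simulation. All the numbers here (group sizes, infection rates, the elasticity curve) are made-up assumptions for illustration, not estimates of the actual Indian data:

```python
def positivity(scarcity):
    """Overall test positivity as a function of test scarcity in [0, 1]."""
    # Group 1: would like a test, but drops out elastically as hassle grows.
    group1_size = 1000 * (1 - scarcity)
    group1_positive_rate = 0.05
    # Group 2: absolutely needs a test; other diseases put people here too,
    # so even this group is well below 100% positive.
    group2_size = 200
    group2_positive_rate = 0.30
    tested = group1_size + group2_size
    positive = (group1_size * group1_positive_rate
                + group2_size * group2_positive_rate)
    return positive / tested

for s in (0.0, 0.5, 0.9, 1.0):
    print(f"scarcity {s:.1f}: positivity {positivity(s):.1%}")
```

As scarcity rises, positivity climbs, but it saturates at group 2's rate (30% in this toy model) rather than at 100%.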

Comment by Vitor on Covid 4/15: Are We Seriously Doing This Again · 2021-04-17T10:27:35.540Z · LW · GW

News from Switzerland: health authorities have made the decision that people who had covid should only get one vaccine shot, 6 months after recovery. This sounds sensible to me, but I'm worried what this means regarding different strains: if someone was infected by the milder variant, aren't they still at high risk from the newer strains? Anyplace else pursuing similar policies?

Comment by Vitor on Covid 3/18: An Expected Quantity of Blood Clots · 2021-03-19T10:31:19.035Z · LW · GW

Germany has already reversed the AZ stop. Hopefully all other countries will do likewise soon.

Comment by Vitor on Kelly *is* (just) about logarithmic utility · 2021-03-02T17:18:03.230Z · LW · GW

That clears up my confusion, thanks!

Comment by Vitor on Kelly *is* (just) about logarithmic utility · 2021-03-02T11:52:18.308Z · LW · GW

Great post, I find it really valuable to engage in this type of meta-modeling, i.e., deriving when and why models are appropriate.

I think you're making a mistake in Section 2, though. You argue that a mode optimizer can be pretty terrible (agreed). Then, you argue that any other quantile optimizer can also be pretty terrible (also agreed). However, Kelly doesn't only optimize the mode, or the 2% quantile, or any other single quantile: it maximizes all those quantiles simultaneously! So, is there any distribution for which Kelly itself fails to distinguish between meaningfully different outcomes (as in your 2%-quantile example with a 10% chance of a bad outcome)? I don't think such a distribution exists.

(Note: maybe I'm misunderstanding what johnswentworth said here, but if solving for any x%-quantile maximizer always yields Kelly, then Kelly maximizes for all quantiles, correct?)
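A quick numerical check of that last claim, for the simple case of repeated even-money bets with win probability p (the grid search and the choice of quantiles are my own illustrative setup, not from the post). Final wealth is monotone in the number of wins, so each wealth quantile corresponds to a binomial quantile of wins, and the fraction maximizing any fixed quantile converges to the Kelly fraction 2p - 1:

```python
from math import comb, log

def binom_quantile(n, p, q):
    """Smallest k with P(Binomial(n, p) <= k) >= q."""
    cdf = 0.0
    for k in range(n + 1):
        cdf += comb(n, k) * p**k * (1 - p)**(n - k)
        if cdf >= q:
            return k
    return n

def best_fraction(n, p, q, grid=999):
    """Bet fraction f maximizing the q-quantile of wealth after n bets."""
    # Wealth (1+f)^wins * (1-f)^losses is monotone in the number of wins,
    # so the q-quantile of wealth follows the q-quantile of wins.
    k = binom_quantile(n, p, q)
    best_f, best_logw = 0.0, 0.0
    for i in range(1, grid):
        f = i / grid
        logw = k * log(1 + f) + (n - k) * log(1 - f)
        if logw > best_logw:
            best_f, best_logw = f, logw
    return best_f

p = 0.6  # Kelly fraction: 2p - 1 = 0.2
for n in (100, 1000):
    print(n, best_fraction(n, p, q=0.1))  # approaches 0.2 as n grows
```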

Comment by Vitor on Kelly isn't (just) about logarithmic utility · 2021-02-24T09:51:25.833Z · LW · GW

Comment/question about St. Petersburg and utilities: given any utility function u which goes to infinity, it should be possible to construct a custom St. Petersburg lottery for that utility function, right? I.e., a lottery with infinite expected utility but arbitrarily low probability of being in the green. If we want to model an agent as universally rejecting such lotteries, it follows that utility cannot diverge, and thus must asymptotically approach some supremum (also requiring the natural condition that u is strictly monotone). Has this shape of utility function been seriously proposed in economics? Does it have a name?
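A sketch of the construction, with u = sqrt standing in for an arbitrary unbounded utility (the specific payoffs and probabilities are my illustration): pick payoffs x_k with u(x_k) >= 2^k and assign each probability 2^-k.

```python
from math import sqrt

def lottery(K):
    """First K branches of the lottery: (probability, payoff) pairs.
    With u = sqrt, payoff 4**k has utility 2**k at probability 2**-k."""
    return [(2.0**-k, 4**k) for k in range(1, K + 1)]

def expected_utility(K):
    # Each branch contributes 2**-k * 2**k = 1, so this grows without bound.
    return sum(p * sqrt(x) for p, x in lottery(K))

def prob_in_the_green(price, K=60):
    """Probability that the payoff exceeds the ticket price."""
    return sum(p for p, x in lottery(K) if x > price)

print(expected_utility(10), expected_utility(40))
print(prob_in_the_green(price=4**20))  # tiny chance of coming out ahead
```

Charging a higher ticket price pushes the probability of being in the green as low as desired, while the expected utility still diverges in the limit.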

Comment by Vitor on Covid 12/10: Vaccine Approval Day in America · 2020-12-20T12:06:07.540Z · LW · GW

So, the Swiss just approved the Pfizer vaccine. I think this clearly proves you wrong. However, as I was already planning to write an answer before this new development, let me give you that answer for completeness' sake (better late than never).

Your original claim sounded a lot stronger than what you're now saying, where it seems you simply disagree with the exact quantity that is being ordered.

Do you consider it obvious that 5+ doses per person would be optimal? For starters, only about half the population even wants to get vaccinated. Also, the first million doses are clearly worth much more than each additional million. These decisions were made in a very high uncertainty environment, before the effectiveness numbers were known. Switzerland obviously doesn't have the same market power as other, larger countries, etc. I'm not saying I disagree with your position, just that it is far from obviously correct.

But your original comment went much further than just claiming the Swiss ordered the wrong quantity. You implied that Swissmedic (the body in charge of approval) basically has no political independence, and that both the head of Swissmedic and the health minister were brazenly lying to the public when they claimed that they were moving as fast as possible towards approval.

My priors say that these things are pretty unlikely. The delay is much more easily explained by the fact that there is no emergency approval process in Switzerland, which yields a huge status-quo bias for the regular process. To move any faster, new laws would have had to be passed.

So yes, I do think that you made a very far-reaching read based on very little information.

Comment by Vitor on Covid 12/10: Vaccine Approval Day in America · 2020-12-13T05:03:59.601Z · LW · GW

> Switzerland is going even slower, making the usual noises about the need for ‘caution.’ My read is that this is because they did not order vaccine doses early enough, and now they are all sold out, so why not spin that by not approving the vaccine for a while and calling it ‘caution?’

This "read" of yours is quite far-reaching, going on very little information. It is also completely wrong: Switzerland has contracts with all 3 major vaccine providers, the first of which was signed way back in August, securing 4.5 million doses (covering ~25% of the population).

Source (in german): <https://www.srf.ch/news/schweiz/moderna-impfdosen-gesichert-schweiz-steigt-ins-rennen-um-einen-moeglichen-impfstoff-ein>

So, yes, these delays are indeed because the Swiss health authorities are being cautious. The vaccines are going through expedited but otherwise normal approval processes.

Comment by Vitor on Being Productive With Chronic Health Conditions · 2020-11-12T11:38:27.817Z · LW · GW

Wholeheartedly agree with this. This covers most of the things I do / aspire to do to manage my own chronic condition.

One area that you don't really mention is finding work that is flexible and offers social support for the measures you're implementing for yourself. If your productivity is unreliable, it is a very bad thing to be working a job with lots of hard deadlines, or where your lack of progress will immediately block your co-workers from progressing in their own work. It's also important to prevent others from assuming you have more slack than you actually do.

In this area, I've found pre-committing to be an extremely valuable tool. When I work towards a deadline, I discuss with my boss in advance how bad it would be to miss the deadline, and what our plan B would be in that case (e.g. if 1 week before the deadline things are looking bad, we cut X which we agree is non-essential). This prevents high-stakes, high-pressure discussions from happening while I am in a health crisis, and protects my professional reputation. You don't want to find out 1 week before the deadline that your boss actually considers X to be essential! Better to find out a month in advance and adjust your plans accordingly.

Comment by Vitor on Design thoughts for building a better kind of social space with many webs of trust · 2020-09-06T12:02:03.515Z · LW · GW

I'm intrigued, but this is a bit vague. What kind of thing are you looking to build, concretely? The gestalt I'm getting is this: A social network with a transparent feed / recommendation algorithm that each user can explicitly tune, which as a side effect tags any piece of information with a trust score. This score is implicitly filtered through a particular lens / prior depending on who you trust. Looking at things through different lenses for different purposes is encouraged.

Two thoughts on this:

  • It seems to put a lot of administrative burden on users to keep their edges up to date. Can the system be made mostly automatic, e.g. alerting you to changes, suggesting manual review of certain nodes, adjusting weights over time based on your usage, etc? Basically, can you give a user-centric description of the idea, rather than a system-centric one?
  • What about strategic incentives? Sure, very blatant voting rings and such can be easily rooted out, but this system would give malicious actors a lot of information to embed themselves in a web in more subtle, insidious manners, e.g. sleeper accounts.

Comment by Vitor on What's the best overview of common Micromorts? · 2020-09-06T11:12:39.212Z · LW · GW

This chart might be misleading in that it doesn't account for the impact of a person's skill on the danger. Some of these activities have a fixed risk (commercial flying), while others directly depend on how fit/agile/careful the person is, so the risk probably varies by orders of magnitude between individuals (motorcycling). At the more dangerous end, I'd expect the risk to be underestimated significantly: many people go skiing, only a few very fit people attempt to climb Everest.

Comment by Vitor on Disasters · 2020-01-23T03:04:10.879Z · LW · GW

What is the "total cost of ownership" of supplies? Keeping a stockpile fresh requires ongoing maintenance. In addition to the direct cost and space required, you regularly have to devote time and attention to it. It just doesn't seem practical, especially a large supply of fresh water.

An additional cost is the hassle of moving/selling/giving away your stockpile if you move. If you have deep roots in the place you live, this might not be a large consideration, but I've moved at least once every 6 years all my life, sometimes considerably more often (future projection: even more often). "Unnecessary" stuff like this just adds an extra burden to an already burdensome time.

How would you say the value of having supplies changes as you reduce from 14 days to something smaller? It's non-linear for sure, so maybe there's a lower point that's a good compromise, e.g. 3 days of food and water. Another way of phrasing the question: where does the "two weeks" reference point in your post come from?

Comment by Vitor on Use-cases for computations, other than running them? · 2020-01-23T02:29:55.800Z · LW · GW

Any semantic question about the program (semantic = about the input/output relation, as opposed to syntactic = about the source code itself). Note that this is uncomputable due to Rice's theorem. "Output" includes whether the computation halts.
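A toy illustration of the semantic/syntactic distinction (my own example): these two functions are syntactically different but semantically equivalent, i.e., they have the same input/output relation. Rice's theorem says no algorithm can decide such equivalence for arbitrary programs; below we can only spot-check it on finitely many inputs.

```python
def sum_loop(n):
    # Syntactically: a loop with an accumulator.
    total = 0
    for i in range(n + 1):
        total += i
    return total

def sum_formula(n):
    # Syntactically: Gauss's closed form. Semantically identical to sum_loop.
    return n * (n + 1) // 2

# Spot-checking equivalence on a finite sample; deciding it in general
# is exactly what Rice's theorem rules out.
assert all(sum_loop(n) == sum_formula(n) for n in range(100))
```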

Find a semantically equivalent computation that fulfills/maximizes some criterion:

  • more obviously correct
  • shorter in length
  • faster to run (on hardware x or model of computation y)
  • doesn't use randomness
  • doesn't leak info through side-channels
  • compliant with design pattern x
  • any other task done by a source-to-source compiler

Find a computation that is semantically equivalent after applying some mapping to the input and/or output:

  • runs on encrypted input/output pairs (homomorphic encryption)
  • computation is reversible (required before running on a quantum computer)
  • redundantly encoded, add metadata to output, etc. Example: run program on untrusted hardware in such a way that the result is trusted (hardware exposed to outer space, folding@home, etc)

Any question about the set of programs performing the same computation.

  • this computation must take at least x time
  • this computation cannot be done by any program of length less than x (kolmogorov complexity)

Treat the program as an anthropological artifact.

  • deduce the state of mind of the person that wrote the program
  • deduce the social environment in which the program was written
  • deduce the technology level required to make running the program practical
  • etc.

(Thanks for reminding me why I love CS so much!)

Comment by Vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:56:20.137Z · LW · GW

Did you also test what other software (optaplanner as mentioned by HungryHobo, any SAT solver or similar tool) can do to improve those same schedules?

Did you run your software on some standard benchmark? There exists a thing called the International Timetabling Competition, with publicly available datasets.

Sorry to be skeptical, but scheduling is an NP-hard problem with many practical applications, and tons of research has already been done in this area. I will grant that many small organizations don't have the know-how to set up an automated tool, so there may still be a niche for you, especially if you target a specific market segment and focus on making it as painless as possible.

Comment by Vitor on Open thread, Jul. 11 - Jul. 17, 2016 · 2016-07-12T21:34:13.720Z · LW · GW

This will not have any practical consequences whatsoever, even in the long term. It is already possible to perform reversible computation (paper by Bennett linked in the article), for which such lower bounds don't apply. The idea is very simple: just make sure that your individual logic gates are reversible, so you can uncompute everything after reading out the results. This is most easily achieved by writing the gate's output to a separate wire. For example, an OR gate, instead of mapping 2 inputs to 1 output like

(x,y) --> (x OR y),

would map 3 inputs to 3 outputs like

(x, y, z) --> (x, y, z XOR (x OR y)),

causing the gate to be its own inverse.
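A quick truth-table check of this construction (a minimal sketch; the wire ordering is my choice):

```python
from itertools import product

def rev_or(x, y, z):
    # (x, y, z) -> (x, y, z XOR (x OR y)): the inputs pass through
    # untouched, and the OR result is XORed onto the extra wire z.
    return (x, y, z ^ (x | y))

# Applying the gate twice returns the input on all 8 bit patterns,
# so it is its own inverse and erases no information.
assert all(rev_or(*rev_or(*bits)) == bits for bits in product((0, 1), repeat=3))

# With the extra wire initialized to 0, the third output reads out x OR y:
print(rev_or(1, 0, 0))  # (1, 0, 1)
```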

Secondly, I understand that the Landauer bound is so extremely small that worrying about it in practice is like worrying about the speed of light while designing an airplane.
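To put a number on "extremely small": the Landauer limit at room temperature is k_B T ln 2, a few zeptojoules per bit. The per-bit figure for real hardware below is my own rough, assumed order of magnitude, not a measured value:

```python
from math import log

k_B = 1.380649e-23           # Boltzmann constant, J/K
T = 300.0                    # room temperature, K
landauer = k_B * T * log(2)  # minimum energy to erase one bit, ~2.9e-21 J

modern_hw = 1e-15            # assumed order of magnitude per bit operation, J
print(landauer, modern_hw / landauer)  # hardware sits far above the bound
```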

Finally, I don't know how controversial the Landauer bound is among physicists, but I'm skeptical in general of any experimental result that violates established theory. Recall that just a while ago some experiments appeared to show FTL communication, which ultimately turned out to be a sensor/timing problem. I can imagine many ways in which measurement errors could sneak their way in, given the very small amount of energy being measured here.

Comment by Vitor on My Kind of Moral Responsibility · 2016-05-04T13:11:45.124Z · LW · GW

The real danger, of course, is being utterly convinced Christianity is true when it is not.

The actions described by Lumifer are horrific precisely because they are balanced against a hypothetical benefit, not a certain one. If there is only an epsilon chance of Christianity being true, but the utility loss of eternal torment is infinite, should you take radical steps anyway?

In a nutshell, Lumifer's position is just hedging against Pascal's mugging, and IMHO any moral system that doesn't do so is not appropriate for use out here in the real world.

Comment by Vitor on Open Thread March 21 - March 27, 2016 · 2016-03-22T16:18:04.547Z · LW · GW

Your problem is called a clustering problem. First of all, you need to decide how to measure your error (the information loss, as you call it). Typical error norms are l1 (sum of individual errors), l2 (sum of squares of errors, which penalizes larger errors more), and l-infinity (maximum error).

Once you select a norm, there always exists a partition that minimizes your error, and to find it there are a bunch of heuristic algorithms, e.g. k-means clustering. Luckily, since your data is one-dimensional and you have very few categories, you can just brute-force it: for 4 categories you need to correctly place 3 boundaries, and naively trying all possible positions takes only O(n^3) time.
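A minimal sketch of that brute-force approach (the sample data and the choice of the l2 norm are illustrative):

```python
from itertools import combinations

def sq_error(chunk):
    """l2 error of one category: sum of squared deviations from its mean."""
    mean = sum(chunk) / len(chunk)
    return sum((v - mean) ** 2 for v in chunk)

def best_partition(data, categories=4):
    """Try every placement of the category boundaries and keep the best."""
    data = sorted(data)
    n = len(data)
    best_err, best_chunks = float("inf"), None
    # A boundary is a cut position between consecutive sorted elements.
    for cuts in combinations(range(1, n), categories - 1):
        bounds = (0,) + cuts + (n,)
        chunks = [data[a:b] for a, b in zip(bounds, bounds[1:])]
        err = sum(sq_error(c) for c in chunks)
        if err < best_err:
            best_err, best_chunks = err, chunks
    return best_err, best_chunks

err, chunks = best_partition([1, 2, 2, 9, 10, 21, 22, 40])
print(chunks)  # [[1, 2, 2], [9, 10], [21, 22], [40]]
```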

Hope this helps.