Posts

Comments

Comment by Forged Invariant on The Base Rate Times, news through prediction markets · 2023-06-09T16:24:42.312Z · LW · GW

As an example of how Manifold reacted to a (crude) attempt at manipulation:

Dr. P (a Manifold user) would create, and bet yes on, markets for "Will Trump be president on [some date]?" for various dates where there was no plausible way Trump would be president. Other users quickly noticed and set up limit orders to capture this source of free money. Eventually Dr. P's bets were cancelled out quickly enough that they had little to no effect on the probability, and it became hard to find one of those bets to profit from. Dr. P eventually gave up and their account became inactive. (There was some uncertainty about what would happen if Dr. P misresolved the markets. Today I would expect false resolutions to be reversed. Various derivative/insurance markets were set up.)

Comment by Forged Invariant on The Base Rate Times, news through prediction markets · 2023-06-09T16:07:35.352Z · LW · GW

One thing that I have seen on Manifold is markets that resolve at a random time, with a distribution such that at any time, their expected remaining duration (from the current day, conditional on not having already resolved) is 6 months. They do not seem particularly common, and they are not quite equivalent to a market with a deadline exactly 6 months in the future. (I can't seem to find the market.)
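A waiting-time distribution whose expected remaining duration is always 6 months, however long the market has already been open, has to be memoryless, and the exponential distribution is the unique continuous distribution with that property. A quick sketch (my own illustration, not the actual market's mechanism):

```python
import random

random.seed(0)
MEAN_MONTHS = 6.0  # target expected remaining duration at every moment

def draw_resolution_time():
    """Months until the market resolves; exponential, hence memoryless."""
    return random.expovariate(1.0 / MEAN_MONTHS)

samples = [draw_resolution_time() for _ in range(200_000)]
# Conditional on surviving past month 12, the *remaining* wait should still
# average about 6 months -- that is the memoryless property.
survivors = [t - 12.0 for t in samples if t > 12.0]
print(sum(samples) / len(samples), sum(survivors) / len(survivors))
```

Both printed means come out close to 6, matching the "expected duration of 6 months from the current day" behaviour described, whereas a fixed-deadline market's remaining duration shrinks toward zero.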

Comment by Forged Invariant on Petrov Day Retrospective: 2022 · 2022-10-02T20:42:09.302Z · LW · GW

> The timing evidence is thus hostile evidence and updating on it correctly requires superintelligence.

What do you mean by this? It seems trivially false that updating on hostile evidence requires superintelligence; for example, poker players still use their opponents' bets as evidence about their cards, even though those bets are frequently trying to mislead them in some way.

The evidence being from someone who went against the collective desire does mean that confidently taking it at face value is incorrect, but not that we can't update on it.

Comment by Forged Invariant on LW Petrov Day 2022 (Monday, 9/26) · 2022-09-24T06:21:29.623Z · LW · GW

The LW staff are necessary to take down the site. If we assume that there are multiple users willing to press the button, then the (Shapley-attributed) blame for taking the site down mostly falls on the LW staff, rather than on whoever happens to press the button first.

According to http://shapleyvalue.com/?example=8, if there were 6 people willing to push the button, the LW team would deserve 85% of the blame. (Here I am counting the people whose actions facilitate bringing down the site as part of the coalition.)

I am not quite sure how to take into account all the people who chose not to take down the website (and thus delayed any shutdown), and there is some value in running the Petrov Day event, so the above does not take everything into account.

Tweaking some values on the website to model this, where value = 7 if the LW team and/or all of the other users refuse to shut down the site, and 7 - i otherwise (where i is the highest-numbered player who shuts down the site, higher numbers shutting it down sooner), I get these values:

The Shapley value of player 1 (low-karma button pusher) is: -0.023809523809524
The Shapley value of player 2 is: -0.057142857142857
The Shapley value of player 3 is: -0.10714285714286
The Shapley value of player 4 is: -0.19047619047619
The Shapley value of player 5 is: -0.35714285714286
The Shapley value of player 6 (high-karma button pusher) is: -0.85714285714286
The Shapley value of player 7 (the LW team) is: -4.4071428571429

(All the values are negative because, for simplicity's sake, this measures value purely in site uptime, assigning no value to running the experiment or to keeping the site online despite the experiment, and not shutting down the site is what maximizes uptime.)
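The game described is small enough to brute-force. A sketch that recomputes the listed Shapley values directly from the characteristic function above (uptime is 7 unless the LW team enables the button and at least one user pushes it, in which case uptime is 7 - i for the highest-numbered pusher i):

```python
from itertools import permutations
from math import factorial

LW = 7  # player 7: the LW team
PLAYERS = tuple(range(1, 8))  # players 1-6 push sooner the higher their number

def uptime(coalition):
    """Site uptime if exactly this set of players acts."""
    pushers = [p for p in coalition if p != LW]
    if LW in coalition and pushers:
        return 7 - max(pushers)  # highest-numbered pusher acts soonest
    return 7  # site stays up if the LW team refuses, or nobody pushes

def shapley_values():
    phi = {p: 0.0 for p in PLAYERS}
    for order in permutations(PLAYERS):  # 7! = 5040 orderings
        members = set()
        for p in order:
            before = uptime(members)
            members.add(p)
            phi[p] += uptime(members) - before  # marginal contribution
    return {p: phi[p] / factorial(len(PLAYERS)) for p in PLAYERS}

print(shapley_values())
```

This reproduces the listed numbers (about -0.0238 for player 1 and about -4.407 for the LW team), and the values sum to -6, the total uptime lost when everyone acts.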

Comment by Forged Invariant on Gene drives: why the wait? · 2022-09-21T04:23:24.424Z · LW · GW

Here is an example of something that comes close, from "The Selfish Gene":

> One of the best-known segregation distorters is the so-called t gene in mice. When a mouse has two t genes it either dies young or is sterile, t is therefore said to be lethal in the homozygous state. If a male mouse has only one t gene it will be a normal, healthy mouse except in one remarkable respect. If you examine such a male's sperms you will find that up to 95 per cent of them contain the t gene, only 5 per cent the normal allele. This is obviously a gross distortion of the 50 per cent ratio that we expect. Whenever, in a wild population, a t allele happens to arise by mutation, it immediately spreads like a brushfire. How could it not, when it has such a huge unfair advantage in the meiotic lottery? It spreads so fast that, pretty soon, large numbers of individuals in the population inherit the t gene in double dose (that is, from both their parents). These individuals die or are sterile, and before long the whole local population is likely to be driven extinct. There is some evidence that wild populations of mice have, in the past, gone extinct through epidemics of t genes.
>
> Not all segregation distorters have such destructive side-effects as t. Nevertheless, most of them have at least some adverse consequences.

From the discussion of human-engineered gene drives, it seems they would cause sterility in only one sex, which would keep the gene from dying off as quickly.

Comment by Forged Invariant on Why all the fuss about recursive self-improvement? · 2022-06-15T05:18:16.118Z · LW · GW

I had not thought of self-play as a form of recursive self-improvement, but now that you point it out, it seems like a great fit. Thank you.

I had been assuming (without articulating the assumption) that any recursive self improvement would be improving things at an architectural level, and rather complex (I had pondered improvement of modular components, but the idea was still to improve the whole model). After your example, this assumption seems obviously incorrect.

AlphaGo was improving its training environment, but not any other part of the training process.

Comment by Forged Invariant on A Bayesian Aggregation Paradox · 2021-11-25T04:23:30.979Z · LW · GW

The left hand side of the example is deliberately making the mistake described in your article, as a way to build intuition on why it is a mistake. 

(Adding instead of averaging in the update summaries was an unintended mistake)

Thanks for explaining how to summarize updates, it took me a bit to see why averaging works.

Comment by Forged Invariant on A Bayesian Aggregation Paradox · 2021-11-23T04:28:53.479Z · LW · GW

Seeing the equations, it was hard to intuitively grasp why updates work this way. This example made things more intuitive for me:

If an event can have 3 outcomes, and we encounter strong evidence against outcomes B and C, then the update looks like this:

The information about what hypotheses are in the running is important, and pooling the updates can make the evidence look much weaker than it is.
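As a hedged illustration of that last point (the numbers here are my own, not from the post): evidence whose likelihood under A looks unimpressive can still make A near-certain, because the update's real work is eliminating the other hypotheses.

```python
# Illustrative numbers (my own choices): three outcomes, and evidence E that
# is unremarkable under A but nearly rules out B and C.
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
likelihood = {"A": 0.5, "B": 0.001, "C": 0.001}  # P(E | hypothesis)

unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: unnorm[h] / z for h in unnorm}

# P(E|A) = 0.5 looks like weak evidence, yet A ends up near-certain, because
# the update mostly works by eliminating B and C. A summary that pools B and
# C into a single "not A" bucket loses track of which hypotheses were
# knocked out of the running.
print({h: round(pr, 3) for h, pr in posterior.items()})
```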

Comment by Forged Invariant on Petrov Day Retrospective: 2021 · 2021-10-22T07:22:31.383Z · LW · GW

I found that the postmortem over-focused on what went wrong or was sub-optimal. I would like to point out that I found the event fun, despite being a lurker with no code.

Comment by Forged Invariant on Petrov Day Retrospective: 2021 · 2021-10-22T07:20:04.684Z · LW · GW

There were some reports of people seeing a frozen countdown on the button, which disappeared when the page was refreshed. Was this an intentional false alarm? I had assumed so, since a false alarm accompanied by some evidence of its falsity echoes parts of Petrov's situation nicely.

Comment by Forged Invariant on Petrov Day Retrospective: 2021 · 2021-10-22T07:17:34.164Z · LW · GW
Comment by Forged Invariant on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T21:44:36.723Z · LW · GW

Just be aware that other users have already noticed messages which could be deliberate false alarms: https://www.lesswrong.com/posts/EW8yZYcu3Kff2qShS/petrov-day-2021-mutually-assured-destruction?commentId=JbsutYRotfPDLNskK

Comment by Forged Invariant on Covid 8/12: The Worst Is Over · 2021-08-16T07:46:59.051Z · LW · GW

I had not noticed my own Gell-Mann amnesia when reading that bit, and therefore find your response quite convincing. I had thought that Zvi's answer to (D) made sense due to the FDA being over-cautious about approving things, but both the scope of the precedent and the kinds/directions of errors had not registered with me.

Comment by Forged Invariant on The shoot-the-moon strategy · 2021-07-22T07:33:20.669Z · LW · GW

One possible strategy would be to make AI more dangerous as quickly as possible, in the hope that it produces a strong reaction and the addition of safety protocols. Doing this with existing tools, so that it is not an AGI, makes it survivable. This reminds me a bit of Robert Miles' facial-recognition and blinding-laser robot. (Which of course is never used to actually cause harm.)

Comment by Forged Invariant on Potential Bottlenecks to Taking Over The World · 2021-07-09T06:45:55.772Z · LW · GW

If the AGI can simply double its cognitive throughput, it can just repeat the action "sleuth to find an under-priced stock" as needed. This does not exhaust the order book until the entire market is operating at AGI-comparable efficiency, at which point the AGI probably controls a large (or majority) share of the trading volume.

Also, the other players would have limited ability to imitate the AGI's tactics, so its edge would last until they left the market. 

Comment by Forged Invariant on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-21T06:38:08.351Z · LW · GW

A hypothesis I had was that the US was sticking to an exact formula due to higher vaccine hesitancy, in order to "play it safe" and give anti-vaxxers less to criticize. After looking at a small handful of countries, I think this is not a significant cause of the difference in responses.

If this were true, I would expect countries with higher vaccine hesitancy to be less likely to do first doses first (FDF).

Checking [this data](https://www.thelancet.com/cms/10.1016/S0140-6736(20)31558-0/attachment/720358f5-8df0-405b-b06f-7734cf542a58/mmc1.pdf), which was near the top of the search results, and using eyeballed values for "strongly agree" responses to "I think vaccines are safe" as the measure:

| Country | Vaccine confidence | FDF? |
|---|---|---|
| Canada | 75% | Yes (March 3) |
| US | 66% | No |
| Mexico | 60% | Yes (Jan 22) |
| UK | 50% | Yes (Jan 4) |
| Germany | 50% | Yes (March 5) |

Obviously this is a really small sample and I am being loose with the data, but it does not support the hypothesis: there is no obvious correspondence between vaccine confidence and when FDF started. I chose the countries in question off the top of my head.

Dates and sources were found by searching online, I have not carefully checked them.

https://www.statista.com/statistics/1195560/coronavirus-covid-19-vaccinations-number-germany/ This graph looks like there is about a 3-week lag in 2nd doses.

https://www.thetimes.co.uk/article/germany-follows-uk-by-delaying-second-dose-of-covid-vaccine-mk65kkh9w March 05, Germany starts FDF.

https://abcnews.go.com/Health/wireStory/mexico-russias-sputnik-shortages-limited-2nd-doses-77617433 Mexico does first doses first due to supply issues with the Sputnik vaccine; the first dose can be produced faster. The article does not mention the save more lives argument.

https://www.nasdaq.com/articles/mexico-may-delay-second-vaccine-doses-and-allow-private-orders-to-tame-raging-pandemic The tone of this piece seems to suggest FDF out of desperation.

Comment by Forged Invariant on Why did the UK switch to a 12 week dosing schedule for COVID-19 vaccines? · 2021-06-21T05:24:27.164Z · LW · GW

From my understanding of the Canada situation, it may have been motivated by less access to vaccines initially. The US did very well in terms of getting lots of vaccines soon (https://ourworldindata.org/covid-vaccinations) while Canada took about 4 months after the US to really get going. Canada may have been more desperate to prevent Covid (or to have their numbers stop lagging the US), and thus been less risk-averse.

This argument does not work for the UK, as they have been ahead of the US the whole time.

https://www.msn.com/en-ca/news/canada/vaccine-panel-says-canada-can-delay-second-dose-of-covid-19-vaccine-if-shortage/ar-BB1cIJaG This article cites the decision being partly justified by limited supplies and how bad things were.

Comment by Forged Invariant on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-17T20:13:24.207Z · LW · GW

I like how this proposal makes the player strategies explicit, and how they are incorporated into the calculation. I also think the edge case where the agents' actions have no effect on the result is handled sensibly.

I think that this proposal making alignment symmetric might be undesirable. Taking the prisoner's dilemma as an example, if s = always cooperate and r = always defect, then I would say s is perfectly aligned with r, and r is not at all aligned with s.

The result of 0 alignment for the Nash equilibrium of PD seems correct.

I think this should be the alignment matrix for pure-strategy, single-shot PD:

Here the first element of each ordered pair represents A's alignment with B (assuming we use the [0,1] interval).

I think in this case the alignments are simple, because A can choose to either maximize or to minimize B's utility.
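A sketch of one way to formalise this (my own reading of the proposal, with standard illustrative PD payoffs T=5, R=3, P=1, S=0, which are not from the original comment): A's alignment with B is where A's actual action places B's payoff between the worst and best that A's alternatives allowed, holding B's action fixed.

```python
PAYOFFS = {  # (A's action, B's action) -> (A's utility, B's utility)
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def alignment(a_action, b_action):
    """A's alignment with B on [0, 1], given both pure actions."""
    options = [PAYOFFS[(a, b_action)][1] for a in ("C", "D")]
    lo, hi = min(options), max(options)
    actual = PAYOFFS[(a_action, b_action)][1]
    return (actual - lo) / (hi - lo) if hi > lo else 0.0

# PD is symmetric, so B's alignment with A at (a, b) equals alignment(b, a).
for a in ("C", "D"):
    for b in ("C", "D"):
        print((a, b), alignment(a, b), alignment(b, a))
```

Under this measure, whoever cooperates is fully aligned with the other player (1) and whoever defects is not at all aligned (0), matching the always-cooperate vs. always-defect asymmetry above and giving 0 alignment at the defect-defect Nash equilibrium.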

Comment by Forged Invariant on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-17T06:15:17.560Z · LW · GW

1/1  0/0

0/0  0.8/-1

I have put the preferred state for each player in bold. I think by your rule this works out to 50% aligned. However, the Nash equilibrium is both players choosing the 1/1 result, which seems perfectly aligned (intuitively).

1/0.5  0/0

0/0  0.5/1

In this game, all preferred states are shared, yet there is a Nash equilibrium where each player plays the move that can get them 1 point 2/3 of the time, and the other move 1/3 of the time. I think it would be incorrect to call this 100% aligned.

(These examples were not obvious to me, and tracking them down helped me appreciate the question more. Thank you.)
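The claimed mixed equilibrium of the second game can be checked directly: each player's mix must leave the other indifferent between their two pure actions. A quick verification sketch:

```python
from fractions import Fraction as F

# The second game above; rows are one player's actions, columns the other's:
#   (1, 0.5)  (0, 0)
#   (0, 0)    (0.5, 1)
U_row = [[F(1), F(0)], [F(0), F(1, 2)]]  # row player's payoffs
U_col = [[F(1, 2), F(0)], [F(0), F(1)]]  # column player's payoffs

p = F(2, 3)  # row player's weight on row 0, the action that can pay them 1
q = F(2, 3)  # column player's weight on col 1, the action that can pay them 1

# Expected payoff of each pure action against the opponent's mix:
row0 = (1 - q) * U_row[0][0] + q * U_row[0][1]
row1 = (1 - q) * U_row[1][0] + q * U_row[1][1]
col0 = p * U_col[0][0] + (1 - p) * U_col[1][0]
col1 = p * U_col[0][1] + (1 - p) * U_col[1][1]
assert row0 == row1 and col0 == col1  # mutual indifference: an equilibrium

print(row0, col0)  # prints 1/3 1/3
```

Each player nets only 1/3 in this equilibrium, versus the 0.5 to 1 they get at either coordinated outcome, which supports the point that shared preferred states do not imply 100% alignment.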

Comment by Forged Invariant on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-17T04:45:01.085Z · LW · GW

Another point you could fix using intuition would be complete disinterest. It makes sense to put it at 0 on the [-1, 1] interval.

Assuming rational utility maximizers, a payoff matrix that results in a disinterested agent would be:

1/0  1/1

0/0  0/1

Since neither agent can influence the rewards of the other, it makes sense to say that they are not aligned.

More generally, if arbitrary changes to one player's payoffs have no effect on the behaviour of the other player, then the other player is disinterested.
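That disinterest test can be sketched directly (my own formalisation of the idea): perturb one player's payoffs arbitrarily and check that the other player's best responses never change.

```python
import random

# The payoff matrix above, as (row player's utility, column player's utility):
U = {(0, 0): (1, 0), (0, 1): (1, 1),
     (1, 0): (0, 0), (1, 1): (0, 1)}

def best_response_col(u, row):
    """Column player's best reply to a given row."""
    return max((0, 1), key=lambda c: u[(row, c)][1])

random.seed(0)
for _ in range(100):
    # Replace the row player's payoffs with arbitrary values...
    perturbed = {cell: (random.uniform(-10, 10), b) for cell, (_, b) in U.items()}
    for row in (0, 1):
        # ...and the column player's best response never changes, because it
        # only reads the column player's own payoffs.
        assert best_response_col(perturbed, row) == best_response_col(U, row)
```

Here the column player always prefers column 1 regardless of the row, and no change to the row player's payoffs can alter that, so the column player is disinterested in the row player.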