## Comments

**migueltorrescosta** on Design thoughts for building a better kind of social space with many webs of trust · 2020-09-06T12:10:40.455Z · LW · GW

First of all thank you for your post, it’s very thorough :)

While I want to reread it in case I missed any arguments for this, the main issue I usually have with these trust webs is their propensity to create echo chambers: by relying only on those you trust, and on those they trust, you might filter out other people's opinions not because they are less valid, but because you disagree on some fundamental axioms. Have you given any thought to how to avoid echo chambers in these webs of trust?

Best, Miguel

**migueltorrescosta** on Is there an easy way to turn a LW sequence into an epub? · 2020-07-19T00:01:04.902Z · LW · GW

I'd really like this feature as well

**migueltorrescosta** on Allowing Exploitability in Game Theory · 2020-05-18T00:32:30.074Z · LW · GW

If Tim tells the truth with probability $p$, you simply get that you should guess what he said if $p>\frac{1}{1000000}$, and guess something else if $p<\frac{1}{1000000}$. For Tim the optimal choice is to have $p=\frac{1}{1000000}$ in order not to give you any information. Anything else is playing on psychology and human biases, which exist in reality; but trying to play a "perfect" game while assuming your opponent is not playing one also leaves you vulnerable to exploitation, as you mentioned.

It seems you are trying to get a deeper understanding of human fallibility rather than playing optimal games. Have I misunderstood it?
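
As a rough numerical check (the exact game from the post isn't restated here, so I'm assuming a guess-one-of-$N$ setup with $N = 1{,}000{,}000$ options):

```python
# Assumed setup (not necessarily the post's exact one): Tim names one of
# N options and tells the truth with probability p. Trusting him wins
# with probability p; guessing a different option uniformly wins with
# probability (1 - p) / (N - 1). The two coincide exactly at p = 1/N.
N = 1_000_000

def best_response_value(p, n=N):
    trust = p                   # win probability if you repeat Tim's claim
    defect = (1 - p) / (n - 1)  # win probability if you guess another option
    return max(trust, defect)

# At p = 1/N you are indifferent and Tim leaks no information;
# at any other p you can do strictly better than 1/N.
assert abs(best_response_value(1 / N) - 1 / N) < 1e-12
assert best_response_value(2 / N) > 1 / N
assert best_response_value(0.0) > 1 / N
```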

**migueltorrescosta** on A method for fair bargaining over odds in 2 player bets! · 2020-01-14T14:53:47.348Z · LW · GW

Lovely idea.

Minor point: it feels to me that the averaged bet isn't the usual arithmetic mean but instead the harmonic mean of the bets taken. The difference might be small, and more importantly there's no reason why the arithmetic mean is fairer than the harmonic mean, but it was just a small thing I noticed 😜
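
A tiny sketch of what I mean, with two hypothetical decimal-odds quotes (the numbers are mine, not from the post):

```python
from statistics import harmonic_mean

# Two hypothetical quoted decimal odds for the same event,
# with implied probabilities 1/2 and 1/4.
odds_a, odds_b = 2.0, 4.0

arith = (odds_a + odds_b) / 2           # arithmetic mean of the odds: 3.0
harm = harmonic_mean([odds_a, odds_b])  # harmonic mean: 2 / (1/2 + 1/4)

# The harmonic mean of the odds is exactly the odds implied by the
# arithmetic mean of the implied probabilities, so the two "averages"
# genuinely disagree.
mean_prob = (1 / odds_a + 1 / odds_b) / 2
assert abs(harm - 1 / mean_prob) < 1e-12
assert arith != harm
```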

**migueltorrescosta** on Swarm AI (tool) · 2019-05-02T15:23:23.736Z · LW · GW

I’m up for this

**migueltorrescosta** on Constructing Goodhart · 2019-02-06T06:26:08.697Z · LW · GW

Thank you habryka!

**migueltorrescosta** on Constructing Goodhart · 2019-02-03T23:28:58.846Z · LW · GW

Note: The LaTeX is not rendering properly on this reply. Does anyone know what the reason could be?

I chose that value because the optimal point in that case is the set of integers, but the argument holds for any positive real constant, and with either equality, less-than, or not-greater-than.

There is one thing we assumed, which is the form of the proxy utility function given the true utility function. This is not necessarily obvious, and even more so if we think of more convoluted utility functions: if our utility were given by something more complicated, what would our proxy be when we only know part of the variables?

To answer this question in general, my first thought would be to build a function that maps a vector space, a utility function, the manifold S of possible points, and a map from those points to a filtration (telling us the information we have available at each point) to a new utility function.

However, this full generality seems a lot harder to describe.
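
In symbols (my own notation, since the original LaTeX did not render), the construction would look something like:

```latex
T : \bigl(V,\; U,\; S,\; x \mapsto \mathcal{F}_x\bigr) \longmapsto \hat{U},
\qquad \hat{U}(x) = \mathbb{E}\bigl[\, U \mid \mathcal{F}_x \,\bigr]
```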

Best, Miguel

**migueltorrescosta** on Constructing Goodhart · 2019-02-03T22:37:17.769Z · LW · GW

I think it's possible to build a Goodharts example on a 2D vector space.

Say you get to choose two parameters $x$ and $y$. You want to maximize their sum, but you are constrained by (say) $x^2 + y^2 \le 1$. Then the maximum is attained when $x = y = \frac{1}{\sqrt{2}}$. Now assume that $y$ is hard to measure, so you use $x$ as a proxy. Then you move from the optimal point we had above to the worse situation where $x = 1$ and $y = 0$, but $x + y = 1 < \sqrt{2}$.

The key point being that you are searching for a solution in a manifold inside your vector, but since some dimensions of that vector space are too hard or even impossible to measure, you end up in sub optimal points of your manifold.
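
A numerical sketch of the example above; the constraint $x^2 + y^2 \le 1$ is my own choice for concreteness, since the original formulas did not render:

```python
import numpy as np

# True goal: maximize x + y on the disc x**2 + y**2 <= 1 (assumed
# constraint, for illustration). Proxy goal: maximize x alone,
# because y is hard to measure.
theta = np.linspace(0, np.pi / 2, 100_001)
x, y = np.cos(theta), np.sin(theta)  # boundary of the feasible region

true_best = (x + y).max()            # sqrt(2), attained at x = y
proxy_pick = np.argmax(x)            # the proxy optimizer picks x = 1, y = 0
proxy_value = (x + y)[proxy_pick]    # true utility at the proxy optimum

assert abs(true_best - np.sqrt(2)) < 1e-6
assert proxy_value == 1.0            # strictly worse than sqrt(2)
```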

In formal terms you have a true utility function $U$ based on all the data you have, and a misaligned utility function $\hat{U}$ based only on the subspace of known variables, where $\hat{U}$ could be obtained by integrating out the unknown dimensions if we know their probability distribution, or by any other technique that might be more suitable.
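
With hypothetical notation (the original symbols did not render), the integrating-out step would read:

```latex
\hat{U}(x_{\text{known}}) = \int U(x_{\text{known}}, x_{\text{unknown}})\,
  p(x_{\text{unknown}})\,\mathrm{d}x_{\text{unknown}}
```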

Would this count as a more substantive assumption?

Best, Miguel

Edit: added the "In formal terms" paragraph

**migueltorrescosta** on One Website To Rule Them All? · 2019-01-11T22:11:11.147Z · LW · GW

Have you seen Kialo?

**migueltorrescosta** on In Logical Time, All Games are Iterated Games · 2018-09-20T08:53:26.598Z · LW · GW

Thank you for your post abramdemski!

I failed to understand why you can't arrive at a solution for the Single-Shot game via Iterated Play without memory of the previous game. In order to clarify my ideas let me define two concepts first:

Iterated Play with memory: We repeatedly play the game knowing the results of the previous games.

Iterated Play without memory: We repeatedly play the game, while having no memory of the previous play.

The distinction is important: with memory we can at any time search all previous games and act accordingly, allowing for strategies such as Tit-for-Tat and other history-dependent strategies. Without memory we can still learn (for example, by applying some sort of Bayesian update to our probability estimates of each move being played), whilst not having access to the previous games before each move. That way we can "learn" how to best play the single-shot version of the game through iterated play.
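
A minimal sketch of learning without memory (the 70/30 opponent and the matching game are my own toy assumptions, not from the post):

```python
import random
random.seed(0)

# "Iterated play without memory": between games the learner retains only
# aggregate counts of the opponent's moves (enough for a Bayesian
# estimate of their mixed strategy), never the game-by-game record, so
# history-dependent strategies like Tit-for-Tat are ruled out while the
# single-shot best response can still be learned.
counts = {"heads": 1, "tails": 1}  # Laplace prior

def opponent_move():
    # Hypothetical opponent playing a fixed 70/30 mixed strategy.
    return "heads" if random.random() < 0.7 else "tails"

for _ in range(10_000):
    counts[opponent_move()] += 1

p_heads = counts["heads"] / sum(counts.values())
best_response = "heads" if p_heads > 0.5 else "tails"  # matching game

assert abs(p_heads - 0.7) < 0.02  # estimate converged on the true mix
assert best_response == "heads"
```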

Does what I said above need any clarification, and is there any failure in its logic?

Best Regards, Miguel

**migueltorrescosta** on Book Review - Probability and Finance: It's Only a Game! · 2018-01-19T22:31:00.754Z · LW · GW

You mention that a Martingale is a betting strategy where the player doubles their bet each time.

A Martingale is a fair game (i.e. the expected outcome is zero). If your outcome is given by a fair coin toss, and you receive only what you bet, then that is a Martingale game (you win £X with probability $\frac{1}{2}$ and lose £X with probability $\frac{1}{2}$ too).

Then you could say that doubling your bet is a betting strategy on a Martingale game, BUT not that a Martingale game is a betting strategy where the player doubles their bet each time (in the same way that a dog is an animal but an animal is not a dog).
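
To make the distinction concrete, here's a small exhaustive check (the fair-coin doubling game is my own minimal setup): doubling after losses is one strategy on a martingale game, and its expected profit is still exactly zero.

```python
from itertools import product

def doubling_profit(flips, base=1):
    # Play the doubling strategy over a fixed sequence of coin flips:
    # win or lose the current stake, and double the stake after a loss.
    profit, stake = 0, base
    for win in flips:
        profit += stake if win else -stake
        stake = base if win else 2 * stake
    return profit

# Exhaust all 2**10 equally likely flip sequences; summing the profits
# gives the (unnormalized) expectation, which is exactly zero.
n = 10
total = sum(doubling_profit(seq) for seq in product([True, False], repeat=n))
assert total == 0  # the game stays fair regardless of the strategy
```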

Does that make sense?

Other than that I'm very intrigued by the claim made. Definitely worth reading, but my hopes for something worthwhile are slim :P