Posts

[SEQ RERUN] The Cartoon Guide to Löb's Theorem 2012-08-05T08:28:19.128Z
Focus on rationality 2012-06-02T19:25:37.362Z
[META] Recent Posts for Discussion and Main 2012-05-13T10:42:39.986Z
Rationality Quotes April 2012 2012-04-03T00:42:04.135Z
Harry Potter and the Methods of Rationality discussion thread, part 11 2012-03-17T09:41:23.620Z
Harry Potter and the Methods of Rationality discussion thread, part 10 2012-03-07T16:46:49.993Z
Open thread, November 2011 2011-11-02T18:19:16.423Z
LessWrong running very slow? 2011-09-30T20:15:32.128Z
Word Pronunciation 2011-09-10T14:25:58.162Z
Harry Potter and the Methods of Rationality discussion thread, part 9 2011-09-09T13:29:52.355Z
Writing guide? 2011-07-26T07:06:29.801Z
Signatures for posts 2011-07-11T18:45:13.050Z
Rationality Quotes: June 2011 2011-06-01T08:17:07.695Z

Comments

Comment by Oscar_Cunningham on Translations Should Invert · 2023-10-06T06:13:55.927Z · LW · GW

Yup, you want a bi-interpretation:

Two models or theories are mutually interpretable, when merely each is interpreted in the other, whereas bi-interpretation requires that the interpretations are invertible in a sense after iteration, so that if one should interpret one model or theory in the other and then re-interpret the first theory inside that, then the resulting model should be definably isomorphic to the original universe

Bi-interpretation in weak set theories

Comment by Oscar_Cunningham on How tall is the Shard, really? · 2023-06-26T11:05:57.946Z · LW · GW

They will in fact stop you from taking fancy laser measuring kit to the top of the Shard: https://www.youtube.com/watch?v=ckcdqlo3pYc&t=792s.

Comment by Oscar_Cunningham on Kelly betting vs expectation maximization · 2023-05-31T19:02:10.827Z · LW · GW

I don't think that's the only reason - if I value something linearly, I still don't want to play a game that almost certainly bankrupts me.

I still think that's because you intuitively know that bankruptcy is worse-than-linearly bad for you. If your utility function were truly linear then it's true by definition that you would trade an arbitrary chance of going bankrupt for a tiny chance of a sufficiently large reward.

I mean, that's not obvious - the Kelly criterion gives you, in the example with the game, E(money) = $240, compared to $246.61 with the optimal strategy. That's really close.

Yes, but the game is very easy, so a lot of different strategies get you close to the cap.

Comment by Oscar_Cunningham on Kelly betting vs expectation maximization · 2023-05-31T16:10:24.599Z · LW · GW

It bankrupts you with probability 1 - 0.6^300, but in the other 0.6^300 of cases you get a sweet sweet $25 × 2^300. This nets you an expected $1.42 × 10^25.

Whereas Kelly betting only has an expected value of $25 × (0.6×1.2 + 0.4×0.8)^300 = $3220637.15.
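
Here's a quick check of both numbers (a minimal sketch, ignoring the cap, as the calculation above does):

```python
p, rounds, start = 0.6, 300, 25

# Bet everything every round (no cap): you either win all 300 flips or go bankrupt.
all_in = start * p**rounds * 2**rounds
print(all_in)  # ≈ 1.42e+25

# Kelly (bet 20% of bankroll): each round multiplies the expectation
# by 0.6*1.2 + 0.4*0.8 = 1.04.
kelly = start * (p * 1.2 + (1 - p) * 0.8) ** rounds
print(kelly)  # ≈ 3220637.15
```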

Obviously humans don't have linear utility functions, but my point is that the Kelly criterion still isn't the right answer when you make the assumptions more realistic. You actually have to do the calculation with the actual utility function.

Comment by Oscar_Cunningham on Kelly betting vs expectation maximization · 2023-05-31T13:59:37.733Z · LW · GW

The answer is that you bet approximately Kelly.

No, it isn't. Gwern never says that anywhere, and it's not true. This is a good example of what I'm saying.

For clarity the game is this. You start with $25 and you can bet any multiple of $0.01 up to the amount you have. A coin is flipped with a 60/40 bias in your favour. If you win you double the amount you bet, otherwise you lose it. There is a cap of $250, so after each bet you lose any money over this amount (so in fact you should never make a bet that could take you over). This continues for 300 rounds.

Bob's edge is 20%, so the Kelly criterion would recommend that he bets $5. If he continues to use the Kelly criterion in every round (except if this would take him over the cap, in which case he bets to take him to the cap) he ends with an average of $238.04.

As explained on the page you link to, the optimal strategy and expected value can be calculated inductively based on the number of bets remaining. The optimal starting bet is $1.99, and if you continue to bet optimally your average amount of money is $246.61.

So in this game the optimal starting bet is only 20% of the Kelly bet. The Kelly strategy bets too riskily, and leaves $8.57 on the table compared to the optimal strategy.

Kelly isn't optimal in any limit either. As the number of rounds goes to infinity, the optimal strategy is to bet just $0.01, since this maximises the likelihood of never going bankrupt. If instead the cap goes to infinity then the optimal strategy is to bet everything on every round. Of course you could tune the cap and the number of rounds together so that Kelly was optimal on the first bet, but then it still wouldn't be optimal for subsequent bets.

(EDIT: It's actually not certain that the optimal strategy in the first round is $1.99, since floating point accuracy in the computations becomes relevant and many starting bets give the same result. But $5 is so far from optimum that it genuinely did give a lower expected value, so we can say for certain that Kelly is not optimal.)
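
For reference, here's a minimal sketch of that backward induction, using $1 stakes instead of $0.01 so that it runs in a few seconds (the coarser grid shifts the numbers slightly):

```python
P, CAP, ROUNDS = 0.6, 250, 300

# E[m] = expected final money under optimal play with the current number of
# rounds remaining, starting from $m.  Base case: zero rounds left.
E = [float(m) for m in range(CAP + 1)]
for _ in range(ROUNDS):
    E = [
        max(P * E[min(m + b, CAP)] + (1 - P) * E[m - b] for b in range(m + 1))
        for m in range(CAP + 1)
    ]
print(E[25])  # optimal expected value starting from $25
```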

Comment by Oscar_Cunningham on Kelly betting vs expectation maximization · 2023-05-29T08:17:45.504Z · LW · GW

If Bob wants to maximise his money at the end, then he really should bet it all every round. I don't see why you would want to use Kelly rather than maximising expected utility. Not maximising expected utility means that you expect to get less utility.

Comment by Oscar_Cunningham on Kelly betting vs expectation maximization · 2023-05-28T06:27:33.960Z · LW · GW

Can you be more precise about the exact situation Bob is in? How many rounds will he get to play? Is he trying to maximise money, or trying to beat Alice? I doubt the Kelly criterion will actually be his optimal strategy.

Comment by Oscar_Cunningham on A hundredth of a bit of extra entropy · 2022-12-26T16:20:46.872Z · LW · GW

I tend to view the golden ratio as the least irrational irrational number. It fills in the next gap after all the rational numbers. In the same way, 1/2 is the noninteger which shares the most algebraic properties with the integers, even though it's furthest from them in a metric sense.

Comment by Oscar_Cunningham on A hundredth of a bit of extra entropy · 2022-12-24T23:06:29.590Z · LW · GW

Nice idea! We can show directly that each term provides information about the next.

The density function of the distribution of the fractional part in the continued fraction algorithm converges to 1/[(1+x) ln(2)] (it seems this is also called the Gauss-Kuzmin distribution, since the two are so closely associated). So we can directly calculate the probability of getting a coefficient of n by integrating this from 1/(n+1) to 1/n, which gives -lg(1-1/(n+1)^2) as you say above. But we can also calculate the probability of getting an n followed by an m, by integrating this from 1/(n+1/m) to 1/(n+1/(m+1)), which gives -lg(1-1/((mn+1)(mn+m+n+2))). So dividing one by the other gives P(m|n) = lg(1-1/((mn+1)(mn+m+n+2)))/lg(1-1/(n+1)^2), which is rather ugly, but the point is that it does depend on n.

This turns out to be an anticorrelation. High numbers are more likely to be followed by low numbers, and vice versa. The probability of getting a 1 given you've just had a 1 is 36.6%, whereas if you've just had a very high number the probability of getting a 1 will be very close to 50% (since the distribution of the fractional part is tending to uniform).
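
A quick numerical check of these formulas:

```python
from math import log2

def p(n):
    # P(coefficient = n) under the Gauss-Kuzmin distribution
    return -log2(1 - 1 / (n + 1) ** 2)

def p_given(m, n):
    # P(next coefficient = m | current coefficient = n), from the two integrals above
    joint = -log2(1 - 1 / ((m * n + 1) * (m * n + m + n + 2)))
    return joint / p(n)

print(p(1))              # ≈ 0.415
print(p_given(1, 1))     # ≈ 0.366, so a 1 is less likely right after a 1
print(p_given(1, 1000))  # ≈ 0.5, back to ~50% after a huge coefficient
```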

Comment by Oscar_Cunningham on Mastodon Linking Norms · 2022-11-10T15:20:55.435Z · LW · GW

I've only been on Mastodon a bit longer than the current Twitter immigrants, but as far as I know there's no norm against linking. But the server admins are all a bit stressed by the increased load. So I can understand why they'd be annoyed by any link that brought new users. I've been holding off on inviting new users to the instance I'm on, because the server is only just coping as it is.

Comment by Oscar_Cunningham on Covid 7/14/22: BA.2.75 Plus Tax · 2022-07-16T16:25:00.374Z · LW · GW

Apologies for asking an object level question, but I probably have Covid and I'm in the UK which is about to experience a nasty heatwave. Do we have a Covid survival guide somewhere?

(EDIT: I lived lol)

Comment by Oscar_Cunningham on Open & Welcome Thread - July 2022 · 2022-07-01T09:24:55.312Z · LW · GW

Is there a way to alter the structure of a futarchy to make it follow a decision theory other than EDT?

Comment by Oscar_Cunningham on Open & Welcome Thread - June 2022 · 2022-06-30T14:40:44.541Z · LW · GW

I got a good answer here: https://stats.stackexchange.com/q/579642/5751

Comment by Oscar_Cunningham on Open & Welcome Thread - June 2022 · 2022-06-18T18:36:38.387Z · LW · GW

Or is that still too closely tied to the explore-exploit paradigm?

Right. The setup for my problem is the same as the 'Bernoulli bandit', but I only care about the information and not the reward. All I see on that page is about exploration-exploitation.

Comment by Oscar_Cunningham on Open & Welcome Thread - June 2022 · 2022-06-18T17:55:12.181Z · LW · GW

What's the term for statistical problems that are like exploration-exploitation, but without the exploitation? I tried searching for 'exploration' but that wasn't it.

In particular, suppose I have a bunch of machines which each succeed or fail independently with a probability that is fixed separately for each machine. And suppose I can pick machines to sample to see if they succeed or fail. How do I sample them if I want to become 99% certain that I've found the best machine, while using as few samples as possible?

The difference with exploration-exploitation is that this is just a trial period, and I don't care about how many successes I get during this testing. So I want something like Thompson sampling, but for my purposes Thompson sampling oversamples the machine it currently thinks is best because it values getting successes rather than ruling out the second-best options.
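
What I want seems close to 'top-two Thompson sampling' from the best-arm identification literature. A minimal sketch, with made-up machine probabilities:

```python
import random

true_p = [0.30, 0.50, 0.55]  # made-up machines; arm 2 is secretly best
wins = [0] * 3
fails = [0] * 3

def posterior_best():
    # One Thompson draw: sample each machine's success rate from its
    # Beta posterior and return the arm with the highest sample.
    draws = [random.betavariate(w + 1, f + 1) for w, f in zip(wins, fails)]
    return max(range(3), key=draws.__getitem__)

for _ in range(20_000):
    arm = posterior_best()
    if random.random() < 0.5:  # half the time, play a challenger instead:
        for _ in range(50):    # redraw until a different arm wins a draw
            other = posterior_best()
            if other != arm:
                arm = other
                break
    if random.random() < true_p[arm]:
        wins[arm] += 1
    else:
        fails[arm] += 1

print(wins, fails)  # samples concentrate on the two best machines
```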

Comment by Oscar_Cunningham on Is there an equivalent of the CDF for grading predictions? · 2022-04-11T08:17:51.153Z · LW · GW

The problem is that this measures their amount of knowledge about the questions as well as their calibration.

My model would be as follows. For a fixed source of questions, each person has a distribution describing how much they know about the questions. It describes how likely it is that a given question is one they should say p on. Each person also has a calibration function f, such that when they should say p they instead say f(p). Then by assigning priors over the spaces of these distributions and calibration functions, and applying Bayes' rule, we get a posterior describing what we know about that person's calibration function.

Then assign a score to each calibration function which is the expected log score lost by a person using that calibration function instead of an ideal one, assuming that the questions were uniformly distributed in difficulty for them. Then their final calibration score is just the expected value of that score given our distribution of calibration functions for them.
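
As a toy version of this (the one-parameter family of calibration functions here is my own illustration, not part of the model):

```python
import numpy as np

# Toy calibration functions f_c(p) = p^c / (p^c + (1-p)^c); c = 1 means
# perfectly calibrated.  We put a grid posterior over c.
rng = np.random.default_rng(0)
cs = np.linspace(0.2, 3.0, 200)
log_post = np.zeros_like(cs)  # flat prior over the grid

def f(p, c):
    return p**c / (p**c + (1 - p) ** c)

c_true = 0.6  # the simulated person's true (mis)calibration
for _ in range(500):
    p = rng.uniform(0.01, 0.99)  # how much they know about this question
    outcome = rng.random() < p
    stated = f(p, c_true)        # the probability they actually announce
    # Under candidate c, the stated value implies true probability f(stated, 1/c).
    implied = f(stated, 1 / cs)
    log_post += np.log(implied if outcome else 1 - implied)

print(cs[np.argmax(log_post)])  # posterior mode recovers c_true ≈ 0.6
```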

Comment by Oscar_Cunningham on Why Iraq is so violent · 2022-04-05T08:14:34.340Z · LW · GW

I was expecting a post about tetraethyllead.

Comment by Oscar_Cunningham on Sums and products · 2022-03-28T08:59:19.161Z · LW · GW

I gave some more examples here: https://www.reddit.com/r/math/comments/9gyh3a/if_you_know_the_factors_of_an_integer_x_can_you/e68u5x4/

Comment by Oscar_Cunningham on Rational and irrational infinite integers · 2022-03-24T15:27:14.600Z · LW · GW

You just have to carefully do the algebra to get an inductive argument. The fact that the last digit is 5 is used directly.

Suppose n is a number that ends in 5, and such that the last N digits stay the same when you square it. We want to prove that the last N+1 digits stay the same when you square n^2.

We can write n = m*10^N + p, where p has N digits, and so n^2 = m^2*10^(2N) + 2mp*10^N + p^2. Note that since 2p ends in 0, the term 2mp*10^N actually divides by 10^(N+1). Then since the two larger terms divide by 10^(N+1), n^2 agrees with p^2 on its last N+1 digits, and so p^2 agrees with p on at least its last N digits. So we may write p^2 = q*10^N + p. Hence n^2 = m^2*10^(2N) + 2mp*10^N + q*10^N + p.

Squaring this yields (n^2)^2 = (terms dividing by 10^(N+1)) + 2qp*10^N + p^2. Again, 2p ends in 0, so 2qp*10^N also divides by 10^(N+1). So the last N+1 digits of this agree with p^2, which we previously established also agrees with n^2. QED

A similar argument shows that the number generated in this way is the only 10-adic number that ends in 5 and squares to itself. So we also have that one minus this number is the only 10-adic number ending in 6 that squares to itself. You can also prove that 0 and 1 are the only numbers ending in 0 and 1 that square to themselves. The other digits can't square to themselves. So x^2 = x has precisely four solutions.
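
The construction is easy to run (a sketch; by the argument above each squaring fixes at least one more digit, so the loop terminates):

```python
N = 40  # how many 10-adic digits to compute
x = 5
while (x * x - x) % 10**N:
    x = x * x % 10**N
print(x)                # ...8212890625: the solution ending in 5
print((1 - x) % 10**N)  # ...1787109376: the solution ending in 6
```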

Comment by Oscar_Cunningham on Rational and irrational infinite integers · 2022-03-24T09:01:55.275Z · LW · GW

If we start with 5 and start squaring we get 5, 25, 625, 390625, 152587890625.... Note how some of the end digits are staying the same each time. If we continue this process we get a number ...8212890625 which is a solution of x^2 = x. We get another solution by subtracting this from 1 to get ...1787109376.

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-20T15:32:24.414Z · LW · GW

if not, then we'll essentially need another way to define determinant for projective modules because that's equivalent to defining an alternating map?

There's a lot of cases in mathematics where two notions can be stated in terms of each other, but it doesn't tell us which order to define things in.

The only other thought I have is that I have to use the fact that M is projective and finitely generated. This is equivalent to M being dualisable. So the definition is likely to use the dual module M* somewhere.

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-18T18:14:21.202Z · LW · GW

I'm curious about this. I can see a reasonable way to define Λ^P(M) in terms of sheaves of modules over Spec(R): Over each connected component, P has some constant dimension n, so we just let Λ^P(M) be Λ^n(M) over that component.

If we call this construction  then the construction I'm thinking of is . Note that  is locally -dimensional, so my construction is locally isomorphic to yours but globally twisted. It depends on  via more than just its local dimension. Also note that with this definition we will get that  is always isomorphic to .

But it sounds like you might not like this definition,

Right. I'm hoping for a simple definition that captures what the determinant 'really means' in the most general case. So it would be nice if it could be defined with just linear algebra without having to bring in the machinery of the spectrum.

and I'd be interested to know if you had a better way of defining Λ^P(M) (which will probably end up being equivalent to this).

I'm still looking for a nice definition. Here's what I've got so far.

If we pick a basis of R^n then it induces a bijection between Hom(R^n, M) and M^n. So we could define a map out of Hom(R^n, M) to be 'alternating' if and only if the corresponding map out of M^n is alternating. The interesting thing I noticed about this definition is that it doesn't depend on which basis you pick for R^n. So I have some hope that since this construction isn't basis dependent, I might be able to write down a basis-independent definition of it. Then it would apply equally well with R^n replaced with P, whereupon we can define Λ^P(M) as the universal alternating map out of Hom(P, M).

[Edit: Perhaps something in terms of generators and relations, with the generators being linear maps P → M?]

Yeah exactly. That's probably a simpler way to say what I was describing above. One embarrassing thing is that I don't even know how to describe the simplest relations, i.e. what the relations between those generators should be.

Comment by Oscar_Cunningham on [deleted post] 2022-03-15T09:40:34.848Z

I agree that 'credence' and 'frequency' are different things. But round here the word 'probability' does refer to credence rather than frequency. This isn't a mistake; it's just the way we're using words.

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-14T18:16:04.135Z · LW · GW

Another thing to say is if  then

.

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-14T16:43:12.285Z · LW · GW

My intuition for exp is that it tells you how an infinitesimal change accumulates over finite time (think compound interest). So the above expression is equivalent to det(I + εA) ≈ 1 + ε·tr(A) for infinitesimal ε. Thus we should think 'If I perturb the identity matrix, then the amount by which the unit cube grows is proportional to the extent to which each vector is being stretched in the direction it was already pointing'.
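
A quick numerical check with an arbitrary matrix:

```python
import numpy as np

A = np.array([[0.3, 1.2], [-0.7, 0.1]])  # arbitrary example matrix
eps = 1e-8
print(np.linalg.det(np.eye(2) + eps * A))  # 1.000000004...
print(1 + eps * np.trace(A))               # agrees to first order in eps
```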

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-14T12:53:39.244Z · LW · GW

Thank you for that intuition into the trace! That also helps make sense of det(exp(A)) = exp(tr(A)).

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-14T12:50:24.240Z · LW · GW

This is close to one thing I've been thinking about myself. The determinant is well defined for endomorphisms on finitely-generated projective modules over any ring. But the 'top exterior power' definition doesn't work there because such things do not have a dimension. There are two ways I've seen for nevertheless defining the determinant.

  • View the module as a sheaf of modules over the spectrum of the ring. Then the dimension is constant on each connected component, so you can take the top exterior power on each and then glue them back together.
  • Use the fact that finitely-generated projective modules are precisely those which are the direct summands of a free module. So given an endomorphism f : M → M you can write M ⊕ N ≅ R^n and then define det(f) = det(f ⊕ id_N).

These both give the same answer. However, I don't like the first definition because it feels very piecemeal and nonuniform, and I don't like the second because it is effectively picking a basis. So I've been working on my own definition where instead of defining Λ^n for natural numbers n we instead define Λ^P for finitely-generated projective modules P. Then the determinant is defined via det(f) = Λ^M(f).

Comment by Oscar_Cunningham on Whence the determinant? · 2022-03-14T10:52:56.637Z · LW · GW

I think the determinant is more mathematically fundamental than the concept of volume. It just seems the other way around because we use volumes in every day life.

Comment by Oscar_Cunningham on Deterministic Strategies Can Be Suboptimal · 2022-03-05T17:18:56.491Z · LW · GW

https://www.lesswrong.com/posts/AAqTP6Q5aeWnoAYr4/?commentId=WJ5hegYjp98C4hcRt

I don't dispute what you say. I just suggest that the confusing term "in the worst case" be replaced by the more accurate phrase "supposing that the environment is an adversarial superintelligence who can perfectly read all of your mind except bits designated 'random'".

Comment by Oscar_Cunningham on Deterministic Strategies Can Be Suboptimal · 2022-03-05T09:34:14.470Z · LW · GW

In this case P is the cumulative distribution function, so it has to approach 1 at infinity, rather than the area under the curve being 1. An example would be 1/(1+exp(-x)).

Comment by Oscar_Cunningham on A Warm Up Problem for Yudkowsky's New Fiction · 2022-03-05T08:20:09.238Z · LW · GW

A simple way to do this is for ROB to output the pair of integers {n, n+1} with probability (1/K)((K-1)/(K+1))^|n|, where K is some large number. Then even if you know ROB's strategy the best probability you have of winning is 1/2 + 1/(2K).

If you sample an event N times the variance in your estimate of its probability is about 1/N. So if we pick K >> √N then our probability of success will be statistically indistinguishable from 1/2.

The only difficulty is implementing code to sample from a geometric distribution with a parameter so close to 1.
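
It's manageable if you work in log space, though; a minimal sketch (the rejection step stops n = 0 being counted once per sign):

```python
import random
from math import log, log1p

K = 10**9                    # the large constant from above
LOG_R = log1p(-2 / (K + 1))  # log((K-1)/(K+1)), accurate even for huge K

def rob_pair():
    # Sample {n, n+1} with probability (1/K) * ((K-1)/(K+1))^|n|.
    while True:
        # Inverse-CDF sample of a geometric |n| with ratio (K-1)/(K+1).
        magnitude = int(log(1 - random.random()) / LOG_R)
        if magnitude > 0:
            n = random.choice([-1, 1]) * magnitude
            return {n, n + 1}
        if random.random() < 0.5:  # accept n = 0 only half the time
            return {0, 1}
```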

Comment by Oscar_Cunningham on Deterministic Strategies Can Be Suboptimal · 2022-03-05T03:24:08.442Z · LW · GW

The original problem doesn't say that ROB has access to your algorithm, or that ROB wants you to lose.

Comment by Oscar_Cunningham on Deterministic Strategies Can Be Suboptimal · 2022-03-05T02:39:06.773Z · LW · GW

Note that Eliezer believes the opposite: https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random.

Comment by Oscar_Cunningham on A Warm Up Problem for Yudkowsky's New Fiction · 2022-03-04T17:14:17.467Z · LW · GW

Right, I should have chosen a more Bayesian way to say it, like 'succeeds with probability greater than 1/2'.

Comment by Oscar_Cunningham on A Warm Up Problem for Yudkowsky's New Fiction · 2022-03-04T09:44:42.126Z · LW · GW

 The intended answer for this problem is the Frequentist Heresy in which ROB's decisions are treated as nonrandom even though they are unknown, while the output of our own RNG is treated as random, even though we know exactly what it is, because it was the output of some 'random process'.

Instead, use the Bayesian method. Let P({a,b}) be your prior for ROB's choice of numbers. Let x be the number TABI gives you. Compute P({a,b}|x) using Bayes' Theorem. From this you can calculate P(x=max(a,b)|x). Say that you have the highest number if this is over 1/2, and the lowest number otherwise. This is guaranteed to succeed more than 1/2 of the time, and to be optimal given your state of knowledge about ROB.
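
A sketch of the decision rule (the geometric prior here is just the illustration from my other comment):

```python
def guess_highest(x, prior):
    # The pair was either {x-1, x} or {x, x+1}; in both cases x itself is
    # shown with probability 1/2, so that factor cancels and
    # P(x = max(a,b) | x) = prior(x-1) / (prior(x-1) + prior(x)).
    return prior(x - 1) >= prior(x)

K = 1000
prior = lambda n: ((K - 1) / (K + 1)) ** abs(n) / K  # P(pair = {n, n+1})
print(guess_highest(7, prior))  # True: {6, 7} is a priori likelier than {7, 8}
```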

Comment by Oscar_Cunningham on Optimization Concepts in the Game of Life · 2021-10-30T11:40:45.589Z · LW · GW

Some related discussions: 1. https://www.conwaylife.com/forums/viewtopic.php?f=2&t=979 2. https://www.conwaylife.com/forums/viewtopic.php?f=7&t=2877 3. https://www.conwaylife.com/forums/viewtopic.php?p=86140#p86140

My own thoughts.

  • Patterns in GoL are generally not robust. Typically changing anything will cause the whole pattern to disintegrate in a catastrophic explosion and revert to the usual 'ash' of randomly placed small still lifes and oscillators along with some escaping gliders.

  • The pattern Eater 2 can eat gliders along 4 adjacent lanes.

  • The Highway Robber can eat gliders travelling along a lane right at the edge of the pattern, such that gliders on the next lane pass by unaffected. So one can use several staggered highway robbers to make a wall which eats any gliders coming at it from a given direction along multiple adjacent lanes. The wall will be very oblique and will fail if two gliders come in on the same lane too close together.

  • The block is robust to deleting any one of its live cells, but is not robust to placing single live cells next to it.

  • The maximum speed at which GoL patterns can propagate into empty space is 1 cell every 2 generations, measured in the L_1 norm. Spaceships which travel at this speed limit (such as the glider, XWSS, and Sir Robin) are therefore robust to things happening behind them, in the sense that nothing can catch up with them.

  • It's long been hypothesised that it should be possible to make a pattern which can eat any single incoming glider. My method for doing this would be to design a wall around the pattern which is designed to fall apart in a predictable way whenever it is hit by a glider. This collapse would then trigger construction machinery on the interior of the pattern that rebuilds the wall. The trick would be to make sure that the collapse of the wall didn't emit any escaping gliders, and that its debris didn't depend on where the glider hit. That way the construction machinery would have a reliable blank slate on which to rebuild.

  • If one did have a pattern with the above property that it could eat any glider that hit it, one could then arrange several copies of this pattern in a ring around any other pattern to make it safe from any single glider. Of course such a pattern would not be safe against other perturbations, and the recovery time would be so slow that it would not even be safe against two gliders a million generations apart.

  • It's an open problem whether there exists a pattern that recovers if any single cell is toggled.

  • I think the most promising approach is the recently discovered methods in this thread. These methods are designed to clear large areas of the random ash that Life tends to evolve into. One could use these methods to create a machine that clears the area around itself and then builds copies of itself into the cleared space. As this repeated it would spread copies of itself across the grid. The replicators could build walls of random ash between themselves and their children, so that if one of them explodes the explosion does not spread to all copies. If one of these copies hit something it couldn't deal with, it would explode (hopefully also destroying the obstruction) and then be replaced by a new child of the replicators behind it. Thus such a pattern would be truly robust. If one wanted the pattern to be robust and not spread, one could make every copy keep track of its coordinates relative to the first copy, and not replicate if it was outside a certain distance. I think this would produce what you desire: a bounded pattern that is robust to many of the contexts it could be placed in. However, there are many details still to be worked out. The main problem is that the above cleaning methods are not guaranteed to work on every arrangement of ash. So the question is whether they can clear a large enough area before they hit something that makes them explode. We only need each replicator to succeed often enough that their population growth rate is positive.

Comment by Oscar_Cunningham on The So-Called Heisenberg Uncertainty Principle · 2021-10-29T10:09:37.459Z · LW · GW

Yes, I'm a big fan of the Entropic Uncertainty Principle. One thing to note about it is that the definition of entropy only uses the measure space structure of the reals, whereas the definition of variance uses the metric on the reals as well. So Heisenberg's principle uses more structure to say less stuff. And it's not like the extra structure is merely redundant either. You can say useful stuff using the metric structure, like Hardy's Uncertainty Principle. So Heisenberg's version is taking useful information and then just throwing it away.

I'd almost support teaching the Entropic Uncertainty Principle instead of Heisenberg's to students first learning quantum theory. But unfortunately its proof is much harder. And students are generally more familiar with variance than entropy.

With regards to Eliezer's original point, the distributions |f|^2 and |g|^2 don't actually describe our uncertainty about anything. We have perfect knowledge of the wavefunction; there is no uncertainty. I suppose you could say that H(|f|^2) and H(|g|^2) quantify the uncertainty you would have if you were to measure the position and momentum (in Eliezer's point of view this would be indexical uncertainty about which world we were in), although you can't perform both of these measurements on the same particle.

Comment by Oscar_Cunningham on Optimization Concepts in the Game of Life · 2021-10-17T10:05:03.979Z · LW · GW

A good source for the technology available in the Game of Life is the draft of Nathaniel Johnston and Dave Greene's new book "Conway’s Game of Life: Mathematics and Construction".

Comment by Oscar_Cunningham on Is microCOVID "badness" superlinear? · 2021-08-13T09:22:08.950Z · LW · GW

If the probabilities of catching COVID on two occasions are x and y, then the probability of catching it at least once is 1 - (1 - x)(1 - y), which equals x + y - xy. So if x and y are large enough for xy to be significant, then splitting is better: even though catching it the second time will increase your viral load, it's not going to make it twice as bad as it already was.
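
With made-up numbers:

```python
x = y = 0.2                   # two exposures of 200,000 microCOVIDs each
print(x + y)                  # 0.4, the naive sum
print(1 - (1 - x) * (1 - y))  # 0.36: splitting saves you the xy = 0.04 overlap
```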

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-06-02T07:50:18.088Z · LW · GW

The link still works for me. Perhaps you must first become a member of that discord? Invite link: https://discord.gg/nZ9JV5Be (valid for 7 days)

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T20:55:56.026Z · LW · GW

The weird thing is that there are two metrics involved: information can propagate through a nonempty universe at 1 cell per generation in the sense of the l_infinity metric, but it can only propagate into empty space at 1/2 a cell per generation in the sense of the l_1 metric.

https://en.wikipedia.org/wiki/Norm_(mathematics)#p-norm

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T20:50:42.473Z · LW · GW

You're probably right, but I can think of the following points.

Its rule is more complicated than Life's, so it's worse as an example of emergent complexity from simple rules (which was Conway's original motivation).

It's also a harder setting in which to demonstrate self-replication. Any self-replicator in Critters would have to be fed with some food source.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T15:54:52.893Z · LW · GW

Yeah, although probably you'd want to include a 'buffer' at the edge of the region to protect the entity from gliders thrown out from the surroundings. A 1,000,000 cell thick border filled randomly with blocks at 0.1% density would do the job.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T15:51:36.264Z · LW · GW

https://en.wikipedia.org/wiki/Critters_(cellular_automaton)

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T12:32:38.359Z · LW · GW

This is very much a heuristic, but good enough in this case.

Suppose we want to know how many times we expect to see a pattern with n cells in a random field of area A. Ignoring edge effects, there are A different offsets at which the pattern could appear. Each of these has a 1/2^n chance of being the pattern. So we expect at least one copy of the pattern if n < log_2(A).

In this case the area is (10^60)^2, so we expect patterns of size up to 398.631. In other words, we expect the ash to contain any pattern you can fit in a 20 by 20 box.
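
As a quick check:

```python
from math import log2

def expected_copies(n, area):
    # ~area offsets, each matching a given n-cell pattern with probability 2^-n
    return area * 2.0**-n

area = (10**60) ** 2
print(log2(area))                  # ≈ 398.63
print(expected_copies(400, area))  # ≈ 0.39: 20-by-20 patterns are borderline
```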

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-14T08:07:14.620Z · LW · GW

The glider moves at c/4 diagonally, while the c/2 ships move horizontally. A c/2 ship moving right and then down will reach its destination at the same time the c/4 glider does. In fact, gliders travel at the empty space speed limit.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-13T13:16:28.646Z · LW · GW

Most glider guns in random ash will immediately be destroyed by the chaos they cause. Those that aren't will eventually have their glider stream absorbed by an eater, which neutralises them. But yes, such things could pose a nasty surprise for any AI trying to clean up the ash. When it removes the eater it will suddenly have a glider stream coming towards it! But this doesn't prove it's impossible to clear up the ash.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-13T08:04:32.634Z · LW · GW

Making a 'ship sensor' is tricky. If it collides with something unexpected it will create more chaos that you'll have to clear up.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-13T08:01:51.957Z · LW · GW

This sounds like you're treating the area as empty space, whereas the OP specifies that it's filled randomly outside the area where our AI starts.

Comment by Oscar_Cunningham on Agency in Conway’s Game of Life · 2021-05-13T08:00:42.223Z · LW · GW

My understanding was that we just want to succeed with high probability. The vast majority of configurations will not contain enemy AIs.