Posts

Do you vote based on what you think total karma should be? 2020-08-24T13:37:52.987Z · score: 43 (16 votes)
Existential Risk is a single category 2020-08-09T17:47:08.452Z · score: 24 (15 votes)
Inner Alignment: Explain like I'm 12 Edition 2020-08-01T15:24:33.799Z · score: 113 (37 votes)
Rafael Harth's Shortform 2020-07-22T12:58:12.316Z · score: 6 (1 votes)
The "AI Dungeons" Dragon Model is heavily path dependent (testing GPT-3 on ethics) 2020-07-21T12:14:32.824Z · score: 47 (17 votes)
UML IV: Linear Predictors 2020-07-08T19:06:05.269Z · score: 14 (3 votes)
How to evaluate (50%) predictions 2020-04-10T17:12:02.867Z · score: 118 (57 votes)
UML final 2020-03-08T20:43:58.897Z · score: 23 (5 votes)
UML XIII: Online Learning and Clustering 2020-03-01T18:32:03.584Z · score: 13 (3 votes)
What to make of Aubrey de Grey's prediction? 2020-02-28T19:25:18.027Z · score: 24 (9 votes)
UML XII: Dimensionality Reduction 2020-02-23T19:44:23.956Z · score: 9 (3 votes)
UML XI: Nearest Neighbor Schemes 2020-02-16T20:30:14.112Z · score: 15 (4 votes)
A Simple Introduction to Neural Networks 2020-02-09T22:02:38.940Z · score: 33 (10 votes)
UML IX: Kernels and Boosting 2020-02-02T21:51:25.114Z · score: 13 (3 votes)
UML VIII: Linear Predictors (2) 2020-01-26T20:09:28.305Z · score: 9 (3 votes)
UML VII: Meta-Learning 2020-01-19T18:23:09.689Z · score: 15 (4 votes)
UML VI: Stochastic Gradient Descent 2020-01-12T21:59:25.606Z · score: 13 (3 votes)
UML V: Convex Learning Problems 2020-01-05T19:47:44.265Z · score: 13 (3 votes)
Excitement vs childishness 2020-01-03T13:47:44.964Z · score: 18 (8 votes)
Understanding Machine Learning (III) 2019-12-25T18:55:55.715Z · score: 17 (5 votes)
Understanding Machine Learning (II) 2019-12-22T18:28:07.158Z · score: 25 (7 votes)
Understanding Machine Learning (I) 2019-12-20T18:22:53.505Z · score: 47 (9 votes)
Insights from the randomness/ignorance model are genuine 2019-11-13T16:18:55.544Z · score: 7 (2 votes)
The randomness/ignorance model solves many anthropic problems 2019-11-11T17:02:33.496Z · score: 10 (7 votes)
Reference Classes for Randomness 2019-11-09T14:41:04.157Z · score: 8 (4 votes)
Randomness vs. Ignorance 2019-11-07T18:51:55.706Z · score: 6 (4 votes)
We tend to forget complicated things 2019-10-20T20:05:28.325Z · score: 51 (19 votes)
Insights from Linear Algebra Done Right 2019-07-13T18:24:50.753Z · score: 53 (23 votes)
Insights from Munkres' Topology 2019-03-17T16:52:46.256Z · score: 40 (12 votes)
Signaling-based observations of (other) students 2018-05-27T18:12:07.066Z · score: 20 (5 votes)
A possible solution to the Fermi Paradox 2018-05-05T14:56:03.143Z · score: 10 (3 votes)
The master skill of matching map and territory 2018-03-27T12:06:53.377Z · score: 36 (11 votes)
Intuition should be applied at the lowest possible level 2018-02-27T22:58:42.000Z · score: 29 (10 votes)
Consider Reconsidering Pascal's Mugging 2018-01-03T00:03:32.358Z · score: 14 (4 votes)

Comments

Comment by sil-ver on What are good election betting opportunities? · 2020-10-29T09:17:39.293Z · score: 8 (2 votes) · LW · GW

You can use Catnip to bet on Biden in the election.

Points for:

  • It's doable from around the world
  • It's easy to use compared to other cryptocurrency methods
  • It has low fees and a pretty good price for Biden (between 62 and 63 percent)
  • It has no betting limits

Points against:

  • It's a tiny project without any previous track record, so you might not trust it
  • Very hard to use compared to PredictIt (you have to set up a cryptocurrency wallet and use a platform like Bitvavo to buy Ethereum)
  • Anything that requires you to do that is probably inefficient for betting small amounts (afaik the cryptocurrency stuff has constant-sized fees)

If it can be trusted, it is in theory a way to (a) make more money than on PredictIt and (b) pay lower fees. (PredictIt charges 10% on profits and 5% on withdrawals. It also has an $850 cap per market, although there are at least seven de-facto Biden-wins-the-election markets.)
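
For comparison, a rough sketch of how the PredictIt fees bite (my own simplification; it ignores the $850 cap and any deposit-side fees):

# Net return on a PredictIt "Yes" bet under the fee structure described
# above: 10% fee on profits, 5% fee on withdrawals. Simplified sketch.
def predictit_net(stake, price, wins=True):
    if not wins:
        return -stake
    shares = stake / price                     # each share pays $1 on a win
    profit = shares - stake                    # gross profit
    after_profit_fee = stake + 0.9 * profit    # 10% of profits taken
    return 0.95 * after_profit_fee - stake     # 5% withdrawal fee on the rest

print(predictit_net(100, 0.62))  # ~47.4, vs ~61.3 gross profit without fees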

Comment by sil-ver on [deleted post] 2020-10-25T18:34:46.370Z

I think you can test everything with just drafts. It looks the same.

Comment by sil-ver on Babble challenge: 50 ways of solving a problem in your life · 2020-10-22T13:41:09.764Z · score: 2 (1 votes) · LW · GW

I found it confusing as well (but decided it had to be the first since it wouldn't otherwise be called 'applied babble').

Comment by sil-ver on Learning is (Asymptotically) Computationally Inefficient, Choose Your Exponents Wisely · 2020-10-22T10:30:29.759Z · score: 6 (3 votes) · LW · GW

This is an interesting claim, but I'm not sure if it matches my own subjective experience. Though I also haven't dug deeply into math so maybe it's more true there - it seems to me like this could vary by field, where some fields are more "broad" while others are "deep".

My experience with math as an entire field is the opposite -- it gets easier to learn more the more I know. I think the big question is how you draw the box. If you draw it just around probability theory or something, the claim seems more plausible (though I still doubt it).

Comment by sil-ver on Babble challenge: 50 ways of solving a problem in your life · 2020-10-22T08:21:41.749Z · score: 19 (5 votes) · LW · GW

Problem: I often have trouble falling asleep.

  1. Experiment with different doses of melatonin and document results
  2. Experiment with taking melatonin at times other than 30 minutes before going to bed
  3. Establish a rule not to stare into a screen for some time before going to bed
  4. Reinstall the app that makes the screen look all red and sleepy
  5. Remove the blue lights from the exterior of my PC
  6. Practice some kind of meditation technique
  7. Actually try counting seriously
  8. Take regular walks through the city before falling asleep
  9. As above but use the bike
  10. Research for other supplements
  11. Visit a doctor
  12. Ask on LessWrong
  13. Ask everyone I know IRL
  14. Ask on r/sleep if that's a thing
  15. After completing the list, see if I can reverse any recommendation
  16. Get drunk before falling asleep
  17. Choose an earlier point to wake up every day to make myself more tired
  18. Track my yawns throughout the day and try to notice patterns
  19. Start a diary to find useful information
  20. Do a babble challenge every time I'm in bed
  21. Find new complicated music to listen to when in bed before trying to sleep
  22. Get lots of plants into my room even though I have no evidence for lack of oxygen
  23. Leave window wide open to make it really cold
  24. Always work out directly before going to bed
  25. Be more strict about the go-to-bed timing
  26. Find a difficult problem and implement a rule that I can't think about anything but that when trying to sleep
  27. Hypnotize myself into being tired
  28. Donate $10 to a terrible cause every time I fail to go to sleep or get up on time
  29. Same idea but use Beeminder instead
  30. Experiment with using different pillows or blankets
  31. Change sheets more often
  32. Try listening to podcasts instead of music to fall asleep
  33. Try watching soothing movies instead
  34. Force myself to read something whenever I go to sleep, until I'm tired
  35. Go back to a polyphasic sleep rhythm
  36. Read a book about sleep
  37. Do other research on sleep
  38. Find a list of all TED talks relating to sleep and listen to all of them in one sitting
  39. Screw all research and personally crack the code and find the one true solution
  40. Try the drumming technique I once heard about more seriously
  41. Pick a fictional setting and intently visualize the setting every time when trying to fall asleep
  42. Pick the worst book on sleep out there and try to invert every piece of advice
  43. Try not eating anything for a long time before going to bed
  44. Try eating a lot directly before going to bed, but have it be low-fat
  45. Try not drinking anything for several hours before going to bed
  46. See what the guy who wrote the Mini Habits book has to say about mini habits for sleep
  47. Abandon the regular schedule and stay awake until I'm dead tired every day like I used to do
  48. Change my nutrition in general, somehow
  49. Implement a rule that avoids important news up to 2 hours before going to bed
  50. Implement a rule that says I have to do one undivided activity in the 2 hours before going to bed

Comment by sil-ver on Rafael Harth's Shortform · 2020-10-21T15:54:05.426Z · score: 2 (1 votes) · LW · GW

That's probably right.

I have since learned that there are functions which do have all partial derivatives at a point but are not smooth. Wikipedia's example is with . And in this case, there is still a continuous function that maps each point to the value of the directional derivative, but it's , so different from the regular case.

So you can probably have all kinds of relationships between direction and {value of derivative in that direction}, but the class of smooth functions has a fixed relationship. It still feels surprising that 'most' functions we work with just happen to be smooth.

Comment by sil-ver on Bet On Biden · 2020-10-20T16:29:53.378Z · score: 2 (1 votes) · LW · GW

I went the crypto route a bit, but then decided to funnel it through an acquaintance and use PredictIt instead.

If I understand everything correctly and calculated all the fees properly (probably haven't), then the overall deal, from the point where money leaves my PayPal to the point where hopefully more comes back, is analogous to betting on Biden as if he had a 73.6% price on the markets. Which I guess is still pretty good, but man do I wish that there was just one large market with minimal fees, instead of fees on transferring, currency conversion, profit on PredictIt, and withdrawal from PredictIt. So many fees :x
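
To sketch how the layers compound (the fee values below are illustrative placeholders, not my actual numbers):

# How stacked fees translate into an effective market price (sketch).
# The individual fee values are placeholders, not my real transaction.
def effective_price(price, transfer, conversion, profit_fee, withdrawal):
    cost = 1 / ((1 - transfer) * (1 - conversion))  # $ spent per $1 on the market
    # $ returned per market-$ on a win: shares pay out 1, minus both fees
    payout = (price + (1 - price) * (1 - profit_fee)) * (1 - withdrawal) / price
    return cost / payout  # break-even probability = the price effectively paid

print(effective_price(0.62, 0.01, 0.02, 0.10, 0.05))  # ~0.70 with these numbers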

Also, it takes a few more days. I hope the markets don't shift in Biden's favor in the meantime (then the deal obviously gets worse). It's amusing how I now want them to move in the opposite direction.

And all that is assuming a 0% risk that the person just screws me over and keeps the money for themselves, which is probably roughly accurate.

Comment by sil-ver on Iteration Fixed Point Exercises · 2020-10-20T11:35:23.944Z · score: 2 (1 votes) · LW · GW

(The second half made me realize how much more comfortable I am with abstract exercises than with regular Analysis à la Ex4.)

Ex6

If , then all are comparable to each other: we have

and so on. Furthermore, if is such that , then for all as well (verify by looking at the contrapositive). Consequently (set ), if were not a fixed point of , then , and hence , which means would have elements.

If , we get and so on, leading to the same argument.

Ex7

Wlog, assume . Set . Given any non-limit ordinal , we find a predecessor and set . Given any limit ordinal , we set .

Suppose this doesn't define for all ordinals . Then, there is some smallest ordinal such that is not defined. This immediately yields a contradiction (regardless of whether is a limit ordinal or not).

We want this construction to have the properties that and that . Thus, let and be an ordinal. If has a predecessor, the checks for both properties are easy. If not, then and . Then, for the first property, note that the upper-bound of a one-element set is just the element itself. For the second, note that each element in the first set is smaller than some element in the second set, so is an upper-bound for the first set, which implies that since is the least upper-bound.

Now, given an ordinal , our construction defines a function . If , then the chain doesn't become stationary at any earlier point either (to verify, take a smallest such that [ but the chain is stationary for smaller ordinals] and derive a contradiction), and hence is injective, proving that . (This is the generalized version of the argument from Ex6.)

Ex8

Let be monotonic and let be the set of fixed points of . Then inherits the partial order from ; what needs doing is to verify the least upper-bound property. So let . Then, has a least upper-bound in .

Let be some ordinal with . From the previous exercise, we know that . Choose the smallest such that . Then, and , hence is an upper-bound of .

It remains to show that it is the least upper-bound. Thus, let be another upper-bound of . Then, in , hence (apply Ex7).

Ex9

A least upper-bound is obtained via on all sets, and the greatest lower-bound via . (Easy checks.) Given any function , we trivially have ; injectivity is not needed.

We define

  • (i.e., greatest lower bound of the 's)

We need to verify that , then and are the desired sets.

  • "": Let . Then, there exists some smallest such that . (The case is possible and included.) We have , hence . Then, , hence .
  • "": Let . Then, there exists some smallest such that . In this case, we must have , so we know that . It follows that was lost at this step, i.e.,

Ex10

Let be the set constructed in Ex9. Then, we can define a bijection via
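
Ex6 through Ex9 all lean on the same iterate-until-stationary move. A minimal executable sketch of the finite case, where no ordinals are needed (the reachability example is made up):

# The recurring move in these exercises, in its finite form: iterate a
# monotone map on a powerset lattice until the chain becomes stationary.
def least_fixed_point(f, bottom=frozenset()):
    s = bottom
    while f(s) != s:   # bottom <= f(bottom) <= f(f(bottom)) <= ... stabilizes
        s = f(s)
    return s

# Example monotone map (mine): one reachability step in a small graph.
edges = {1: {2}, 2: {3}, 3: {3}, 4: {1}}
step = lambda s: frozenset(s) | {1} | {w for v in s for w in edges[v]}
print(least_fixed_point(step))  # frozenset({1, 2, 3})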

Comment by sil-ver on Rafael Harth's Shortform · 2020-10-20T08:17:08.870Z · score: 10 (2 votes) · LW · GW

Yesterday, I spent some time thinking about how, if you have a function $f : \mathbb{R}^2 \to \mathbb{R}$ and some point $p$, the value of the directional derivative from $p$ could change as a function of the angle. I.e., what does the function $g : [0, 2\pi) \to \mathbb{R}$ look like? I thought that any relationship was probably possible as long as it has the property that $g(\alpha) = -g(\alpha + \pi)$. (The values of the derivative in two opposite directions need to be negatives of each other.)

Anyone reading this is hopefully better at Analysis than I am and realized that there is, in fact, no freedom at all because each directional derivative is entirely determined by the gradient through the equation $\nabla_v f(p) = \langle \nabla f(p), v \rangle$ (where $\|v\| = 1$). This means that $g$ has to be the cosine function scaled by $\|\nabla f(p)\|$; it cannot be anything else.

I clearly failed to internalize what this equation means when I first heard it because I found it super surprising that the gradient determines the value of every directional derivative. Like, really? It's impossible to have more than exactly two directions with equally large derivatives unless the function is constant? It's impossible to turn 90 degrees from the direction of the gradient and have anything but derivative 0 in that direction? I'm not asking that $g$ be discontinuous, only that it not be precisely $\cos$. But alas.
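
To convince myself numerically (the sample function is arbitrary):

# Numerical check that the directional derivative is always <grad f(p), v>,
# i.e., ||grad f(p)|| * cos(angle to the gradient). Sample f is arbitrary.
import numpy as np

f = lambda x, y: x**2 * y + np.sin(y)
p, h = np.array([1.0, 2.0]), 1e-6

grad = np.array([(f(p[0] + h, p[1]) - f(p[0] - h, p[1])) / (2 * h),
                 (f(p[0], p[1] + h) - f(p[0], p[1] - h)) / (2 * h)])

for angle in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    v = np.array([np.cos(angle), np.sin(angle)])
    num = (f(*(p + h * v)) - f(*(p - h * v))) / (2 * h)  # directional derivative
    print(f"{angle:4.2f}  {num: .4f}  {np.dot(grad, v): .4f}")  # columns agree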

This also made me realize that $\cos$, if viewed as a function of the circle, is just the dot product with the standard vector, i.e.,

$$\cos(p) = \langle p, e_1 \rangle,$$

or even just $\cos(p) = p_1$. Similarly, $\sin(p) = p_2$.

I know what you're thinking; you need $\cos$ and $\sin$ to map $\mathbb{R}$ to $S^1$ in the first place. But the circle seems like a good deal more fundamental than those two functions. Wouldn't it make more sense to introduce trigonometry in terms of 'how do we wrap $\mathbb{R}$ around $S^1$?'. The function that does this is $t \mapsto (\cos t, \sin t)$, and then you can study the properties that this function needs to have and eventually call the coordinates $\cos$ and $\sin$. This feels like a way better motivation than putting a right triangle onto the unit circle for some reason, which is how I always see the topic introduced (and how I've introduced it myself).

Looking further at the analogy with the gradient, this also suggests that there is a natural extension of $\cos$ to $S^n$ for all $n$. I.e., if we look at some point $p \in \mathbb{R}^{n+1}$, we can again ask about the function that maps each angle to the value of the directional derivative in that direction, and if we associate these angles with points of $S^n$, then this yields the function $v \mapsto \langle \nabla f(p), v \rangle$, which is again just the dot product with the gradient, or (choosing coordinates so that $e_1$ points along the gradient) the projection onto the first coordinate scaled by $\|\nabla f(p)\|$. This can then be considered a higher-dimensional $\cos$ function.

There's also the 0-d case where $S^0 = \{-1, 1\}$. This describes how the direction changes the derivative for a function $f : \mathbb{R} \to \mathbb{R}$.

Comment by sil-ver on Iteration Fixed Point Exercises · 2020-10-19T18:08:58.365Z · score: 2 (1 votes) · LW · GW

Ex5 (this is super ugly but I don't think it's worth polishing and it does work. All important ideas are in the first third of the proof, the rest just inelegantly resolves the details.)

We define our metric space as where is the set of probability distributions, and . Let and let , then can be computed as

where the last step holds because multiplying a vector with the state-transition matrix leaves the sum of entries unchanged. (Reasonably easy to verify using that each column of sums up to 1.)

If , then has at least one negative entry and the inequality is strict. In that case, let and . In particular, we have . Note that, when two numbers have different signs, then and thus . Therefore, the amount that gets canceled out is at least

Let be the smallest entry in , then we can lower-bound the above as

Wlog, let . Let be the sum of all positive entries of , then , so the term we want to lower-bound is . The sum of the negative entries is , which means that the one with the largest norm among them has norm at least . Thus, the relative decrease is at least

Then, , hence . This proves that is a contraction; apply Banach's theorem.
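
A quick numerical illustration of the contraction (the matrix is a randomly generated column-stochastic example, not from the exercise):

# A column-stochastic matrix with strictly positive entries shrinks the
# L1 distance between any two probability distributions it is applied to.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 0.1   # strictly positive entries
A /= A.sum(axis=0)             # normalize each column to sum to 1

p = np.array([1.0, 0.0, 0.0, 0.0])
q = np.array([0.0, 0.0, 0.0, 1.0])
for _ in range(6):
    print(np.abs(p - q).sum())  # L1 distance decreases geometrically
    p, q = A @ p, A @ q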

Comment by sil-ver on Iteration Fixed Point Exercises · 2020-10-18T16:23:53.549Z · score: 4 (2 votes) · LW · GW

Ex1

Let , let , and let . Then . For each , we find an such that , then . This proves that is a Cauchy-sequence, which (because is complete) means it converges to some point .

Furthermore, given a position , we have

.

Ex2

Given any sequence in , it converges to some point , and it's easy to see that is a fixed point of . Let be a fixed point of . Then, , hence . (Otherwise, this contradicts the fact that is a contraction.)

Ex3

Choose as . Then is complete because, given any Cauchy sequence , it's easy to prove that there is an such that all but finitely many are in the subspace . However, the map given by has no fixed points since it moves each point by at least 1. (And it's straightforward to verify that is a weak contraction.)
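
For a concrete instance of Ex1/Ex2 (my own toy example):

# Iterating a contraction converges to its unique fixed point. cos restricted
# to [-1, 1] is a contraction (Lipschitz constant sin(1) < 1), and the
# iterates land in that interval after one step.
import math

x = 0.0
for _ in range(60):
    x = math.cos(x)   # the sequence x, f(x), f(f(x)), ... is Cauchy
print(x)              # 0.7390851..., the unique solution of cos(x) = x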

Comment by sil-ver on Bet On Biden · 2020-10-18T12:12:00.629Z · score: 5 (4 votes) · LW · GW

Sadly, while you can place sport bets from Germany via Betfair, you can't access political bets.

Comment by sil-ver on Bet On Biden · 2020-10-18T07:50:34.671Z · score: 6 (4 votes) · LW · GW

Is there anything a person living in Germany can do other than place a bet via someone else?

Comment by sil-ver on Bet On Biden · 2020-10-18T07:49:39.601Z · score: 5 (3 votes) · LW · GW

I've been told that the election is considered a massive and rare opportunity among professional gamblers, which that person thought was the 'smart money'.

Comment by sil-ver on Diagonalization Fixed Point Exercises · 2020-10-17T18:24:33.116Z · score: 2 (1 votes) · LW · GW

I might be wrong, but I believe this is not correct. The diagonal lemma lets you construct a sentence that is logically equivalent to something including its own Gödel numeral. This is different from having its own Gödel numeral be part of the syntactic definition.

In particular, the former isn't recursive. It defines one sentence and then, once that sentence is defined, proves something about a second sentence which includes the Gödel numeral of the first. But what seed attempted (unless I misunderstood it) was to use the Gödel numeral in the syntactic definition for , which doesn't make sense because is not defined until is.

Comment by sil-ver on Diagonalization Fixed Point Exercises · 2020-10-17T16:25:55.733Z · score: 3 (2 votes) · LW · GW

Don't know if this is still relevant, but on Ex9

you definitely can't define this way. Your definition includes the Gödel numeral for , which makes the definition depend on itself.

Comment by sil-ver on Diagonalization Fixed Point Exercises · 2020-10-16T16:59:43.733Z · score: 2 (1 votes) · LW · GW

Ex8

This was reasonably straightforward given the quine.

def apply(f):
    l = chr(123) # opening curly bracket
    r = chr(125) # closing curly bracket
    q = chr(39) # single quotation mark
    n = chr(10) # linebreak
    z = [n+"    ", l+f"z[i]"+r+q+n+"        + f"+q]
    x = [n+"        ", l+f"x[i]"+r]
    e = [q, l+"e[i]"+r+q+")"+n+"    print(f(sourcecode))"]
    sourcecode = ""
    for i in range(0,2):
        sourcecode += (f'def apply(f):{z[i]}'
        + f'l = chr(123) # opening curly bracket{z[i]}'
        + f'r = chr(125) # closing curly bracket{z[i]}'
        + f'q = chr(39) # single quotation mark{z[i]}'
        + f'n = chr(10) # linebreak{z[i]}'
        + f'z = [n+"    ", l+f"z[i]"+r+q+n+"        + f"+q]{z[i]}'
        + f'x = [n+"        ", l+f"x[i]"+r]{z[i]}'
        + f'e = [q, l+"e[i]"+r+q+")"+n+"    print(f(sourcecode))"]{z[i]}'
        + f'sourcecode = ""{z[i]}'
        + f'for i in range(0,2):{x[i]}sourcecode += (f{e[i]}')
    print(f(sourcecode))

Comment by sil-ver on Diagonalization Fixed Point Exercises · 2020-10-16T09:31:02.277Z · score: 2 (1 votes) · LW · GW

Last time, I got to Ex7. This time, I decided to do them all again before continuing.

Comment on Ex1-6

It gets easy if you just write down what property you want to have in first-order logic.

For example, for Ex1 you want a set that does the following:

now if we consider a set as a function that takes an element and returns true or false, this becomes

How do you get such a ? You can just choose , then

and this is done by defining , i.e., which is precisely the solution. This almost immediately answers Ex 1,2,4 and it mostly answers Ex6.
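
The same diagonal choice can be run directly in Python; this is just the standard Y combinator, included as an illustration rather than as part of the exercises:

# The choose-f-as-the-diagonal trick, executable: self-application x(x)
# hands any one-level-up map f its own fixed point (a Python Y combinator).
Y = lambda f: (lambda x: f(lambda *a: x(x)(*a)))(lambda x: f(lambda *a: x(x)(*a)))

fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120; rec behaves as fact without ever being named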

Another quine for Ex7, this time in Python:

l = chr(123) # opening curly bracket
r = chr(125) # closing curly bracket
q = chr(39) # single quotation mark
t = chr(9) # tab
n = chr(10) # linebreak
z = [n, l+f"z[i]"+r+q+n+t+"+ f"+q]
x = [n+t, l+f"x[i]"+r]
e = [q, l+"e[i]"+r+q+", end="+q+q+")"]
for i in range(0,2):
        print(f'l = chr(123) # opening curly bracket{z[i]}'
        + f'r = chr(125) # closing curly bracket{z[i]}'
        + f'q = chr(39) # single quotation mark{z[i]}'
        + f't = chr(9) # tab{z[i]}'
        + f'n = chr(10) # linebreak{z[i]}'
        + f'z = [n, l+f"z[i]"+r+q+n+t+"+ f"+q]{z[i]}'
        + f'x = [n+t, l+f"x[i]"+r]{z[i]}'
        + f'e = [q, l+"e[i]"+r+q+", end="+q+q+")"]{z[i]}'
        + f'for i in range(0,2):{x[i]}print(f{e[i]}', end='')

Comment by sil-ver on A full explanation to Newcomb's paradox. · 2020-10-12T18:08:12.766Z · score: 6 (4 votes) · LW · GW

(All of this is just based on my understanding, no guarantees.)

MIRI is studying decision theory in the context of embedded agency. Embedded Agency is all about what happens if you stop having a clear boundary between the agent and the environment (and you instead have the agent as part of the environment, hence embedded). Decision problems where the outcome depends on your behavior in counterfactual situations are just one of several symptoms that come from being an embedded agent.

In this context, we care about things like "if an agent is made of parts, how can she ensure her parts are aligned" or "if an agent creates copies of herself, how can we make sure nothing goes wrong" or "if the agent creates a successor agent, how can we make sure the successor agent does what the original agent wants".

I say this because (3) and (4) suddenly sound a lot more plausible when you're talking about something like an embedded agent playing a Newcomb-like game (or a counterfactual-mugging-type game or a prisoner's-dilemma-type game) with a copy of itself.

Also, I believe Timeless Decision Theory is outdated. The important decision theories are Updateless Decision Theory and Functional Decision Theory. Afaik, UDT is both better and better formalized than TDT.

Comment by sil-ver on Topological Fixed Point Exercises · 2020-10-12T17:45:42.827Z · score: 2 (1 votes) · LW · GW

Ex12

I didn't use the hint, so my solution looks different. I also don't get how the intended solution works -- you can't choose the cubes in small enough to make sure that is constant on each cube , so may not be convex. This seems to kill the straight-line solution, and I didn't see a way to salvage it.

Here's what I did in one paragraph. Divide both and into cubes. For any horizontal edge in , make sure hits the centers of all cubes that touches on points within distance of (where is at least the diameter of a cube in ), while moving around only within those cubes. Extend to without wandering off too far, and voilà.

Proof roadmap:

  1. Embed and into unit cubes
  2. Cut the unit cubes into small cubes
  3. Prove a bunch of helpful Lemmas
  4. Define on some subset of the unit cube containing
  5. Argue that this approximates on

(1) Since and are compact and hence bounded, we can scale them down such that we can consider them subspaces of the unit cubes, i.e., and , where we choose as small as possible. (This is the abbreviated version of working with embedding functions.)

Let .

(2) By cutting each interval into pieces, i.e., , we obtain small cubes in of the form

where . Enumerate these cubes as . An analogous construction for yields .

Given any set , we define the operator to 'expand' to all the cubes that it touches, i.e.,

(3) We do most work via paths. This requires a bunch of Lemmas.

Lemma 1. For any connected set , the set is connected.

Proof. Let be a separation into two closed sets. Suppose first there is a point not entirely contained in or . Then, and is a separation of , contradicting the fact that is convex (and hence (path)-connected.) Thus, each either lies entirely in or entirely in .

Since is closed, so is and (where is the graph of ), which is simply . (The preimage function is well-defined for and due to the result from the previous paragraph, and the projection is closed because is compact.) Analogously, is closed. Then, is a separation of , implying that (because is connected), one of them is the empty set. Since , this implies that or .

This is only needed to prove Lemma 2.

Lemma 2. For any connected set , the set is path-connected.

Proof. The set consists of cubes in . Consider the graph where all cubes in are nodes, and there is an edge between two nodes iff the cubes share at least a point. If this graph were disconnected, then there would be a minimal distance between the sets of cubes corresponding to two disconnected parts of the graph. This yields a separation of , which is also a separation of , contradicting the previous lemma. (The distance can, in fact, be lower-bounded, but it suffices to use the fact that two closed disjoint sets in a metric space have non-zero distance.) Thus, is connected. This allows us to construct a path between the center points of two arbitrary cubes in (since there is a corresponding path in and the straight-line connection between the centers of two cubes that share at least one vertex yields a continuous path). Now, given two points and two cubes such that and , we can construct a path from to via

Lemma 3. Given and any path-connected space , all functions from to are homotopic.

Proof. Let . Define a homotopy by the rule . Then, is , and is a constant map, proving that is null-homotopic. Since being homotopic is an equivalence relation (and any two constant maps are trivially homotopic in a path connected space), it follows that all maps are homotopic to each other.

(4) Let be a parameter that depends on . We will specify how is chosen in the last part of the proof. It will have the properties that it's at least as large as the diameter of a cube and that it converges to as grows.

Let be connected. We define two operators on . The first is the set of points in that wish for to hit on points near it. We set

where . The second is a sufficiently small subspace of that is guaranteed to contain all points that touches on . We set . Note that this set is identical to , which makes it path-connected by Lemma 2.

We now want to define a partial function . We begin by defining it on vertices, then generalize it to specific edges, then specific faces, and so on, until we define it on all cubes that intersect .

A vertex is defined to be a point of the form

for some . The set may be empty if is too far outside , in which case we leave undefined on . If it is not empty, choose arbitrarily and set .

We now turn to edges. However, we only consider 'horizontal' edges, that is, subspaces of the form

for some . Let and be the two vertices of . If is undefined on either, we leave it undefined on . If not, we define it in the following. Note that this is the step where we guarantee that hits all points in the target set.

We know from Lemma 3 that there is a path from to . Since is homeomorphic to , it's easy to transform into a 'path' . But we can do even better and construct a path that starts at , ends at , and hits all points in on the way. (All trivial since is path-connected.) Now we simply set .

We next consider all 'horizontal-vertical' faces, that is, all sets of the form

for some . Let and be the two horizontal edges of . If is undefined on either, we leave it undefined on . If not, we have two paths and which implement on and , respectively. Using Lemma 3, we obtain a homotopy that continuously deforms into . Since our face is homeomorphic to , it's easy to transform into a function . Now we simply set .

Now we do the same for 3-dimensional subspaces of the form

where has been defined on the two horizontal-vertical faces, and so on. This way, every -dimensional subspace of this kind contains precisely two -dimensional subspaces of this kind, and if we have defined on both, we can apply a construction analogous to the above to define on the -dimensional subspace. Eventually, we define on -dimensional subspaces, which are precisely our cubes. (In the case of -dimensional subspaces, the definition above doesn't pose any restriction; it coincides with the definition of a cube ). Importantly, this defines on any cube that intersects . (This is so because any vertex of this cube is within of , which means that it has non-empty area. This implies that we have defined on any edge, face, and so on, of .) Thus, we end up having defined on some subspace of that includes all cubes that intersect .

The advantage of this construction is that, for any , we know that is contained in , which is the same as the union of all cubes in that touches on points near . If we had used the homomorphic extension of instead (i.e., connecting via straight lines), we could merely guarantee that is contained in the convex hull of certain points in , which may be much larger.

(5) Having defined on a subset of that contains , we take a projection , and define by the rule . It remains to show that our construction is such that the Hausdorff distance between and converges to 0 as , then the distance between and converges to as well. (To see this, note that, if is within of (which lies in ), then can move it by at most , which means that is within of .)

To do this, we now explain how is chosen, and then argue that the Hausdorff distance can be upper-bounded by some constant times .

The issue we have to deal with is that, by default, a point on the boundary of may not have any edge close to it that is contained in . (In fact, it may not even have an edge close to it that intersects .) Thus, we would like to be so large that any -ball around a point in must contain a -ball entirely contained in , where is larger than the diameter of a cube. In that case, any is within of a cube entirely contained in and thus also within of an edge entirely contained in .

It remains to show that we can choose such that it (a) has this property and (b) converges to as a function of . To show this, we consider fixed, and show that eventually grows large enough for that to suffice.

For every point , there exists some such that contains some -ball entirely contained in . Define a function that maps each point to the supremum of such 's. Then, is a continuous function from a compact set to , which means it takes on a minimum value. It now suffices to choose large enough that the diameter of a cube is smaller than this minimum. (To verify that is continuous, take a sequence of points in , assuming that doesn't converge, and derive a contradiction.)

With this out of the way, we return to the proof that is the Hausdorff limit of the . This consists of showing two parts:

  1. for any point on (the graph of ), there exists a close point on (i.e., comes sufficiently close to all points on ); and
  2. for any point on , there exists a close point on (i.e., has no points too far away from ).

Both are now easy:

  1. Let , and let be a cube that contains . Let be an edge with distance at most to . By construction, we have , which means that there is a point such that . Since and , the distance between and converges to as .
  2. Let , and let be a cube containing . Since , it follows that , which means there exists a point such that , which is to say, such that there is a for which . Now and , and thus the distance between and again converges to as .

Ex13

Follows from Ex11 and Ex12 :-)

Comment by sil-ver on How much to worry about the US election unrest? · 2020-10-12T14:22:39.472Z · score: 7 (4 votes) · LW · GW

I think there are two separate questions here: one, whether American society could substantially break down, and two, whether Trump could remain in power despite losing the election. I don't have an answer to either, but

  • Sam Harris, who is the only thinker I value that I've heard talk about this, thinks the risk for the first is serious.
  • Nate Silver et al.'s take on Trump stealing the election was basically (1) he'll probably try; and (2) whether or not it can work depends mostly on how close it is. It would probably have to be within 0.5%, so if Biden's lead remains roughly this large until election day, then (according to Nate) there is most likely nothing Trump can do. (I think that was discussed on this podcast.)

I would be curious what sources other people think are relevant here.

Comment by sil-ver on Topological Fixed Point Exercises · 2020-10-08T09:43:05.257Z · score: 2 (1 votes) · LW · GW

Ex11

(I assume the graph of is compact; otherwise, the Hausdorff distance isn't defined, and there seem to be counter-examples to the claim of the exercise.)

Since each is a continuous function from to itself, it has a fixed point by Ex10. Then is a sequence of points in a compact space and thus has a limit point .

Let be the graph of . Assume for a contradiction that . Then, . Since is a compact subspace of the Hausdorff space , it is also closed. Let be an open set around disjoint from . Then, we find an such that . (This uses that is convex, otherwise the ball would exist in but could fall out of and hence .)

Choose infinite such that is entirely contained in . Note that by construction. Write for the Hausdorff distance, then . This contradicts the fact that , hence .

Comment by sil-ver on Rationality and Climate Change · 2020-10-07T18:21:06.654Z · score: 2 (1 votes) · LW · GW

So, I do agree that if climate change contributes to existential risk indirectly in that sort of way (but we're still talking about the same kind of climate change as we might worry about the direct effects of) then yes, that should go in the same accounting bucket as the direct effects. Yay, agreement.

(And I think we also agree that cases where other things such as nuclear war produce other kinds of climate change should not go in the same accounting bucket, even though in some sense they involve climate change.)

Yes on both.

This conversation is sort of interesting on a meta level. Turns out there were two ways my example was confusing, and neither of them occurred to me when I wrote it. Apologies for that.

I'm not sure if there's a lesson here. Maybe something like 'the difficulty of communicating something isn't strictly tied to how simple the point seems to you' (because this was kind of the issue; I thought what I was saying was simple hence easy to understand hence there was no need to think much about what examples to use). Or maybe just always think for a minimum amount of time since one tends to underestimate the difficulty of conversation in general. In retrospect, it sure seems stupid to use nuclear winter as an example for a second-order effect of climate change, when the fact that winter and climate are connected is totally coincidental.

It's somewhat consoling that we at least managed to resolve one misunderstanding per back-and-forth message pair.

Comment by sil-ver on Topological Fixed Point Exercises · 2020-10-07T17:38:54.289Z · score: 2 (1 votes) · LW · GW

Ex10

The entire work here is to show that a continuous function from the standard simplex to itself has a fixed point. If that's done, given compact and convex and a continuous function on , we can scale to be a subset of , take the continuous projection , and gives us a function from to itself. Now, a fixed point of is also a fixed point of .

For that, the intended way is presumably to mirror the step from Ex4 to Ex5. The problem is that the coloring of the disk isn't drawn in a way that generalizes well. The nicer way to color it would be like this. One can see that these colors still work (i.e., a trichromatic triangle must contain the origin), and they're subsets of the previous colors, so the conditions of sides not touching colors still hold. This way of coloring is analogous to what we do in -dimension space.

Mathematically, one can describe these areas as follows:

  • The areas of the kind
  • The area where all coordinates are non-negative,

Given the -dimensional standard simplex and a continuous function , the function given by does have the property that each face of the simplex has one color it can't map into...

  • The first faces can be characterized by the condition that . Then for a point in this face, we have , hence .
  • The final face can be characterized by the condition that . Then for a point in this face, we have that , hence . (Unless , in which case we're also happy.)

We still have to show that the image points of the vertices of the simplex actually have all colors. This is not necessarily true, but as above we can show that either it is true or one of them maps directly into the origin.

The -th vertex is the point with and . We have for all , and . Thus, either or .

And for the origin, we have , so

Now, either one of the first vertices maps directly into the origin, or we can construct a simplex with all 'colors' for the map in . According to Ex9, this simplex has a -chromatic component. It remains to show that the origin is always contained in the span of such points (tedious but pretty clear from the 2-d case), then we can again construct a sequence of points that converges toward the origin, by making the simplex progressively more granular. This gives us a point such that and hence .
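
As a sanity check, the 'progressively more granular' step can be run in one dimension, where cells are intervals and the coloring is just the sign of $f(x) - x$ (toy example, mine):

# 1-d analogue of the argument: color points of [0, 1] by the sign of
# f(x) - x; a bichromatic cell survives every refinement, and shrinking
# the cells yields a sequence converging to a fixed point.
f = lambda x: 1 - x**2 / 2   # arbitrary continuous map from [0, 1] to itself

lo, hi = 0.0, 1.0            # f(lo) >= lo and f(hi) <= hi: the endpoint "colors"
for _ in range(50):          # refine the subdivision
    mid = (lo + hi) / 2
    if f(mid) >= mid:
        lo = mid             # keep the bichromatic subinterval
    else:
        hi = mid
print(lo)                    # ~0.7320508 = sqrt(3) - 1, where f(x) = x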

Comment by sil-ver on Rationality and Climate Change · 2020-10-07T10:12:43.967Z · score: 3 (2 votes) · LW · GW

But now that I understand better what scenario you're proposing it seems like a really weird scenario to propose, because I can't imagine what sort of real-world "solution" to climate change would have that property. Maybe the discovery of some sort of weather magic that enables us to adjust weather and climate arbitrarily would do it

I think the story of how mitigating climate change reduces risk of first-order effects from nuclear war is not that it helps survive nuclear winter, but that climate change leads to things like refugee crises, which in turn lead to worse international relations and higher chance of nuclear weapons being used, and hence mitigating c/c leads to lower chances of nuclear winter occurring.

The 1%/9% numbers were meant to illustrate the principle and not to be realistic, but if you told me something like, there's a 0.5% contribution to x-risk from c/c via first-order effects, and there's a total of 5% contribution to x-risk from c/c via increased risk from AI, bio-terrorism, and nuclear winter (all of which plausibly suffer from political instabilities), that doesn't sound obviously unreasonable to me.

The concrete claims I'm defending are that

  • insofar as they exist, n-th-order contributions to x-risk matter roughly as much as first-order contributions; and
  • it's not obvious that they don't exist or are not negligible.

I think those are all you need to see that the single-category framing is the correct one.

Comment by sil-ver on Rationality and Climate Change · 2020-10-06T19:33:56.832Z · score: 3 (2 votes) · LW · GW

If 10% of current existential risk is because of the possibility that we have a massive nuclear war and the resulting firestorms fill the atmosphere with particulates that lower temperatures and the remaining humans freeze to death, then the things we might consider doing as a result include campaigning for nuclear disarmament or rearmament, (whichever we think will do more to reduce the likelihood of large-scale nuclear war), finding ways to reduce international tensions generally, researching weapons that are even more directly destructive and have fewer side effects, investigating ways of getting particulates out of the atmosphere after a nuclear war, and so forth.

In the hypothetical, 9% was the contribution of climate change to nuclear winter, not the total probability of nuclear winter. The total probability for nuclear winter could be 25%.

In that case, if we 'solved' climate change, the probability for nuclear winter would decrease from 25% to 16% (and the probability for first-order extinction from climate change would decrease from 1% to 0%). The total decrease in existential risk would be 10%.

I will grant you that it's not irrelevant where the first-order effect comes from -- if we somehow solved nuclear war entirely, this would make it much less urgent to solve climate change, since now the possible gain is only 1% and not 10%. But it still seems obvious to me that the number you care about when discussing climate change is 10% because as long as we don't magically solve nuclear war, that's the total increase to the one event we care about (i.e., the single category of existential risk).

Comment by sil-ver on Rationality and Climate Change · 2020-10-06T19:25:30.306Z · score: 4 (2 votes) · LW · GW

The analogy makes sense to me, and I can both see how being Bob on alignment (and many other areas) is a failure mode, and how being Alice in some cases is failure mode.

But I don't think it applies to what I said.

there are some things that are really x-risks and directly cause human extinction, and there are other things like bad governance structures or cancel culture or climate change that are pretty bad indeed and generally make society much worse off

I agree, but I was postulating that climate change increases literal extinction (where everyone dies) by 10%.

The difference between x-risk and catastrophic risk (what I think you're talking about) is not the same as the difference between first- and n-th-order existential risk. As far as I'm concerned, the former is very large because of future generations, but the latter is zero. I don't care at all if climate change kills everyone directly or via nuclear war, as long as everyone is dead.

Or was your point just that the two could be conflated?

Comment by sil-ver on Rationality and Climate Change · 2020-10-06T08:57:54.850Z · score: 5 (3 votes) · LW · GW

I'm not sure I follow your point about "is" versus "contributes to". I don't think I agree that it doesn't matter whether a particular entity is itself capable of ending civilization. Nanotech, AI, synthetic biology, each have the ability to be powerful enough to end civilization before breakfast. Climate change seems like a major catastrophe but not on the same level, and so while it's still relevant to model over multiple decades, it's not primary in the way the others are.

Suppose it is, in fact, the case that climate change contributes 10% to existential risk. (Defined by, if we performed a surgery on the world state right now that found a perfect solution to c/c, existential risk would go down by that much.) Suppose further that only one percentage point of that goes into scenarios where snowball effects lead to Earth becoming so hot that society grinds to a halt, and nine percentage points into scenarios where international tensions lead to an all-out nuclear war and subsequent winter that ends all of humanity. Would you then treat "x-risk by climate change" as 1% or 10%? My point is that it should clearly be 10%, and this answer falls out of the framing I suggest. (Whereas the 'x-risk by or from climate change' phrasing makes it kind of unclear.)
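
The same accounting in code form (the numbers are the made-up ones from above):

# Hypothetical accounting: climate change contributes 1% directly and 9%
# via nuclear-war scenarios it makes more likely. All numbers made up.
risk = {"climate_direct": 0.01, "nuclear_winter": 0.25}  # 0.09 of the latter
                                                         # attributable to c/c
solved = {"climate_direct": 0.00, "nuclear_winter": 0.25 - 0.09}

delta = sum(risk.values()) - sum(solved.values())
print(round(delta, 2))  # 0.1: the number that matters when discussing c/c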

FWIW I don't think the FLI is that reasonable an authority here, I'm not sure why you'd defer to them.

The 'FLI is a reasonable authority' belief is itself fairly low information (low enough to be moved by your comment).

Comment by sil-ver on Rationality and Climate Change · 2020-10-05T10:47:57.430Z · score: 7 (4 votes) · LW · GW

I am generally concerned, and also think this makes me an outlier. I don't have any specific model of what will happen.

This is a low-information belief that could definitely change in the future. However, it doesn't seem important to figure out exactly how dangerous climate change is, because doing something about it is definitely not my comparative advantage, and I'm confident that it's less under-prioritized and less important than dangers from AI. It's mostly like, 'well, the Future of Life Institute has studied this problem, they don't seem to think we can disregard it as a contributor to existential risk, and they seem like the most reasonable authority to trust here'.

A personal quibble I have is that I've seen people dismiss climate change because they don't think it poses a first-order existential risk. I think this is a confused framing that comes from asking 'is climate change an existential risk?' rather than 'does climate change contribute to existential risk?', which is the correct question because existential risk is a single category. The answer to the latter question seems to be trivially yes, and the follow-up question is just how much.

Comment by sil-ver on Topological Fixed Point Exercises · 2020-10-04T14:16:44.080Z · score: 2 (1 votes) · LW · GW

Ex 9

I'm weak with high-dimensional stuff, so I wanted to translate the statement into one about graphs. We characterize graphs by the following property:

Property P: every n-clique in the graph has an equal number of extensions to (n+1)-cliques. (I.e., an equal number of nodes not in the clique that are connected to every node in the clique.)

(A simplex we translate into a graph has property P: every vertex has an equal number of edges it's a part of, every edge an equal number of faces it's a part of, and so on. That is, except for the vertices/edges at the... well, edges of the triangulation. Those have already made problems in Ex4.)

We now prove by induction that, given any and a graph with property P where the largest cliques are -cliques, and any coloring , the graph has an even number of -chromatic cliques.

Base case is . The only such graph with property P is the cycle graph . Lemma follows from Ex. 1. (We need that here, but that's fine.)

Inductive step: suppose the claim is true for some and we have a graph where the largest cliques are cliques and some coloring . We prove the step by constructing starting with the trivial coloring where . This coloring has no -chromatic cliques, so in particular, the number of such cliques is even. We can transform into by repeatedly recoloring nodes, as in Ex4 -- and as in Ex4, the claim follows if we can prove that any step changes the number of -chromatic cliques by an even number.

Let , and suppose we change the color of from to . Recoloring can only change the -chromatic-ness of cliques which contain . Let be such a clique. Then changes its -chromatic-ness iff (a) precisely one of the nodes in