Posts

What would be the shelf life of nuclear weapon-secrecy if nuclear weapons had not immediately been used in combat? 2023-11-29T00:53:42.598Z

Comments

Comment by Gram Stone on Limerence Messes Up Your Rationality Real Bad, Yo · 2022-07-01T22:36:01.137Z · LW · GW

I just want to register a few things. There are things I feel like I couldn't have realistically learned about my partner in an amount of time short enough to precede the event horizon. My partner and I have also changed each other in ways past the horizon that could be due to limerence, but could also just be good old-fashioned positive personal change for reasonable reasons. So I suspect the takeaway here is not necessarily to attempt to make final romantic decisions before reaching the horizon, but that you will only be fully informed past the horizon, and that is just the hand that Nature has dealt you. I don't necessarily think you disagree with that, but I wanted to write it in case you did.

Comment by Gram Stone on Late 2021 MIRI Conversations: AMA / Discussion · 2022-03-06T15:55:45.567Z · LW · GW

I got the impression Eliezer's claiming that a dangerous superintelligence is merely sufficient for nanotech.

How would you save us with nanotech? It had better be good given all the hardware progress you just caused!

Comment by Gram Stone on This Year I Tried To Teach Myself Math. How Did It Go? · 2022-01-01T01:26:05.444Z · LW · GW

A genuine congratulations for learning the rare skill of spotting and writing valid proofs.

Graham’s Number I see as ridiculous, apparently one of the answers to his original problem could be as low as a single digit number, why have power towers on power towers then?

Graham's number is an upper bound on the exact solution to a Ramsey-type problem. Ramsey numbers and related generalizations are notorious for being very easy to define and yet very expensive to compute with brute-force search, and many of the most significant results in Ramsey theory are proofs of extraordinarily large upper bounds on Ramsey numbers. Graham would have proved a smaller bound if he could.

(In fact, as I understand it, the number popularized as Graham's number is somewhat larger than the upper bound Graham actually published; the published bound is only slightly smaller in relative terms, and it took a lot more work to prove.)
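
To make "power towers on power towers" concrete, here is a minimal Python sketch of Knuth's up-arrow notation, in which Graham's number is usually defined; the function name and the tiny test values are just for illustration, and the recursion is only evaluable for very small arguments:

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑^n b: each arrow level iterates the level below it."""
    if n == 1:
        return a ** b   # one arrow is ordinary exponentiation
    if b == 0:
        return 1        # base case of the iteration
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3↑↑3 = 3^(3^3) = 7625597484987
# 3↑↑↑3 is already a power tower of about 7.6 trillion threes, hopeless to evaluate.
# Graham's number is G = g_64, where g_1 = 3↑↑↑↑3 and g_k = 3 ↑^(g_{k-1}) 3.
```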

Comment by Gram Stone on Shulman and Yudkowsky on AI progress · 2021-12-04T00:58:18.951Z · LW · GW

Now that we clarified up-thread that Eliezer's position is not that there was a giant algorithmic innovation in between chimps and humans, but rather that there was some innovation in between dinosaurs and some primate or bird that allowed the primate/bird lines to scale better

 

Where was this clarified...? My Eliezer-model says "There were in fact innovations that arose in the primate and bird lines which allowed the primate and bird lines to scale better, but the primate line still didn't scale that well, so we should expect to discover algorithmic innovations, if not giant ones, during hominin evolution, and one or more of these was the core of overlapping somethingness that handles chipping handaxes but also generalizes to building spaceships."

If we're looking for an innovation in birds and primates, there's some evidence of 'hardware' innovation rather than 'software.'


For a speculative software innovation hypothesis, there seem to be cognitive adaptations that arose in the LCA of all anthropoids for mid-level visual representations e.g. glossiness, above the level of, say, lines at a particular orientation, and below the level of natural objects, which seem like an easy way to exapt into categories, then just stack more layers for generalization and semantic memory. These probably reduce cost and error by allowing the reliable identification of foraging targets at a distance. There seem to be corresponding cognitive adaptations for auditory representations that strongly correlate with foraging targets, e.g. calls of birds that also target the fruits of angiosperm trees. Maybe birds and primates were under similar selection for mid-level visual representations that easily exapt into categories, etc.

Comment by Gram Stone on Soares, Tallinn, and Yudkowsky discuss AGI cognition · 2021-11-30T23:19:07.982Z · LW · GW

If information is 'transmitted' by modified environments and conspecifics biasing individual search, marginal fitness returns on individual learning ability increase, while from the outside it looks just like 'cultural evolution.'

Comment by Gram Stone on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-24T03:02:11.190Z · LW · GW

If I take the number of years since the emergence of Homo erectus (2 million years) and divide that by the number of years since the origin of life (3.77 billion years), and multiply that by the number of years since the founding of the field of artificial intelligence (65 years), I get a little over twelve days. This seems to at least not directly contradict my model of Eliezer saying "Yes, there will be an AGI capable of establishing an erectus-level civilization twelve days before there is an AGI capable of establishing a human-level one, or possibly an hour before, if reality is again more extreme along the Eliezer-Hanson axis than Eliezer. But it makes little difference whether it's an hour or twelve days, given anything like current setups." Also insert boilerplate "essentially constant human brain architectures, no recursive self-improvement, evolutionary difficulty curves bound above human difficulty curves, etc." for more despair.
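
Spelling out that arithmetic (using roughly 365 days per year):

$$\frac{2 \times 10^{6}\ \text{years}}{3.77 \times 10^{9}\ \text{years}} \times 65\ \text{years} \approx 0.0345\ \text{years} \approx 12.6\ \text{days}.$$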

I guess even though I don't disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don't see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age, mortality among other things forces us to specialize our labor, we have to sleep, we lack serial depth, we don't even approach the physical limits on speed, we can't run multiple instances of our own source, we have no previous example of an industrial civilization to observe, I could go on: a list of biological fetters that either wouldn't apply to an AGI or that an AGI could emulate inside of a single mind instead of across a civilization. I am deeply impressed by what has come out of the bare minimum of human innovative ability plus cultural accumulation. You say "The engine is slow," I say "The engine hasn't stalled, and look how easy it is to speed up!"

I'm not sure I like using the word 'discontinuous' to describe any real person's position on plausible investment-output curves any longer; people seem to think it means "intermediate value theorem doesn't apply" (which seems reasonable), when usually hard/fast takeoff proponents really mean "intermediate value theorem still applies, but the curve can be almost arbitrarily steep on certain subintervals."
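
A toy illustration of the distinction (not anyone's actual model, just a standard calculus fact): the logistic curve

$$f(x) = \frac{1}{1 + e^{-kx}}, \qquad f'(0) = \frac{k}{4},$$

is continuous everywhere, so the intermediate value theorem applies for any $k$; but by making $k$ large you can make the transition from "near 0" to "near 1" happen on an arbitrarily short subinterval.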

Comment by Gram Stone on Quick general thoughts on suffering and consciousness · 2021-11-01T17:16:53.184Z · LW · GW

I have an alternative hypothesis about how consciousness evolved. I'm not especially confident in it.

In my view, a large part of the cognitive demands on hominins consists of learning skills and norms from other hominins. One of a few questions I always ask when trying to figure out why humans have a particular cognitive trait is “How could this have made it cheaper (faster, easier, more likely, etc.) to learn skills and/or norms from other hominins?” I think the core cognitive traits in question originally evolved to model the internal state of conspecifics, and make inferences about task performances, and were exapted for other purposes later.

I consider imitation learning a good candidate among cognitive abilities that hominins may have evolved since the last common ancestor with chimpanzees, since as I understand it, chimps are quite bad at imitation learning. So the first step may have been hominins obtaining the ability to see another hominin performing a skill as another hominin performing a skill, in a richer way than chimps, like "That-hominin is knapping, that-hominin is striking the core at this angle." (Not to imply that language has emerged yet; verbal descriptions of thoughts just correspond well to the contents of those thoughts. Consider this hypothesis silent on the evolution of language for the moment.) Then perhaps recursive representations about skill performance, like "This-hominin feels like this part of the task is easy, and this part is hard." I'm not very committed on whether self-representations or other-representations came first. Then higher-order things like, "This-hominin finds it easier to learn a task when parts of the task are performed more slowly, so when this-hominin performs this task in front of that-hominin-to-be-taught, this-hominin should exaggerate this part or that part of the task." And then, "This-hominin-that-teaches-me is exaggerating this part of the task," which implicitly involves representing all of the lower-order thoughts that led the other hominin to choose to exaggerate the task, and so on. This is just one example of how these sorts of cognitive traits could improve learning efficiency, on both the teaching and the learning ends.

Once hominins encounter cooperative contexts that require norms to generate a profit, there is selection for these aforementioned general imitation learning mechanisms to be exapted for learning norms, which could result in metarepresentations of internal state relevant to norms, like emotional distress, among other things. I also think this mechanism is a large part of how evolution implements moral nativism in humans. Recursive metarepresentations of one’s own emotional distress can be informative when learning norms as well. Insofar as one’s own internal state is informative about the True Norms, evolution can constrain moral search space by providing introspective access to that internal state. On this view, this is pretty much what I think suffering is, where the internal state is physical or emotional distress.

I think this account allows for more or less conscious agents, since for every object-level representation, there can be a new metarepresentation, so as minds become richer, so does consciousness. I don't mean to imply that full-blown episodic memory, autobiographical narrative, and so on falls right out of a scheme like this. But it also seems to predict that mostly just hominins are conscious, and maybe some other primates to a limited degree, and maybe some other animals that we’ll find have convergently evolved consciousness, maybe elephants or dolphins or magpies, but also probably not in a way that allows them to implement suffering.

I don’t feel that I need to invoke the evolution of language for any of this to occur; I find I don’t feel the need to invoke language for most explicanda in human evolution, actually. I think consciousness preceded the ability to make verbal reports about consciousness.

I also don't mean to imply that dividing pies, as opposed to making them, is a small fraction of the task demands that hominins faced historically; but I also don't think it's the largest fraction.

Your explanation, or at least its assumptions, does double duty, and kind of explains how human cooperation is stable where it wouldn't be by default. I admit that I don't provide an alternative explanation, but I also feel like that's outside the scope of this conversation, and I do have alternative explanations in mind that I could shore up if pressed.

Comment by Gram Stone on Progress, Stagnation, & Collapse · 2021-07-23T00:11:37.298Z · LW · GW

Your argument has a Holocene, sedentary, urban flavor, but I think it applies just as well to Pleistocene, nomadic cultures; I think of it as an argument about population size and 'cognitive capital' as such, not only about infrastructure or even technology. Although my confidence is tempered by mutually compatible explanations and taphonomic bias, my current models of behavioral modernity and Neanderthal extinction essentially rely on a demographic argument like the one made here. I don't think this comment would be as compelling without a reminder that almost everyone explains these phenomena via interspecies differences in individual cognitive adaptation, as opposed to demography.

On behavioral modernity, I am a gradualist; I do not think there was a sudden pulse of innovation. Signs of modernity, like a wider resource base, public symbol use, and certain technologies, appear and then disappear before becoming permanent fixtures later in the record. The most striking example of this would be ancient Australia, where humans did not reliably demonstrate all features of behavioral modernity for the first 25,000 years of residence, even though these features were present in contemporaneous cultures in different geographic locations; Australia also happens to be one of the last places to which humans migrated. The idea here is that reduced population size, and thus density, affects the fidelity and bandwidth of social learning in lots of ways (e.g. decreased redundancy and specialization), resulting in a less sophisticated behavioral repertoire, as well as reduced selection for public symbol use (since there are no frequently encountered outgroups to which to signal).

On Neanderthal extinction, I think this is a collapse in the sense of your post, by a failure to reliably transmit cognitive capital. The life history of hominins is similar to a case of growth-dependent economics, where large energy and time investments are made in the expectation of future economic productivity. I think environmental effects started the decay, temperate specialists that the Neanderthals were, but they ultimately went extinct because those effects set off a process that gradually invalidated the preconditions for reliable cultural transmission in that species.

Comment by Gram Stone on Four factors that moderate the intensity of emotions · 2018-11-25T19:02:27.631Z · LW · GW

For those wondering about the literature: although Kahneman and Tversky coined no term for it, Kahneman & Tversky (1981) describes counterfactual closeness and some of its affective consequences. That paper appears to be the origin of the missed-flight example. Roese (1997) is a good early review of counterfactual thinking, with a section on contrast effects, of which closeness effects are arguably an instance.

Comment by Gram Stone on Incorrect hypotheses point to correct observations · 2018-11-22T01:14:06.993Z · LW · GW

Succubi/incubi and the alien abduction phenomenon point to hypnagogia, and evo-psych explanations of anthropomorphic cognition are often accompanied by arguments that anthropomorphism produces good-enough decisions while being technically completely false; there's an old comment by JenniferRM talking about how surprisingly useful, albeit wrong, it would be to model pathogens as evil spirits.

Comment by Gram Stone on Topological Fixed Point Exercises · 2018-11-18T18:20:18.394Z · LW · GW

An attempt at problem #1; seems like there must be a shorter proof.

The proof idea is "If I flip a light switch an even number of times, then it must be in the same state that I found it in when I'm finished switching."

Theorem. Let $P$ be a path graph on $n$ vertices with a vertex 2-coloring $c$ such that if $u$ and $w$ are the terminal vertices of $P$, then $c(u) \neq c(w)$. Let $B$ be the set of bichromatic edges of $P$. Then $|B|$ is odd.

Proof. By the definition of a path graph, there exists a sequence $(v_1, \ldots, v_n)$ indexing the vertices of $P$. An edge $\{v_i, v_{i+1}\}$ is bichromatic iff $c(v_i) \neq c(v_{i+1})$. A subgraph $S$ of $P$ is a state iff its terminal vertices are each incident with exactly one bichromatic edge or equal to a terminal vertex of $P$. The color of a state is the color of its vertices. There exists a subsequence of $(v_1, \ldots, v_n)$ containing the least term of each state; the index of a state is equal to the index of its least term in this subsequence.

Note that none of the states with even indices are the same color as any of the states with odd indices; hence all of the states with even indices are the same color, and all of the states with odd indices are the same color.

For each state there exists a subsequence of $(v_1, \ldots, v_n)$ corresponding to the vertices of that state, and the least term of each such subsequence is either $v_1$ or some $v_{i+1}$ that is the greatest term in a bichromatic edge. Thus the number of states is $|B| + 1$.

By contradiction, suppose that $|B|$ is even. Then the number of states is odd, and the first and last states are the same color, so the terminal vertices of $P$ are the same color, contrary to our assumption that they are different colors. Thus $|B|$ must be odd.
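
As a sanity check on the theorem (not part of the proof), here is a short Python sketch, with illustrative helper names, that enumerates every 2-coloring of small path graphs and confirms that whenever the terminal vertices differ in color, the number of bichromatic edges is odd:

```python
from itertools import product

def bichromatic_edges(coloring):
    """Count edges of the path whose two endpoints get different colors."""
    return sum(a != b for a, b in zip(coloring, coloring[1:]))

# Exhaustively check paths on 2..12 vertices with colors {0, 1}.
for n in range(2, 13):
    for coloring in product((0, 1), repeat=n):
        if coloring[0] != coloring[-1]:                  # endpoints differ
            assert bichromatic_edges(coloring) % 2 == 1  # odd number of color changes
print("all cases verified")
```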


Comment by Gram Stone on What To Do If Nuclear War Seems Imminent · 2018-09-13T23:41:00.273Z · LW · GW

I see that New Zealand is also a major wood exporter. In case of an energy crisis, wood gas could serve as a renewable alternative to other energy sources. Wood gas can be used to power unmodified cars and generators. Empirically this worked during the Second World War and works today in North Korea. Also, FEMA once released some plans for building an emergency wood gasifier.

Comment by Gram Stone on Making a Difference Tempore: Insights from 'Reinforcement Learning: An Introduction' · 2018-07-06T16:27:21.487Z · LW · GW

Yeah it's fixed.

Comment by Gram Stone on Making a Difference Tempore: Insights from 'Reinforcement Learning: An Introduction' · 2018-07-06T13:53:31.571Z · LW · GW

The Lahav and Mioduser link in section 14 is broken for me. Maybe it's just paywalled?

Comment by Gram Stone on Anthropics made easy? · 2018-06-14T16:54:15.352Z · LW · GW

Just taking the question at face value, I would like to choose to lift weights for policy selection reasons. If I eat chocolate, the non-Boltzmann brain versions will eat it too, and I personally care a lot more about non-Boltzmann brain versions of me. Not sure how to square that mathematically with infinite versions of me existing and all, but I was already confused about that.

The theme here seems similar to Stuart's past writing claiming that a lot of anthropic problems implicitly turn on preference. Seems like the answer to your decision problem easily depends on how much you care about Boltzmann brain versions of yourself.
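
A minimal sketch of that dependence, with made-up payoffs purely for illustration (the action names and numbers are not from the post): weight each kind of copy by how much you care about it, and the preferred action flips as the weight on Boltzmann-brain copies grows.

```python
def preferred_action(care_boltzmann, care_embedded=1.0):
    """Toy expected-utility comparison; all payoffs are illustrative assumptions."""
    payoffs = {
        # chocolate is nicer for a momentary Boltzmann brain;
        # lifting weights pays off only for an embedded copy with a future
        "eat chocolate": {"boltzmann": 1.0, "embedded": -1.0},
        "lift weights":  {"boltzmann": -0.5, "embedded": 2.0},
    }
    def value(action):
        return (care_boltzmann * payoffs[action]["boltzmann"]
                + care_embedded * payoffs[action]["embedded"])
    return max(payoffs, key=value)

print(preferred_action(care_boltzmann=0.1))   # -> lift weights
print(preferred_action(care_boltzmann=10.0))  # -> eat chocolate
```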

Comment by Gram Stone on Does Thinking Hard Hurt Your Brain? · 2018-04-29T21:46:10.839Z · LW · GW

The closest thing to this I've seen in the literature is processing fluency, but to my knowledge that research doesn't really address the willpower depletion-like features that you've highlighted here.

Comment by Gram Stone on Learn Bayes Nets! · 2018-03-27T23:45:38.105Z · LW · GW

It's also a useful analogy for aspects of group epistemics, like avoiding double counting as messages pass through the social network.

Fake Causality contains an intuitive explanation of double-counting of evidence.
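
A tiny worked example of the failure mode (numbers are illustrative): suppose a single observation carries a 3:1 likelihood ratio for some hypothesis, and you start from even prior odds. Updating on it once gives

$$1:1 \times 3 = 3:1 \quad (P = 0.75),$$

but if the same observation reaches you along two paths in the social network and you treat the copies as independent, you update twice and get

$$1:1 \times 3 \times 3 = 9:1 \quad (P = 0.9),$$

which overstates the evidence.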

Comment by Gram Stone on Set Up for Success: Insights from 'Naïve Set Theory' · 2018-02-28T17:59:19.588Z · LW · GW

Re: proof calibration; there are a couple of textbooks on proof writing. I personally used Velleman's How to Prove It, but another option is Hammack's Book of Proof, which I haven't read but which appears to cover the same material at approximately equal length. For comparison, Halmos introduces first-order logic on pages 6 and 7 of Naive Set Theory, whereas Velleman spends about 60 pages on the same material.

It doesn't fit my model of how mathematics works, technically or socially, that you can get very confident but wrong about your math knowledge without a lot of self-deception. Exercises provide instant feedback. And according to Terence Tao's model, students don't spend most of their education learning how to tell whether a proof is valid at all, so much as learning how to evaluate longer proofs more quickly and with less conscious thought.

Part of that process is understanding formal things; part of it is understanding how mathematicians' specialized natural language is shorthand for formal things. E.g. my friend was confused when he read an exercise telling him to prove that a set was "the smallest set" with this property (and perhaps obviously the author didn't unpack this). What this means formally, when expanded, is "Prove that this set is a subset of every set with this property." AFAICT, there's no way to figure out what this means formally without someone telling you, or (this is unlikely) inventing the formal version yourself because you need it and realizing that 'smallest set' is good shorthand and this is probably what was meant. Textbooks are good for fixing this because the authors know that textbooks are where most students will learn how to talk like a mathematician without spelling everything out. I find ProofWiki very useful for having everything spelled out the way I would like, and consistently, when I don't know what the author is trying to say.
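
For concreteness, the expansion in question can be written in one line, for an arbitrary property $P$ of sets:

$$S \text{ is the smallest set with property } P \;\iff\; P(S) \,\wedge\, \forall T\, \big(P(T) \rightarrow S \subseteq T\big).$$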

Finally, I have a rationalist/adjacent friend who tutored me enough to get to the point where I could verify my own proofs; I haven't talked to them in a while, but I could try to get in touch and see if they would be interested in checking your proofs. Last time I talked to them, they expressed that the main bottleneck on the number of students they had was students' willingness to study.

Comment by Gram Stone on Set Up for Success: Insights from 'Naïve Set Theory' · 2018-02-28T15:08:33.089Z · LW · GW

Re: category-theory-first approaches; I find that most people think this is a bad idea, because most people need to see concrete examples before category theory clicks for them; otherwise it's too general. But a few people feel differently and have published introductory textbooks on category theory that assume less background knowledge than the standard textbooks. If you're interested, you could try Awodey's Category Theory (free) or Lawvere's Conceptual Mathematics. After getting some more basics under your belt, you could give either of those a shot, just in case you're the sort of person who learns faster by seeing the general rule before the concrete examples. (These people exist, but I think it's easy to fall into the trap of wishing that you were that sort of person and banging your head against the general rule when you really just need to pick up the concrete examples first. One should update if first-principles approaches are not working.)

Comment by Gram Stone on The Monthly Newsletter as Thinking Tool · 2018-02-05T16:45:25.547Z · LW · GW

I'd be very interested in reading about EverQuest as an exemplar of Fun Theory, if you're willing to share.

Comment by Gram Stone on Against Instrumental Convergence · 2018-01-29T23:23:57.950Z · LW · GW

I think proponents of the instrumental convergence thesis would expect a consequentialist chess program to exhibit instrumental convergence in the domain of chess. So if there were some (chess-related) subplan that was useful in lots of other (chess-related) plans, we would see the program execute that subplan a lot. The important difference would be that the chess program uses an ontology of chess while unsafe programs use an ontology of nature.

Comment by Gram Stone on A LessWrong Crypto Autopsy · 2018-01-28T18:30:18.531Z · LW · GW

It seems like a good idea to collect self-reports about why LessWrongers didn't invest in Bitcoin. For my own part, off the top of my head I would cite:

  • less visible endorsement of the benefits than e.g. cryonics;
  • vaguely sensing that cryptocurrency is controversial and might make my taxes confusing to file, etc.;
  • and reflexively writing off articles that give me investment advice because most investment advice is bad and I generally assume I lack some minimal amount of wealth needed to exploit the opportunity.

So something like We Agree: Get Froze might have helped. I also could have actually evaluated the difficulty of filing taxes given that I'd purchased bitcoins, instead of flinching away and deciding it was too difficult on the spot. I could pay more attention to figures that are relatively small even for someone with little wealth.

If surveys are still being done, that seems like important data to collect on the next survey. I would want to know if most people knew but didn't invest, or simply didn't know at all, and so on.

Comment by Gram Stone on Examples of Mitigating Assumption Risk · 2017-12-06T17:25:38.667Z · LW · GW

Other things equal, choose the reversible alternative.

Comment by Gram Stone on The Archipelago Model of Community Standards · 2017-11-21T13:02:41.928Z · LW · GW

In particular, thank you for pointing out that in social experiments, phenomenal difficulty is not much Bayesian evidence for ultimate failure.