Alex_Altair's Shortform

post by Alex_Altair · 2022-11-27T18:59:05.193Z · LW · GW · 8 comments


Comments sorted by top scores.

comment by Alex_Altair · 2024-03-05T20:05:53.130Z · LW(p) · GW(p)

Has anyone checked out Nassim Nicholas Taleb's book Statistical Consequences of Fat Tails? I'm wondering where it lies on the spectrum from textbook to prolonged opinion piece. I'd love to read a textbook on that topic.

Replies from: CstineSublime, Morpheus
comment by CstineSublime · 2024-03-05T22:53:56.137Z · LW(p) · GW(p)

Taleb has made available a technical monograph that parallels that book (and all of his books). You can find it here:

comment by Morpheus · 2024-03-07T01:19:54.637Z · LW(p) · GW(p)

The pdf linked by @CstineSublime is definitely toward the textbook end of the spectrum. I've started reading it, and it has been an excellent read so far. I'll probably write a review later.

comment by Alex_Altair · 2023-11-22T23:10:02.004Z · LW(p) · GW(p)

Here's my guess as to how the universality hypothesis a.k.a. natural abstractions [LW · GW] will turn out. (This is not written to be particularly understandable.)

  1. At the very "bottom", or perceptual level of the conceptual hierarchy, there will be a pretty straightforward, objective set of concepts. Think the first layer of CNNs in image processing, the neurons in the retina/V1, letter frequencies, how to break text strings into words. There's some parameterization here, but the functional form will be clear (like having a basis of n vectors in R^n, where it (almost) doesn't matter which vectors you pick).
  2. For a few levels above that, it's much less clear to me that the concepts will be objective. Curve detectors may be universal, but the way they get combined is less obviously objective to me.
  3. This continues until we get to a middle level that I'd call "objects". I think it's clear that things like cats and trees are objective concepts. Sufficiently good language models will all share concepts that correspond to a bunch of words. This level is very much due to the part where we live in this universe, which tends to create objects, and on earth, which has a biosphere with a bunch of mid-level complexity going on.
  4. Then there will be another series of layers that are less obvious. Partly these levels are filled with whatever content is relevant to the system. If you study cats a lot, then there is a bunch of objectively discernible cat behavior, but it's not necessary to know that to operate in the world competently. Rivers and waterfalls will be level-3 concepts, but the details of fluid dynamics live at this level.
  5. Somewhere around the top level of the conceptual hierarchy, I think there will be kind of a weird split. Some of the concepts up here will be profoundly objective; things like "and", mathematics, and the abstract concept of "object". Absolutely every competent system will have these. But then there will also be this other set of concepts that I would map onto "philosophy" or "worldview". Humans demonstrate that you can have vastly different versions of these very high-level concepts, given very similar data, each of which is in some sense a functional local optimum. If this also holds for AIs, then that seems very tricky.
  6. Actually my guess is that there is also a basically objective top-level of the conceptual hierarchy. Humans are capable of figuring it out but most of them get it wrong. So sufficiently advanced AIs will converge on this, but it may be hard to interact with humans about it. Also, some humans' values may be defined in terms of their incorrect worldviews, leading to ontological crises with what the AIs are trying to do.
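The basis analogy in point 1 can be made concrete (my own illustrative sketch, not part of the original comment): any full-rank set of n vectors spans R^n, so the *coordinates* of a percept depend on which basis you pick, but the underlying vector they reconstruct does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.normal(size=n)  # some "percept" vector in R^n

# Two different random bases of R^n (almost surely full rank).
B1 = rng.normal(size=(n, n))
B2 = rng.normal(size=(n, n))

# The coordinates of x in each basis differ...
c1 = np.linalg.solve(B1, x)
c2 = np.linalg.solve(B2, x)

# ...but both coordinate sets reconstruct the same underlying vector.
print(np.allclose(B1 @ c1, x), np.allclose(B2 @ c2, x))  # True True
```

The point being that the "functional form" (a linear basis) is fixed, while the particular parameterization (which basis) is a free choice.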
comment by Alex_Altair · 2022-11-27T18:59:05.505Z · LW(p) · GW(p)

Totally baseless conjecture that I have not thought about for very long: chaos is identical to Turing completeness. All dynamical systems that demonstrate chaotic behavior are Turing complete (or at least implement an undecidable procedure).

Has anyone heard of an established connection here?
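For concreteness (my own illustrative example, not from the comment), the kind of "chaotic behavior" at issue is sensitive dependence on initial conditions, as in the logistic map at r = 4:

```python
# Logistic map at r = 4.0, a standard example of a chaotic dynamical system.
# Two initial conditions differing by 1e-10 diverge exponentially until the
# trajectories are macroscopically far apart.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10
gap = []
for _ in range(60):
    x, y = logistic(x), logistic(y)
    gap.append(abs(x - y))

print(f"initial gap 1e-10, max gap over 60 steps: {max(gap):.3f}")
```

The conjecture would then be that any system exhibiting this kind of divergence can encode an undecidable computation.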

Replies from: gwern
comment by gwern · 2022-11-27T20:10:56.524Z · LW(p) · GW(p)

You might look at Wolfram's work. One of the major themes of his CA classification project is that chaotic rulesets (in some sense, possibly not the rigorous ergodic-dynamics definition) are not Turing-complete; only CAs which are in an intermediate region of complexity/simplicity have ever been shown to be TC.
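A minimal elementary-CA stepper makes the contrast easy to play with (an illustrative sketch I'm adding, not part of gwern's comment): Rule 30 is the standard example of a "chaotic" class-3 rule, while Rule 110, in the intermediate class-4 regime, is the one proven Turing-complete.

```python
# Elementary cellular automaton stepper, using Wolfram's rule numbering.
def step(cells, rule):
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood (left, self, right) packed into a 3-bit index,
        # with wraparound at the edges.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(rule, steps=20, width=31):
    cells = [0] * width
    cells[width // 2] = 1  # single live cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

for rule in (30, 110):
    print(f"Rule {rule}:")
    for row in run(rule, steps=10):
        print("".join(".#"[c] for c in row))
```

Running it shows Rule 30's irregular triangle of noise next to Rule 110's mix of stable background and propagating structures, which is roughly the class-3 vs class-4 distinction.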

comment by Lakin (ChrisLakin) · 2023-03-29T21:40:22.050Z · LW(p) · GW(p)

Maybe you already thought of this, but it might be a nice project for someone to take the unfinished drafts you've published, talk to you, and then clean them up for you.  Apprentice/student kind of thing. (I'm not personally interested in this, though.)

Replies from: Alex_Altair
comment by Alex_Altair · 2023-03-29T21:50:04.187Z · LW(p) · GW(p)

I like that idea! I definitely welcome people to do that as practice in distillation/research, and to make their own polished posts of the content. (Although I'm not sure how interested I would be in having said person be mostly helping me get the posts "over the finish line".)