Comments

Comment by TekhneMakre on [deleted post] 2021-06-14T06:05:47.866Z

Video: https://www.youtube.com/watch?v=-_NNTVJzqtY

Claims that Ivermectin, and also some other treatments, are safe and effective for Covid, and that information about this is being suppressed (both in a distributed way and maliciously). LessWrong seems not to have discussed Ivermectin.

From the video description:

Dr. Robert Malone is the inventor of mRNA Vaccine technology.
Mr. Steve Kirsch is a serial entrepreneur who has been researching adverse reactions to COVID vaccines.
Dr. Bret Weinstein is an evolutionary biologist.
Bret talks to Robert and Steve about the pandemic, treatment and the COVID vaccines.

Comment by TekhneMakre on Oh No My AI (Filk) · 2021-06-11T15:23:51.411Z · LW · GW

I was gonna do some X

with my AI

I was gonna only do X

with my AI

But when I got it working, it generalized (to Y)

oh no my AI

it also does Y

oh no my AI

Comment by TekhneMakre on What are the best ways to improve resiliency? · 2021-06-11T03:41:08.718Z · LW · GW

Practice listing tons of hypotheses. Then when the life shit hits the fan, you can list many hypotheses of what's going on and plans to untangle stuff.

Comment by TekhneMakre on What are the best ways to improve resiliency? · 2021-06-11T03:39:52.480Z · LW · GW

When you do have slack, explore, especially situations where you're forced to interact with the world, e.g. talking to people in new contexts, making physical objects, etc. Conjecture: there's a sort of "can charge full speed ahead into the unknown" ability that's basically trainable, and it's about doing things where you start off not at all knowing how it's going to go / how you're going to deal with it; training in low-stakes situations will transfer to high-stakes ones.

Comment by TekhneMakre on Reply to Nate Soares on Dolphins · 2021-06-10T22:49:25.832Z · LW · GW

Wait, but this would also apply to similarities of convergent evolution in similar niches. There's the essence of sight, the essence of flight, the essence of water-dwelling, the essence of hunting.

Comment by TekhneMakre on Reply to Nate Soares on Dolphins · 2021-06-10T22:46:34.560Z · LW · GW

I feel like a fun version of noticing this conflict, is to rub one's hands together at the prospect of getting to invent a word for "that set of animals who are members of species which occupy a niche that resembles the niches occupied by (the paraphyletic) Osteichthyes".

Comment by TekhneMakre on Optimization, speculations on the X and only X problem. · 2021-06-09T01:48:20.908Z · LW · GW

To clarify where my responses are coming from: I think what I'm saying is not that directly relevant to your specific point in the post. I'm more (1) interested in discussing the notion of only-X, broadly, and (2) reacting to the feature of your discussion (shared by much other discussion) that you (IIUC) consider only the extensional (input-output) behavior of programs, excluding from analysis the intensional properties. (Which is a reasonable approach, e.g. because the input-output behavior captures much of what we care about, and also because it's maybe easier to analyze and already contains some of our problems / confusions.)

From where I'm sitting, when a program "makes an observation of the world", that's moving around in codespace. There's of course useful stuff to say about the part that didn't change. When we really understand how a cognitive algorithm works, it starts to look like a clear algorithm / data separation; e.g. in Bayesian updating, we have a clear picture of the code that's fixed, and how it operates on the varying data. But before we understand the program in that way, we might be unable to usefully separate it out into a fixed part and a varying part. Then it's natural to say things like "the child invented a strategy for picking up blocks; next time, they just use that strategy", where the first clause is talking about a change in source code. We know for sure that such separations can be done, because for example we can say that the child is always operating in accordance with fixed physical law, and we might suspect there are "fundamental brain algorithms" that are also basically fixed. Likewise, even though Solomonoff induction is always just Solomonoff induction plus data, it can also be useful to understand SI(some data) in terms of understanding those programs that are highly ranked by SI(some data), and it seems reasonable to call that "the algorithm changed to emphasize those programs".
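
A minimal sketch of that algorithm / data separation in the Bayesian case (a toy coin example of my own choosing, just to make the split concrete): the update rule is the fixed code, and everything that changes as observations come in lives in the data it operates on.

```python
def bayes_update(prior: dict, likelihood: dict, observation: str) -> dict:
    # Fixed code: this rule never changes as the agent "learns".
    unnorm = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Varying data: the beliefs are what move around as observations arrive.
beliefs = {"fair coin": 0.5, "two-headed coin": 0.5}
likelihood = {
    "fair coin": {"heads": 0.5, "tails": 0.5},
    "two-headed coin": {"heads": 1.0, "tails": 0.0},
}
for obs in ["heads", "heads", "heads"]:
    beliefs = bayes_update(beliefs, likelihood, obs)
print(beliefs)  # posterior has shifted toward "two-headed coin"
```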

Comment by TekhneMakre on Optimization, speculations on the X and only X problem. · 2021-06-08T05:53:51.243Z · LW · GW

Well, a main reason we'd care about codespace distance is that it tells us something about how the agent will change as it learns (i.e. moves around in codespace). (This involves time, since the agent is changing, contra your picture.) So a key (quasi)metric on codespace would be: "how much" learning does it take to get from here to there. The `if True: x() else: y()` program is an unnatural point in codespace in this metric: you'd have to have traversed both the distance from null to x() and the distance from null to y(), and it's weird to have traversed a distance and make no use of your position. A framing of the only-X problem is that traversing from null to a program that's an only-Xer according to your definition might also constitute traversing almost all of the way from null to a program that's an only-Yer, where Y is "very different" from X.
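
A toy sketch of that unnatural point (hypothetical code, just to make the distance picture concrete): the program contains a full, never-used implementation of the Y behavior, so it is extensionally an only-Xer while sitting one token away from an only-Yer.

```python
def x():
    # some competently implemented X-ing
    return "did X"

def y():
    # some competently implemented Y-ing, sitting unused behind the dead branch
    return "did Y"

def agent():
    if True:        # flip this one constant and the extensional behavior
        return x()  # becomes pure Y-ing; building this program required
    else:           # traversing nearly the whole distance from null to an
        return y()  # only-Yer, even though that distance is never "used"
```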

Comment by TekhneMakre on Optimization, speculations on the X and only X problem. · 2021-06-05T23:34:53.974Z · LW · GW

Thanks for trying to clarify "X and only X", which IMO is a promising concept.

One thing we might want from an only-Xer is that, in some not-yet-formal sense, it's "only trying to X" and not trying to do anything else. A further thing we might want is that the only-Xer only tries to X across some relevant set of counterfactuals. You've discussed the counterfactuals across possible environments. Another kind of counterfactual is across modifications of the only-Xer. Modification-counterfactuals seem to point to a key problem of alignment: how does this generalize? If we've selected something to do X, within some set of environments, what does that imply about how it'll behave outside of that set of environments? It looks like by your definition we could have a program that's a very competent general intelligence with a slot for a goal, plus a pointer to X in that slot; and that program would count as an only-Xer. This program would be very close, in some sense, to programs that optimize competently for not-X, or for a totally unrelated Y. That clashes with my intuitive picture of an "X and only X"er, so either there's more to be said, or my picture is incoherent.
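
To make that worry concrete, a hypothetical sketch (the names and structure are mine, not anything from the post) of the "competent general intelligence with a goal slot" that would count as an only-Xer under an extensional definition, while sitting a single pointer-swap away from an optimizer for not-X:

```python
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def general_optimizer(goal: Callable[[T], float], options: Sequence[T]) -> T:
    # Stand-in for arbitrarily competent, fully general search machinery.
    return max(options, key=goal)

def score_X(outcome: float) -> float:
    # Placeholder goal: "X" = make this number big.
    return outcome

def score_not_X(outcome: float) -> float:
    # The opposite goal, one sign flip away.
    return -outcome

# Extensionally an "only-Xer": all it ever outputs is the X-optimal option...
print(general_optimizer(score_X, [1.0, 2.0, 3.0]))      # 3.0
# ...yet it's one swapped pointer away from a competent not-X optimizer.
print(general_optimizer(score_not_X, [1.0, 2.0, 3.0]))  # 1.0
```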

Comment by TekhneMakre on Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI · 2021-06-04T01:29:25.828Z · LW · GW

Somewhat relevant: Yudkowsky, Eliezer. 2004. Coherent Extrapolated Volition. https://intelligence.org/files/CEV.pdf

Gives one of the desiderata for CEV as "Avoid creating a motive for modern-day humans to fight over the initial dynamic".

Comment by TekhneMakre on [deleted post] 2021-04-22T03:58:51.474Z

There's two stances I can take when I want to express a thought so that I can think about it with someone. Both could be called "expressing". One could be called "pushing-out": like I'm trying to "get it off my chest", or "leave it behind / drop it so I can move on to the next thought". The other is more appropriately "expressing", as in pressing (copying) something out: I make a copy and give it to the other person, but I'm still holding the original. The former is a habit of mine, but on reflection it's often a mistake; what I really want is to build on the thought, and the way to do that is to keep it active while also thinking the next thought. The underlying mistake might be incorrectly thinking that the other person can perform the "combine already-generated thoughts" part of the overall progression while I do the "generate individual new thoughts" part. Doing things that way results in a lot of dropped thoughts.

Comment by TekhneMakre on [deleted post] 2021-04-22T03:46:07.091Z

Say Alice has a problem with Bob, but doesn't know what it is exactly. Then Bob tries to fix it cooperatively by searching in dimension X for settings that alleviate Alice's problem. If Alice's problem is actually about Bob's position on dimension Y, not X, Bob's activity might appear adversarial: Bob's actions are effectively goodharting Alice's sense of whether things are good, in the same way he'd do if he were actually trying to distract Alice from Y.

Comment by TekhneMakre on My Current Take on Counterfactuals · 2021-04-10T18:32:56.256Z · LW · GW
> So I'd rather say that we "affect nothing but what we intervene on and what's downstream of what we intervened on".

A fair clarification.

> Not sure whether this has anything to do with your point, though.

My point is very tangential to your post: you're talking about decision theory as top-level naturalized ways of making decisions, and I'm talking about some non-top-level intuitions that could be called CDT-like. (This maybe should've been a comment on your Dutch book post.) I'm trying to contrast the aspirational spirit of CDT, understood as "make it so that there's such a thing as 'all of what's downstream of what we intervened on' and we know about it", with descriptive CDT, "there's such a thing as 'all of what's downstream of what we intervened on' and we can know about it". Descriptive CDT is only sort of right in some contexts, and can't be right in others; there's no fully general Archimedean point from which we intervene.

We can make some things more CDT-ish, though, if that's useful. E.g. we could think more about how our decisions have effects, so that we have in view more of what's downstream of our decisions. Or e.g. we could make our decisions have fewer effects, for example by promising to later reevaluate some algorithm for making judgements, instead of hiding, within our decision to do X, also the decision to always use the piece-of-algorithm that (within some larger mental context) decided to do X. That is, we try to hold off on decisions that have downstream effects we don't understand well yet.

Comment by TekhneMakre on Testing The Natural Abstraction Hypothesis: Project Intro · 2021-04-10T06:37:18.533Z · LW · GW
> The specifications would correctly capture what-we-actually-mean, so they wouldn't be prone to goodhart

I think there's an ambiguity in "concept" here, that's important to clarify re/ this hope. Humans use concepts in two ways:

1. as abstractions in themselves, like the idea of an ideal spring which contains its behavior within the mental object, and

2. as pointers / promissory notes towards the real objects, like "tree".

Seems likely that any agent that has to attend to trees will form the ~unique concept of "tree", in the sense of a cluster of things, and minimal sets of dimensions needed to specify the relevant behavior (height, hardness of wood, thickness, whatever). Some of this is like use (1): you can simulate some of the behavior of trees (e.g. how they'll behave when you try to cut them down and use them to build a cabin). Some of this is like use (2): if you want to know how to grow trees better, you can navigate to instances of real trees, study them to gain further relevant abstractions, and then use those new abstractions (nutrient intake, etc.) to grow trees better.

So what do we mean by "strawberry", such that it's not goodhartable? We might mean "a thing that is relevantly naturally abstracted in the same way as a strawberry is relevantly naturally abstracted". This seems less goodhartable if we use meaning (2), but that's sort of cheating by pointing to "what we'd think of these strawberries upon much more reflection in many more contexts of relevance". If we use meaning (1), that seems eminently goodhartable.

Comment by TekhneMakre on Testing The Natural Abstraction Hypothesis: Project Intro · 2021-04-10T06:24:19.013Z · LW · GW

>There is no continuum of tree-like abstractions.

Some possibly related comments, on why there might be discrete clusters:

https://www.lesswrong.com/posts/2J5AsHPxxLGZ78Z7s/bios-brakhus?commentId=hPfEp5r2K5BsfNe4F

Comment by TekhneMakre on Identifiability Problem for Superrational Decision Theories · 2021-04-10T05:39:11.792Z · LW · GW

From a superrational perspective (in the game with no randomness), in both cases there are two actions; in the correlation game both actions give a util, in the anti-correlation game both actions give no utils. The apparent difference is based on the incoherent counterfactual "what if I say heads and my copy says tails", which doesn't translate into the superrational perspective.
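
A minimal sketch of the counting here, assuming (my reading of the post) that the correlation game pays each player a util when the two choices match and the anti-correlation game pays a util when they differ, and that exact copies necessarily output the same action:

```python
def correlation_payoff(a: str, b: str) -> int:
    return 1 if a == b else 0   # assumed payoff: 1 util iff choices match

def anticorrelation_payoff(a: str, b: str) -> int:
    return 1 if a != b else 0   # assumed payoff: 1 util iff choices differ

# Superrationally, my copy plays whatever I play, so the only coherent
# counterfactuals are the diagonal ones:
for action in ["heads", "tails"]:
    print(action, correlation_payoff(action, action))       # both actions: 1 util
    print(action, anticorrelation_payoff(action, action))   # both actions: 0 utils

# The off-diagonal cell "I say heads and my copy says tails" never arises
# for exact copies -- that's the incoherent counterfactual described above.
```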

Comment by TekhneMakre on My Current Take on Counterfactuals · 2021-04-10T05:21:25.909Z · LW · GW

(Side note: There's an aspect to the notion of "causal counterfactual" that I think is worth distinguishing from what's discussed here. This post seems to take causal counterfactuals to be a description of top-level decision reasoning. A different meaning is that causal counterfactuals refer to an aspiration / goal. Causal interventions are supposed to be interventions that "affect nothing but what's explicitly said to be affected". We could try to describe actions in this way, carefully carving out exactly what's affected and what's not; and we find that we can't do this, and so causal counterfactuals aren't, and maybe can't possibly be, a good description (e.g. because of Newcomb-like problems). But instead we could view them as promises: if I manage to "do X and only X", then exactly such and such effects result. In real life if I actually do X there will be other effects, but they must result from me having done something other than just exactly X. This seems related to the way in which humans know how to express preferences data-efficiently, e.g. "just duplicate this strawberry, don't do any crazy other stuff".)

Comment by TekhneMakre on Reflective Bayesianism · 2021-04-07T15:41:29.613Z · LW · GW

>Surely there's some precise way the universe is.


Agree, and would love to see a more detailed explicit discussion of what this means and whether it's true. (Also, worth noting that there may be a precise way the universe is, but no "precise" way that "you" fit into the universe, because "you" aren't precise.)

Comment by TekhneMakre on Predictive Coding has been Unified with Backpropagation · 2021-04-07T15:24:00.105Z · LW · GW
> --Human brains have special architectures, various modules that interact in various ways (priors?)
> --Human brains don't use Backprop; maybe they have some sort of even-better algorithm

This is a funny distinction to me. These things seem like two ends of a spectrum (something like, the physical scale of "one unit of structure"; predictive coding is few-neuron-scale, modules are big-brain-chunk scale; in between, there's micro-columns, columns, lamina, feedback circuits, relays, fiber bundles; and below predictive coding there's the rules for dendrite and synapse change).


> I wouldn't characterize my own position as "we know a lot about the brain." I think we should taboo "a lot."
> I think there's mounting evidence that brains use predictive coding

Are you saying, there's mounting evidence that predictive coding screens off all lower levels from all higher levels? Like all high-level phenomena are the result of predictive coding, plus an architecture that hooks up bits of predictive coding together?

Comment by TekhneMakre on Predictive Coding has been Unified with Backpropagation · 2021-04-07T15:16:23.122Z · LW · GW
> It is implausible that human beings' cognitive instincts contain significantly more information than the human genome (750 megabytes). I expect our instincts contain much less.

Our instincts contain pointers to learning from other humans, which contain lots of cognitive info. The pointer is small, but that doesn't mean the resulting organism is algorithmically that simple.

Comment by TekhneMakre on [deleted post] 2021-04-06T08:06:47.493Z


__Levers error__.

Anna writes about bucket errors. Attempted summary: sometimes two facts are mentally tracked by only one variable; in that case, correctly updating the belief about one fact can also incorrectly update the belief about the other fact, so it is sometimes epistemically protective to flinch away from the truth of the first fact (until you can create more variables to track the facts separately).

There's a conjugate error: two actions are bound together in one "lever".

For example, I want to clean my messy room. But somehow it feels pointless / tiring, even before I've started. If I just started cleaning anyway, I'd get bogged down in some corner, trying to make a bunch of decisions about where exactly to put lots of futzy random objects, tiring myself out and leaving my room still annoyingly cluttered. It's not that there's a necessary connection between cleaning my room and futzing around inefficiently; it's that the only lever I have right now that activates the "clean room" action also activates the "futz interminably" action.

What I want instead is to create a lever that activates "clean room" but not "futz", e.g. by explicitly noting the possibility to just put futzy stuff in a box and not deal with it more. When I do that, I feel motivated to clean my messy room. I think this explains some "akrasia".

The general pattern: I want to do X to achieve some goal, but the only way (that I know of right now) to do X is if I also do Y, and doing Y in this situation would be bad. Flinching away from action toward a goal is often about protecting your goals.

Comment by TekhneMakre on [deleted post] 2021-04-06T07:53:18.688Z

Generally, apprenticeships should have planned obsolescence. A pattern I've seen in myself and others: A student takes a teacher. They're submissive, in a certain sense--not giving up agency, or harming themselves, or following arbitrarily costly orders, or being overcredulous, but rather adopting a narrow, purely cognition-allocating version of a low-status stance: deferring to local directions of attention by the teacher, provisionally accepting some assumptions, taking a stance of trying to help the teacher with the teacher's work. This is good because it enhances the bandwidth and depth of transmission of tacit knowledge from the teacher.

But, for many students, it shouldn't be the endpoint of their development. At some point they should be questioning all assumptions, directing their attention and motivation on all levels, being the servant of their own plans. When this is delayed, e.g. when the teacher or the student or something else is keeping the student within a fixed submissive role, the student is stunted, bitter, wasted, restless, jerked around, stagnant. In addition to lines of retreat from social roles, have lines of fundamental developmental change.

Comment by TekhneMakre on [deleted post] 2021-04-06T07:12:16.348Z


Say Alice is making some point to Bob, and Carol is listening and doesn't like the point and tries to stop Alice from making the point to Bob. What might be going on? What is Carol trying to do, and why? She might think Alice is lying / disinforming--basing her arguments on false information or invalid arguments with false conclusions. But often that's not what Carol reports; rather, even if Alice's point is true and her arguments are valid reasoning from true information, and Carol could be expected to know that or at least not be so sure that's not the case, Carol still wants to stop Alice from making the point. It's a move in a "culture war".

But what does that even mean? We might steelman Carol as implicitly working from an assumption like: maybe Alice's literal, decoupled point is true; but no one's a perfect decoupler, and so Bob might still make mistaken inferences from Alice's true point, leading Bob to do bad things and spread disinformation. Another interpretation is more Simulacra: the claims have no external meaning, it's a war for power over the narrative, and you want to say your side's memes and block the other side's memes.

Here's a third interpretation, close to the Simulacra one, but with a clarification: maybe part of what's going on is that Bob does know how to check local consistency of his ideology, even though he lacks the integrative motive or skill to evaluate his whole position by modeling the world. So Bob is going to copy one or another ideology being presented. From within the reach of Bob's mind, the conceptual vocabularies of opposed ideologies don't have many shared meanings, even though on their own they are coherent and describe at least some of the world recognizably well. So there's an exclusion principle: since Bob can't assimilate the concepts of an ideology opposed to his into his vocabulary, unless given a large activation push, Bob will continue gaining fluency in his current vocabulary while the other vocabulary bounces off of him. However, talking to someone provides enough activation energy to gain at least a little fluency, if only locally and temporarily, with their vocabulary. Carol may be worried that if there are too many instances of various Alices successfully explaining points to Bob, then Bob will get enough fluency to be "over the hump" and will start snowballing more fluency in the opposing ideology, and eventually might switch loyalties.

Comment by TekhneMakre on [deleted post] 2021-04-05T14:03:05.559Z

Persian messenger: "Listen carefully, Leonidas. Xerxes conquers and controls everything he rests his eyes upon. He leads an army so massive it shakes the ground with its march, so vast it drinks the rivers dry. All the God-King Xerxes requires is this: a simple offering of earth and water. A token of Sparta's submission to the will of Xerxes."

[...]

Persian messenger: "Choose your next words carefully, Leonidas. They may be your last as king."

[...]

Leonidas: "Earth and water... You'll find plenty of both down there." [indicates the well with his sword]

Persian messenger: "No man, Persian or Greek, no man threatens a messenger!"

Leonidas: "You bring the crowns and heads of conquered kings to my city's steps! You insult my queen. You threaten my people with slavery and death! Oh, I've chosen my words carefully, Persian, while yours are lashed from your lips by the whip of your God-King. I'll give you a final chance to live with justice: give up your fearful allegiance to your slavemaster Xerxes, do not speak his threats for him any more, and come live in Greece as a free man."

Persian messenger: "This is blasphemy! This is madness!"

Leonidas: "Madness? THIS IS SPARTA!" [kicks the Persian messenger into the deep well]

Comment by TekhneMakre on [deleted post] 2021-04-03T08:15:45.092Z


**test1**
test2

test3

Comment by TekhneMakre on [deleted post] 2021-04-03T08:15:00.996Z

test1 test2

test3

Comment by TekhneMakre on [deleted post] 2021-04-03T08:07:55.263Z

test1 test2

test3

Comment by TekhneMakre on [deleted post] 2021-04-03T08:07:39.025Z

test1 test2

test3

Comment by TekhneMakre on [deleted post] 2021-04-03T08:04:11.295Z

test1 test2

test3

Comment by TekhneMakre on [deleted post] 2021-04-03T06:07:51.133Z

Consequent clusters

This may be a confused question, but it seems like it'd be more satisfying to have a story where clusteredness "goes up", rather than just being "copied over" from other clusteredness.

Comment by TekhneMakre on [deleted post] 2021-04-03T06:01:49.215Z

Some possible answers:

Attractor states. The world is a multi-level dynamical system. Dynamical systems have attractor states. These attractor states are stereotyped, i.e. clustered. E.g. the elements exist because nuclei with certain numbers of protons and neutrons are the attractor states (or rather, attractor factors of the state space), and then nuclear structure implies features of elements. You don't get long thin strands of nucleons, or nucleons spaced out by L-12m (atoms have nucleons that are ~L-14.5m apart); those aren't stable, and aren't attractors, or at least their basins of attraction are small. Why isn't there a continuum of attractor states? Does it really make sense to view all of reality as a multi-level dynamical system?

Anthropics. Maybe minds only arise when there's (multi-level) cluster structure. Is that a satisfying explanation?

Processes. If there's a process that produces something, and it just keeps going, it'll produce lots of those things. Then there will be a cluster of "things produced by this process". This doesn't even have to derive from a preexisting cluster of processes operating in parallel, though it does derive from a cluster of the same process at different times. E.g. an artist might make a series of works that have a singular character, in some ways alike between themselves and distinct from all other art.

Stability. Stability is maybe the same as clustering across time. Maybe what we call "existence" or "reality" already implies some stability, so everything real necessarily already participates in some clusters.

Consequent clusters. If you have things in a cluster, they'll create, develop into, or induce in the world things that are also clustered because their causes are clustered. E.g. the litany of human universals, which derive from the human cluster but seem additional to it.
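
For the "attractor states" answer above, a minimal sketch (a toy one-dimensional system of my own choosing) of how a continuum of starting points collapses onto a few discrete, stereotyped endpoints rather than a continuum of outcomes:

```python
import random

def step(x: float) -> float:
    # Gradient descent on the double-well potential V(x) = (x^2 - 1)^2;
    # dV/dx = 4x(x^2 - 1), so the attractors are x = -1 and x = +1.
    return x - 0.05 * 4 * x * (x * x - 1)

endpoints = []
for _ in range(1000):
    x = random.uniform(-2, 2)       # a continuum of initial conditions...
    for _ in range(500):
        x = step(x)
    endpoints.append(round(x, 3))

print(set(endpoints))               # ...ends up in just two clusters: {-1.0, 1.0}
```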

Comment by TekhneMakre on [deleted post] 2021-04-03T05:15:15.201Z

Cluster structure of Thingspace.

But why does thingspace have clusters? Many objects are made by processes; if the same process makes many objects, those products will have many things in common.

Examples of clusters: Members of a species. Species in a clade. Species occupying similar niches (e.g., birds and flying insects have stuff in common not shared by other animals). Photocopies of something. Fragrances. Metals. Stars. Fascist states. Philosophers. Vector spaces. Etc.

Some categories seem to have certain features in common "only" because the category is selected for those features. E.g. "rock". Is a mountain a rock? "A rock" is prototypically grippable-size. A rock the size of a person is a "big rock", and a bigger rock is a boulder, etc. Much smaller, and it's a pebble or a grain. I don't think rocks, broadly construed, actually have a tendency to be grippable-size. We can still construe this as a genuine cluster, but it's more relational, not purely intrinsic to the [objects, as completely external to the observer]. It's not a trivial cluster; e.g. a prototypical rock can be knapped [by human hands], whereas a gigantic or tiny rock can't be knapped [by human hands] in the same sense. But we can view its cluster-ness as partly not about empirical external [observer-independent] clustering. (There's nothing postmodern or whatever about this, of course; just noting that it's a cluster of relations with something else, and also the something-else happens to be the observer, so it'd be a mistake to think there's a preexisting tendency of rocks to be several centimeters wide.)

ISTM there's some tension between this and the background of talking about thingspace: we talk about thingspace from a place of aspiring to make maps, plus a belief that, when trying to make accurate maps, it's good engineering to have ideas that act mentally in the same way that parts of the world act in the world. But in the case of "a rock" there's a weird mixing of ideas: the concept of "a rock", and the mental stuff involved in making there be such a thing (e.g. skill in knapping).

Why are human eyes clustered? Here are some answers:

  1. Because they are generated by the same process: adaptation of the human genome-pool to the human ecological niche. That is, human eyes share most of their causal history, up until a few L4s--L6s of years ago. (L notation)
  2. Because they play the role, in the human organism, of vision.
  3. Because embryonic and physiological systems homeostatically create and preserve eyes in a narrow range along many dimensions, so they don't just degenerate / mutate.

To be clear, the question isn't "why are human eyes similar?". The question is, "why is there a cluster; why are there many things that are {spherical, have a retina and cornea and circular iris and blood vessels, are pointed by muscles so the pupil-retina axes intersect on an object, are a couple cm wide, sit in bony sockets, resolve light into an image with a precision of about L-3.5 radians, have a ~circular fovea, etc.} and way fewer things with most but not ~all of these features?".

So the question isn't, "why do eyes have these features?", but more like, in the space of things, why do almost all things that have most of these features have ~all of these features? (NB: the list I gave probably doesn't pin down human eyes as distinct from, I think, maybe some other primate eyes, maybe some bird eyes, probably other eyes; but we could make such a list, e.g. by being more specific about the size, the proteins used, the distribution and types of photosensitive cells, etc.)

What about metal? We encounter metal that's highly processed, optimized via homeostatic engineering to be this or that alloy with this or that ratio of such and such metals, heated and cooled and beaten to settle in a certain way. IIUC, native metals do have a cluster structure, but it's pretty messy; gold usually comes with silver, copper comes with tin and arsenic and so on; how much depends on where you find it. But also there's totally a cluster structure of metal: if you process it, you can separate out one kind from another, chemically react it to undo compounding such as rust, etc. There's clusters, but they're hidden. Why are they there? Why is there such a thing as iron or copper? Also why is there such a thing as metal? (The question isn't, "why is metal shiny?" or "why is metal conductive?" or "why is metal hard?", but rather, "why, among the elements, do those features correlate?".)

Is there something in common between the answers to "why is there such a thing as metal?" and "why is there such a thing as the human eye?"?

Comment by TekhneMakre on [deleted post] 2021-04-03T04:37:02.627Z

Notation: L(X) means 10^X. Also written LX. So L1 means 10, L2 means a hundred, L6 means a million.

L.05 ~= 1.12

L.1 ~= 1.26 ~= 5/4

L.2 ~= 1.6

L.3 ~= 2

L.4 ~= 2.5

L.5 ~= 3.2

L.6 ~= 4

L.7 ~= 5

L.8 ~= 6.3

L.9 ~= 8

LX * LY = L(X+Y)

E.g. L(X+.3) ~= 2LX, and L(X+1) = 10LX

LX + LY ~= LX if X > Y, off by a relative error of about L(Y-X).

E.g. L2 + L0 ~= L2, i.e. 100 + 1 ~= 100, with relative error L-2 = 1/100.

L-X = 1 / LX

centimeter = L-2m

mm = L-3m

nm = L-9m, etc.

L.(0)ⁿ1 ~= 1.(0)ⁿ23 (n zeros on each side; e.g. L.001 ~= 1.0023)

L is for logarithm (base 10) because we're in logspace; maybe E would be better but I like L better for some reason, maybe because E already means expectation and e looks like the number e.
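
A minimal Python sketch of the notation (the function names `L` and `to_L` are mine), mostly to spot-check the table and rules above:

```python
import math

def L(x: float) -> float:
    """L(x) = 10**x; e.g. L(2) = 100, L(-3) = 0.001."""
    return 10.0 ** x

def to_L(n: float) -> float:
    """Inverse: the L-value (base-10 log) of an ordinary number."""
    return math.log10(n)

assert abs(L(0.3) - 2) < 0.01           # L.3 ~= 2
assert abs(L(0.7) - 5) < 0.02           # L.7 ~= 5
assert math.isclose(L(2) * L(3), L(5))  # LX * LY = L(X+Y)

# LX + LY ~= LX when X > Y, with relative error ~ L(Y-X):
x, y = 2, 0
rel_err = (L(x) + L(y)) / L(x) - 1
assert math.isclose(rel_err, L(y - x))  # 100 + 1 is off from 100 by 1/100
```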

Comment by TekhneMakre on [deleted post] 2021-04-02T21:46:21.805Z

Just because someone is right about something or is competent at something, doesn't mean you have to or ought to: do what they do; do what they tell you to do; do what's good for them; do what they want you to do; do what other people think that person wants you to do; be included in their plans; be included in their confidence; believe what they believe; believe important what they believe important. If you don't keep this distinction, then you might have a bucket error about "X is right about / good at Y" and "I have to Z" for some Z mentioned above, and Z might require a bunch of bad stuff, and so you will either not want to admit that X is good at Y, or else you will stop tracking in general when people are good at Y, or stop thinking Y matters (whereas by default you did think Y matters). Meritocracy (rule of the meritorious) isn't the same thing as... meritognosis(?) (knowing who is meritorious). In general, -cracy is only good in some situations.

Comment by TekhneMakre on [deleted post] 2020-01-02T00:11:35.517Z

x

Comment by TekhneMakre on [deleted post] 2019-12-11T10:48:35.281Z

x

Comment by TekhneMakre on [deleted post] 2019-12-10T11:05:26.761Z

x

Comment by TekhneMakre on [deleted post] 2019-09-29T00:15:12.119Z

x

Comment by TekhneMakre on [deleted post] 2019-09-28T20:29:51.626Z

x

Comment by TekhneMakre on Rationality, Levels of Intervention, and Empiricism · 2019-09-26T07:21:02.915Z · LW · GW

x

Comment by TekhneMakre on Rationality, Levels of Intervention, and Empiricism · 2019-09-26T03:34:14.018Z · LW · GW

x

Comment by TekhneMakre on [deleted post] 2019-09-24T22:27:27.924Z

x

Comment by TekhneMakre on [deleted post] 2019-09-23T19:59:10.062Z

x

Comment by TekhneMakre on [deleted post] 2019-09-23T02:25:39.262Z

x

Comment by TekhneMakre on [deleted post] 2019-09-23T00:56:19.724Z

x

Comment by TekhneMakre on [deleted post] 2019-09-22T00:02:04.308Z

x

Comment by TekhneMakre on [deleted post] 2019-09-21T09:00:05.351Z

x

Comment by TekhneMakre on [deleted post] 2019-09-21T06:22:18.860Z

x

Comment by TekhneMakre on [deleted post] 2019-09-12T07:55:21.788Z

x

Comment by TekhneMakre on [deleted post] 2019-09-09T04:43:34.400Z

x