**johnicholas**on Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime · 2013-11-07T12:38:28.248Z · score: -1 (1 votes) · LW · GW

If humans are bad at mental arithmetic, but good at, say, not dying - doesn't that suggest that, as a practical matter, humans should try to rephrase mathematical questions into questions about danger?

E.g. Imagine stepping into a field crisscrossed by dangerous laser beams arranged according to the prime numbers, in order to reach something valuable. I think someone who had a realistic fear of the laser beams, and a realistic understanding of the benefit of that valuable thing, would slow down, or stop stepping into suspicious spots altogether.
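The title's "99.99% (or more!) certain" can be cashed out with a probabilistic primality test. A minimal sketch (the function name and round count are my own, not from the post): each passing round with a random base makes compositeness less likely, so certainty accumulates without ever doing a full proof.

```python
import random

def fermat_probably_prime(n, rounds=20):
    """Fermat primality test: each passing round with a random base makes
    compositeness less likely (for non-Carmichael numbers)."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is certainly composite
    return True  # probably prime

print(fermat_probably_prime(53))  # 53 passes every round
```

With 20 rounds, a composite (non-Carmichael) number slips through with probability at most about one in a million, which is the "99.99% or more" of the title.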

Quantifying is ONE technique, and it's been used very effectively in recent centuries - but those successes were inside a laboratory / factory / automation structure, not in an individual-rationality context.

**johnicholas**on The Robots, AI, and Unemployment Anti-FAQ · 2013-07-24T15:13:05.157Z · score: 3 (7 votes) · LW · GW

Steve Keen's Debunking Economics blames debt, not automation.

Essentially, many people currently feel that they are deep in debt, and work to get out of debt. Keen has an ODE model of the macroeconomy that shows various behaviors, including debt-driven crashes.
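As an illustration of the kind of dynamic such ODE models capture (this toy system is purely illustrative and is NOT Keen's actual model; every parameter here is my own invention): when the interest rate exceeds the growth rate and net borrowing is positive, the debt-to-output ratio climbs without bound.

```python
def simulate(years=50, dt=0.01, g=0.03, r=0.05, borrow=0.10, repay=0.06):
    """Toy debt dynamics (illustrative only): output Y grows at rate g;
    debt D accrues interest at rate r plus new borrowing, less repayments
    made out of output. Integrated with simple Euler steps."""
    Y, D = 1.0, 0.5
    ratios = []
    for _ in range(int(years / dt)):
        dY = g * Y
        dD = r * D + borrow * Y - repay * Y
        Y += dY * dt
        D += dD * dt
        ratios.append(D / Y)  # debt-to-output ratio
    return ratios

ratios = simulate()
# Because r > g and borrow > repay, the ratio rises monotonically -
# the 'debt overhang' intuition behind debt-driven crash stories.
```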

Felix Martin's Money goes further and argues that strong anti-inflation stances by central bank regulators strengthen the hold of creditors over debtors, which has made these recent crashes bigger and more painful.

**johnicholas**on Kevin T. Kelly's Ockham Efficiency Theorem · 2013-05-19T17:38:02.304Z · score: 0 (0 votes) · LW · GW

The statements, though contradictory, refer to two different thought experiments.

**johnicholas**on Kevin T. Kelly's Ockham Efficiency Theorem · 2013-05-18T13:55:13.441Z · score: 0 (0 votes) · LW · GW

The two comments, though contradictory, refer to two different thought experiments.

**johnicholas**on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-17T23:26:33.975Z · score: 0 (0 votes) · LW · GW

Is it reasonable to take this as evidence that we shouldn't use expected utility computations, or not only expected utility computations, to guide our decisions?

If I understand the context, the reason we believed an entity, whether human or AI, ought to use expected utility as a practical decision-making strategy is that it would yield good results (a simple, general architecture for decision making). If there are fully general attacks (muggings) on all entities that use expected utility as a practical decision-making strategy, then perhaps we should revise the original hypothesis.

Utility as a theoretical construct is charming, but it does have to pay its way, just like anything else.

P.S. I think the reasoning from "bounded rationality exists" to "non-Bayesian mind changes exist" is good stuff. Perhaps we could call this "on seeing this, I become willing to revise my model" phenomenon something like "surprise", and distinguish it from merely new information.

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-12T16:28:22.123Z · score: 1 (1 votes) · LW · GW

Magic Haskeller and Augustsson's Djinn are provers (or to say it another way, comprehensible as provers, or to say it another way, isomorphic to provers). They attempt to prove the proposition, and if they succeed they output the term corresponding (via the Curry-Howard Isomorphism) to the proof.

I believe they cannot output a term t :: a->b because there is no such term, because 'anything implies anything else' is false.
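Djinn-style proof search can be sketched in miniature. The following is a hedged sketch (not Djinn's or Magic Haskeller's actual algorithm) of a prover for the implicational fragment: under Curry-Howard, the type a -> b is inhabited exactly when "a implies b" is intuitionistically provable, and the search below confirms it is not.

```python
# Types: an atom is a string; ('->', s, t) is the function type s -> t.
# A type is inhabited iff the corresponding proposition is provable.

def spine(t):
    """Split s1 -> s2 -> ... -> atom into ([s1, s2, ...], atom)."""
    args = []
    while isinstance(t, tuple):
        args.append(t[1])
        t = t[2]
    return args, t

def provable(ctx, goal, seen=frozenset()):
    if isinstance(goal, tuple):              # to prove s -> t: assume s, prove t
        return provable(ctx | {goal[1]}, goal[2], seen)
    key = (ctx, goal)
    if key in seen:                          # loop check ensures termination
        return False
    seen = seen | {key}
    for h in ctx:                            # try hypotheses whose head matches
        args, head = spine(h)
        if head == goal and all(provable(ctx, a, seen) for a in args):
            return True
    return False

def arrow(s, t):
    return ('->', s, t)

provable(frozenset(), arrow('a', 'b'))   # False: a -> b has no inhabitant
provable(frozenset(), arrow('a', 'a'))   # True: the identity term
```

It also correctly rejects Peirce's law ((a -> b) -> a) -> a, which is classically valid but has no lambda-term inhabitant.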

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-12T00:19:48.484Z · score: 1 (1 votes) · LW · GW

The type constructors that you're thinking of are Arrow and Int. Forall is another type constructor, for constructing generic polymorphic types. Some types such as "Forall A, Forall B, A -> B" are uninhabited. You cannot produce an output of type B in a generic way, even if you are given access to an element of type A.

The type corresponding to a proposition like "all computable functions from the reals to the reals are continuous" looks like a function type consuming some representation of "a computable function" and producing some representation of "that function is continuous". To represent that, you probably need dependent types - this would be a type constructor that takes an element, not a type, as a parameter. Because not all functions are continuous, the codomain representing "that function is continuous" isn't in general inhabited. So building an element of that codomain is not necessarily trivial - and the process of doing so amounts to proving the original proposition.

What I'm trying to say is that the types that type theorists use look a lot more like propositions than the types that mundane programmers use.

**johnicholas**on Forcing Anthropics: Boltzmann Brains · 2013-01-08T20:45:11.655Z · score: 0 (0 votes) · LW · GW

I think you may be sincerely confused. Would you please reword your question?

If your question is whether someone (either me or the OP) has committed a multiplication error - yes, it's entirely possible, but multiplication is not the point - the point is anthropic reasoning and whether "I am a Boltzmann brain" is a simple hypothesis.

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-07T16:02:28.763Z · score: 0 (0 votes) · LW · GW

The arithmetical hierarchy is presuming a background of predicate logic; I was not presuming that. Yes, the type theory that I was gesturing towards would have some similarity to the arithmetical hierarchy.

I was trying to suggest that the answer to "what is a prediction" might look like a type theory of different variants of a prediction. Perhaps a linear hierarchy like the arithmetical hierarchy, yes, or perhaps something more complicated. There could be a single starting type "concrete prediction" and a type constructor that, given a source type and a destination type, gives the type of statements defining functions that take arguments of the source type and give results of the destination type.

The intuitive bridge for me from ultrafinitism to the question "what counts as a prediction?" is that ultrafinitists do happily work with entities like 2^1000 considered as a structure like ("^" "2" "1000"), even if they deny that those structures refer to things that are definitely numbers (of course, they can agree that they are definitely "numbers", given an appropriately finitist definition of "number"). Maybe extreme teetotalers regarding what counts as a prediction would happily work with things such as computable functions returning predictions, even if they don't consider them predictions.

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-07T14:01:00.236Z · score: 1 (1 votes) · LW · GW

Perhaps there is a type theory for predictions, with concrete predictions like "The bus will come at 3 o'clock", and functions that output concrete predictions like "Every Monday, Wednesday and Friday, the bus will come at 3 o'clock" (consider the statement as a function taking a time and returning a concrete yes-or-no prediction).

An ultrafinitist would probably not argue with the existence of such a function, even though to someone other than an ultrafinitist, the function looks like it is quantifying over all time. From the ultrafinitist's point of view, you're going to apply it to some concrete time, and at that point it's going to output some concrete prediction.
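The bus example can be written down directly; under the reading above, the schedule-level statement is a function that yields a concrete prediction only when applied to a concrete time (the names here are illustrative, a sketch of the proposed distinction):

```python
from typing import Callable

# A concrete prediction is a checkable yes/no claim about a specific time.
# A schedule-level statement is NOT itself a concrete prediction; it is a
# function that, applied to a concrete time, yields one.
SchedulePrediction = Callable[[str], bool]

def bus_at_three(day_of_week: str) -> bool:
    """'Every Monday, Wednesday and Friday, the bus will come at 3 o'clock',
    read as a function from a day to a concrete yes-or-no prediction."""
    return day_of_week in ("Monday", "Wednesday", "Friday")

bus_at_three("Monday")   # a concrete prediction: yes
bus_at_three("Tuesday")  # a concrete prediction: no
```

Nothing in the function's definition quantifies over an actual infinity of times; the apparent universal quantifier only ever gets applied to concrete arguments.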

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-06T18:45:19.436Z · score: 3 (3 votes) · LW · GW

The prevalence of encodings means that we might not be able to "build machines with one or the other". That is, given that there are basic alternatives A and B and A can encode B and B can encode A, it would take a technologist specializing in hair-splitting to say whether a machine that purportedly is using A is "really" using A at its lowest level or whether it is "actually" using B and only seems to use A via an encoding.

If in the immediate term you want to work with many-sorted first order logic, a reasonable first step would be to implement single-sorted first order logic, and a preprocessor to translate the many-sorted syntax into the single-sorted syntax. Then further development, optimizations and so on, might complicate the story, compacting the preprocessor and single-sorted engine into a single technology that you might reasonably say implements many-sorted logic "natively".
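The preprocessor step is the standard relativization translation: each sorted quantifier becomes an unsorted quantifier guarded by a sort predicate. A minimal sketch over a toy formula representation (the tuple AST encoding is my own invention, not any particular library's):

```python
def relativize(f):
    """Translate a many-sorted formula into single-sorted form:
    ('forall', x, sort, body) becomes a plain quantifier with the body
    guarded by an implication from the sort predicate; 'exists' is
    guarded by a conjunction instead. Formulas are nested tuples."""
    if not isinstance(f, tuple):
        return f
    if f[0] == 'forall':
        _, var, sort, body = f
        return ('forall', var, ('implies', (sort, var), relativize(body)))
    if f[0] == 'exists':
        _, var, sort, body = f
        return ('exists', var, ('and', (sort, var), relativize(body)))
    return tuple(relativize(x) for x in f)

# forall n:Nat. Even(n) or Odd(n)
f = ('forall', 'n', 'Nat', ('or', ('Even', 'n'), ('Odd', 'n')))
print(relativize(f))
# ('forall', 'n', ('implies', ('Nat', 'n'), ('or', ('Even', 'n'), ('Odd', 'n'))))
```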

The prevalence of encodings means that apparently different foundations don't always make a difference.

**johnicholas**on Re: Second-Order Logic: The Controversy · 2013-01-05T16:35:39.575Z · score: 3 (3 votes) · LW · GW

It seems to me like this discussion has gotten too far into either/or "winning". The prevalence of encodings in mathematics and logic (e.g. encoding the concept of pairing in set theory by defining (a, b) to be the set {{a}, {a, b}}, the double negation encoding of classical logic inside intuitionist logic, et cetera) means that the things we point to as "foundations" such as the ZF axioms for set theory are not actually foundational, in the sense of necessary and supporting. ZF is one possible system which is sufficiently strong to encode the things that we want to do. There are many other systems which are also sufficiently strong, and getting multiple perspectives on a topic by starting from different alternative "foundations" is probably a good thing. You do not have to choose just one!
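The pairing encoding mentioned above can be checked mechanically; a sketch using frozensets to stand in for pure sets (the accessor functions are my own illustration):

```python
def pair(a, b):
    """Kuratowski encoding of an ordered pair as a pure set:
    (a, b) = {{a}, {a, b}}."""
    return frozenset([frozenset([a]), frozenset([a, b])])

def first(p):
    # the element common to every set in the pair
    (x,) = frozenset.intersection(*p)
    return x

def second(p):
    u = frozenset.union(*p)
    rest = u - frozenset([first(p)])
    return next(iter(rest)) if rest else first(p)  # handles (a, a)

p = pair(1, 2)
first(p), second(p)  # recovers (1, 2)
```

The defining property of ordered pairs, that pair(a, b) == pair(c, d) exactly when a == c and b == d, falls out of plain set equality; order is encoded even though sets are unordered.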

Another way to look at it is - the things that we call foundations of mathematics are like the Planck length - yes, according to our current best theory, that's the smallest, most basic element. However, that doesn't mean it is stable; rather, it's cutting edge (admittedly, Planck length is more at the cutting edge of physics than foundations of mathematics are at a cutting edge of mathematics). The stable things are more like cubits and inches - central, human-scale "theorems" such that if a new alternative foundation of mathematics contradicted them, we would throw away the foundation rather than the theorem. Some candidate "cubit" or "inch" theorems (really, arguments) might be the infinitude of primes, or the irrationality of the square root of 2.

Friedman and Simpson have a program of "Reverse Mathematics", studying in what way facts that we normally think of as theorems, like the Heine-Borel theorem, pin down things that we normally think of as logic axioms, such as induction schemas. I really think that Simpson's paper "Partial Realizations of Hilbert's Program" is insightful and helpful to this discussion: http://www.math.psu.edu/simpson/papers/hilbert.ps

Lakatos's "Proofs and Refutations" is also helpful, explaining in what sense a flawed and/or informal argument is helpful - Note that Lakatos calls arguments proofs, even when they're flawed and/or informal, implicitly suggesting that everyone calls their arguments proofs until the flaws are pointed out. I understand he retreated from that philosophical position later in life, but that pre-retreat philosophical position might still be tenable.

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-05T14:43:04.009Z · score: 7 (7 votes) · LW · GW

Feeling our way into a new formal system is part of our (messy, informal) pebblecraft. Sometimes people propose formal systems starting with their intended semantics (roughly, model theory). Sometimes people propose formal systems starting with introduction and elimination rules (roughly, proof theory). If the latter, people sometimes look for a semantics to go along with the syntax (and vice versa, of course).

For example, lambda calculus started with rules for performing beta reduction. In talking about lambda calculus, people refer to it as "functions from functions to functions". Mathematicians knew that there was no nontrivial set S with a bijection to the set of all functions from S to S. So something else must be going on. Dana Scott invented domain theory partially to solve this problem. Domains have additional structure, such that some domains D do have bijections with the set of all *structure-preserving* maps from D to D. http://en.wikipedia.org/wiki/Domain_theory Similarly, modal logics started with just proof theory, and then Saul Kripke invented a semantics for them. http://en.wikipedia.org/wiki/Kripke_semantics
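The "functions from functions to functions" reading can be illustrated with Church numerals, the standard lambda-calculus encoding of numbers, sketched here in Python:

```python
# Church numerals: a number n is the function that applies f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Interpret a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
to_int(three)  # 3
```

Every object in sight, including the "numbers" themselves, is a function taking functions to functions, which is exactly the talk that made mathematicians reach for domain theory to find a set-like model.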

There's always a syntactical model which you can construct mechanically from the proof theory (at least, that's my understanding of the downward Lowenheim-Skolem argument) - but Scott and Kripke were not satisfied with that syntactical model, and went looking for something else, more insightful and perspicuous. Adding a "semantic" understanding can increase our (informal) confidence in a formal system - even a formal system that we were already working with. I'm not sure why it helps, but I think it's part of our pebblecraft that it does help.

Perhaps adding a "semantic" understanding is like another bridge between informal and formal reasoning. These bridges are only partly formal - they're also partly informal, the concepts and gesturing around the equations and proofs. Having one bridge is sufficient to go onto the Island of Formality, do some stuff, and come off again, but it might be more convenient to have two, or three.

**johnicholas**on Second-Order Logic: The Controversy · 2013-01-04T20:58:02.332Z · score: 10 (10 votes) · LW · GW

This and previous articles in this series emphasize attaching meaning to sequences of symbols via discussion and gesturing toward models. That strategy doesn't seem compatible with your article regarding sheep and pebbles. Isn't there a way to connect sequences of symbols to sheep (and food and similar real-world consequences) directly via a discipline of "symbolcraft"?

I thought pebblecraft was an excellent description of how finitists and formalists think about confusing concepts like uncountability or actual infinity: Writing down "... is uncountably infinite ..." is part of our pebblecraft, and we have some confidence that when we get to the bottom line of the formal argument, we'll be able to interpret the results "in the usual way". The fact that someone (perhaps an alien) might watch our pebblecraft and construe us (in medias res) to be referring to (abstract, mathematical) things other than the (abstract, mathematical) things that we usually construe ourselves to be referring to doesn't stop the pebblecraft magic from working.

**johnicholas**on Godel's Completeness and Incompleteness Theorems · 2012-12-28T17:35:05.981Z · score: 3 (3 votes) · LW · GW

Mathematics and logic are part of a strategy that I'll call "formalization". Informal speech leans on (human) biological capabilities. We communicate ideas, including ideas like "natural number" and "set" using informal speech, which does not depend on definitions. Informal speech is not quite pointing and grunting, but pointing and grunting is perhaps a useful cartoon of it. If I point and grunt to a dead leaf, that does not necessarily pin down any particular concept such as "dead leaves". It could just as well indicate the concept "things which are dry on top and wet on bottom" - including armpits. There's a Talmudic story about a debate that might be relevant here. Nevertheless (by shared biology) informal communication is possible.

In executing the strategy of formalization, we do some informal argumentation to justify and/or attach meaning to certain symbols and/or premises and/or rules. Then we do a chain of formal manipulations. Then we do some informal argumentation to interpret the result.

Model theory is a particular strategy of justifying and/or attaching meaning to things. It consists of discussing things called "models", possibly using counterfactuals or possible worlds as intuition boosters. Then it defines the meaning of some strings of symbols first by reference to particular models, and then "validity" by reference to all possible models, and argues that certain rules for manipulating (e.g. simplifying) strings of symbols are "validity-preserving".

Model theory is compelling, perhaps because it seems to offer "thicker" foundations. But you do not have to go through model theory in order to do the strategy of formalization. You can justify and/or attach meaning to your formal symbols and/or rules directly. A simple example is if you write down sequences of symbols, explain how to "pronounce" them, and then say "I take these axioms to be self-evident.".

One problem with model theory from my perspective is that it puts too much in the metatheory (the informal argumentation around the formal symbol-manipulation). Set theory seems to me like something that should be in the formal (even, machine-checkable?) part, not in the metatheory. It's certainly possible to have two layers of formality, which in principle I have no objection to, but model theoretical arguments often seem to neglect the (informal) work to justify and attach meaning to the outer layer. Furthermore, making the formal part more complicated has costs.

I think this paper by Simpson, "Partial Realizations of Hilbert's Program", might be illuminating.

**johnicholas**on Godel's Completeness and Incompleteness Theorems · 2012-12-26T15:36:34.487Z · score: 8 (8 votes) · LW · GW

I'm concerned that you're pushing second-order logic too hard, using a false fork: such-and-so cannot be done in first-order logic, therefore second-order logic. "Second order" logic is a particular thing - for example, it is a logic based on model theory. http://en.wikipedia.org/wiki/Second-order_logic#History_and_disputed_value

There are lots of alternative directions to go when you go beyond the general consensus of first-order logic. Freek Wiedijk's paper "Is ZF a hack?" is a great tour of alternative foundations of mathematics - first-order logic and the Zermelo-Fraenkel axioms for set theory turn out to be the smallest by many measures, but the others are generally higher order in one way or another. http://www.cs.ru.nl/~freek/zfc-etc/zfc-etc.pdf

"First order logic can't talk about finiteness vs. infiniteness." is sortof true in that first-order logic in a fixed language is necessarily local - it can step from object to object (along relation links) but the number of steps bounded by the size of the first-order formula. However, a standard move is to change the language, introducing a new sort of object "sets" and some axioms regarding relations between the old objects and the new objects such as inclusion, and then you can talk about properties of these new objects such as finiteness and infiniteness. Admittedly this standard move is informal or metatheoretic; we're saying "this theory is the same as that theory except for these new objects and axioms".

This is what the Zermelo-Fraenkel axioms do, and the vast majority of mathematics can be done within ZF or ZFC. Almost any time you encounter "finiteness" and "infiniteness" in a branch of mathematics other than logic, they are formalizable as first-order properties of first-order entities called sets.

**johnicholas**on Standard and Nonstandard Numbers · 2012-12-20T14:34:08.634Z · score: 0 (2 votes) · LW · GW

If you taboo one-predicate 'matter', please specialize the two-place predicate (X matters to Y) to Y = "the OP's subsequent use of this article", and use the resulting one-place predicate.

I am not worried about apparent circularity. Once I internalized the Lowenheim-Skolem argument that first-order theories have countable "non-standard" models, then model theory dissolved for me. The syntactical / formalist view of semantics, that what mathematicians are doing is manipulating finite strings of symbols, is always a perfectly good model, in the model theoretic sense. If you want to understand what the mathematician is doing, you may look at what they're doing, rather than taking them at their word and trying to boggle your imagination with images of bigness. Does dissolving model theory matter?

There's plenty of encodings in mathematics - for example, using first-order predicate logic and the ZFC axioms to talk about second-order logic, or putting classical logic inside of intuitionistic logic with the double negation translation. Does the prevalence of encodings (analogous to the Turing Tarpit) matter?

Formal arguments, to be used in the real world, occur as the middle of an informal sandwich - first there's an informal argument that the premises are appropriate or reasonable, and third there's an informal argument interpreting the conclusion. I understand the formal part of this post, but I don't understand the informal parts at all. Nonstandard (particularly countable) models are everywhere and unavoidable (analogously Godel showed that true but unprovable statements are everywhere and unavoidable). Against that background, the formal success of second-order logic in exiling nonstandard models of arithmetic doesn't seem (to me) a good starting point for any third argument.

**johnicholas**on Standard and Nonstandard Numbers · 2012-12-20T05:36:21.037Z · score: 4 (4 votes) · LW · GW

Does it matter if you don't have formal rules for what you're doing with models?

Do you expect what you're doing with models to be formalizable in ZFC?

Does it matter if ZFC is a first-order theory?

**johnicholas**on Mixed Reference: The Great Reductionist Project · 2012-12-05T13:04:23.623Z · score: 2 (2 votes) · LW · GW

It may not be possible to draw a sharp line between things that exist from the things that do not exist. Surely there are problematic referents ("the smallest triple of numbers in lexicographic order such that a^3+b^3=c^3", "the historical jesus", "the smallest pair of numbers in lexicographic order such that a^3+24=c^2", "shakespeare's firstborn child") that need considerable working with before ascertaining that they exist or do not exist. Given that difficulty, it seems like we work with existence explicitly, as a theory; it's not "baked in" to human reasoning.
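For referents like "the smallest pair of numbers in lexicographic order such that a^3+24=c^2", a brute-force search settles existence, at least within a bound (the bound and function name are my own):

```python
def smallest_mordell_pair(bound=50):
    """Search for the lexicographically smallest pair of positive integers
    (a, c) with a^3 + 24 = c^2 - one of the 'problematic referents' above,
    where it takes actual work to learn whether the referent exists."""
    for a in range(1, bound):          # ascending a, then ascending c,
        for c in range(1, bound):      # gives the lexicographic minimum
            if a**3 + 24 == c**2:
                return (a, c)
    return None

smallest_mordell_pair()  # (1, 5), since 1 + 24 = 25
```

This referent happens to exist; "the smallest triple such that a^3+b^3=c^3" does not, and no bounded search can tell you that by itself - which is the point about existence being a theory we work with, not a primitive.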

Guy Steele wrote a talk called "Growing a Language", where one of his points is that building hooks (such as functions) into the language definition to allow the programmer to grow the language is more important than building something that is often useful, say, complex numbers or a rich collection of string manipulation primitives. Maybe talking about the structure of "theories of X" would be valuable. Perhaps all theories have examples (including counterexamples as a specific kind of example) and rules (including definitions as a specific kind of rule) - that's the kind of thing that I'm suggesting might be more like a hook.

**johnicholas**on Causal Universes · 2012-11-28T19:20:02.444Z · score: -1 (1 votes) · LW · GW

One model for time travel might be a two-dimensional piece of paper with a path or paths drawn wiggling around on it. If you scan a "current moment" line across the plane, then you see points dancing. If a line and its wiggles are approximately perpendicular to the line of the current moment, then the dancing is local and perhaps physical. Time travel would be a sigmoid line: first a "spontaneous" creation of a pair of points, then the cancellation of one ("reversed") point with the original point.

An alternative story is of a line next to a loop - spontaneous creation of two "virtual" particles, one reversed, followed by those two cancelling.

Would J.K. Rowling's book be causal if we add to the lore that Time Turners are well understood to cause (unasked) "virtual bearers" very like their bearers? The virtual bearers could be reified by the real bearer using the time turner, or if they are not reified, they will "cancel" themselves by using their own, virtual, time turner.

I think this addition changes the time turner from a global / acausal constraint on possible universes to a local / causal constraint, but I could very well be mistaken. Note that the reversed person is presumably invisible, but persuasive - perhaps they are a disjunctive geas of some sort.

**johnicholas**on Logical Pinpointing · 2012-11-11T20:48:26.405Z · score: 0 (0 votes) · LW · GW

I understand your point - it's akin to the Box quote "all models are wrong but some are useful" - when choosing among (false) models, choose the most useful one. However, it is not the case that stronger assumptions are more useful - of course stronger assumptions make the task of proving easier, but the task as a whole includes both proving and also building a system based on the theorems proven.

My primary point is that EY is implying that second-order logic is necessary to work with the integers. People work with the integers without using second-order logic all the time. If he said that he is only introducing second-order logic for convenience in proving and there are certainly other ways of doing it, and that some people (intuitionists and finitists) think that introducing second-order logic is a dubious move, I'd be happy.

My other claim that second-order logic is unphysical and that its unphysicality probably does ripple out to make practical tasks more difficult, is a secondary one. I'm happy to agree that this secondary claim is not mainstream.

**johnicholas**on Logical Pinpointing · 2012-11-11T15:37:04.641Z · score: 1 (3 votes) · LW · GW

If you were writing software for something intended to traverse the Interplanetary Transport Network, then you would probably use charts and atlases and transition functions, and you would study (symplectic) manifolds and homeomorphisms in order to understand those more-applied concepts.

If an otherwise useful theorem assumes that the manifold is orientable, then you need to show that your practical manifold is orientable before you can use it - and if it turns out not to be orientable, then you can't use it at all. If instead you had an analogous theorem that applied to all manifolds, then you could use it immediately.

**johnicholas**on Logical Pinpointing · 2012-11-10T17:37:00.121Z · score: 2 (6 votes) · LW · GW

If you assume A and derive B you have not proven B but rather A implies B. If you can instead assume a weaker axiom Aprime, and still derive B, then you have proven Aprime implies B, which is stronger because it will be applicable in more circumstances.

**johnicholas**on Logical Pinpointing · 2012-11-10T15:56:12.065Z · score: 0 (2 votes) · LW · GW

I agree with this statement - and yet you did not contradict my statement that second order logic is also not part of mainstream mathematics.

A topologist might care about manifolds or homeomorphisms - they do not care about foundations of mathematics - and it is not the case that only one foundation of mathematics can support topology. The weaker foundation is preferable.

**johnicholas**on Logical Pinpointing · 2012-11-09T21:06:01.060Z · score: 1 (9 votes) · LW · GW

Second-order logic is not part of standard, mainstream mathematics. It is part of a field that you might call mathematical logic or "foundations of mathematics". Foundations of a building are relevant to the strength of a building, so the name implies that foundations of mathematics are relevant to the strength of mainstream mathematics. A more accurate analogy would be the relationship between physics and philosophy of physics - discoveries in epistemology and philosophy of science are more often driven by physics than the other way around, and the field "philosophy of physics" is a backwater by comparison.

As is probably evident, I think the good, solid mathematical logic is intuitionist and constructive and higher-order and based on proof theory first and model theory only second. It is easy to analogize from their names to a straight line between first-order, second-order, and higher-order logic, but in fact they're not in a straight line at all. First-order logic is mainstream mathematics, second-order logic is mathematical logic flavored with faith in the reality of infinite models and set theory, and higher-order logic is mathematical logic that is (usually) constructive and proof-theoretic and built with an awareness of computer science.

**johnicholas**on Logical Pinpointing · 2012-11-07T16:54:06.035Z · score: -2 (8 votes) · LW · GW

EY is talking from a position of faith that infinite model theory and second-order logic are good and reasonable things.

It is possible to instead start from a position of doubt that infinite model theory and second-order logic are good and reasonable things (based on my memory of having studied in college whether model theory and second-order logic can be formalized within Zermelo-Fraenkel set theory, and what the first-order-ness of Zermelo-Fraenkel has to do with it).

We might be fine with a proof-theoretic approach, which starts with the same ideas "zero is a number", "the successor of a number is a different number", but then goes to a proof-theoretic rule of induction, something like: "I'd be happy to say 'All numbers have such-and-such property' if there were a proof that zero has that property, and also a proof that if a number has that property, then its successor has that property."

We don't need to talk about models at all - in particular we don't need to talk about infinite models.
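Read computationally (via Curry-Howard), the proof-theoretic induction rule is just a program: given evidence for zero, and a step that turns evidence for n into evidence for n+1, it manufactures evidence for any concrete numeral, with no mention of models. A sketch, with a deliberately toy notion of "evidence":

```python
def induction(base, step):
    """Proof-theoretic induction as a program: from evidence for 0 and a
    step taking (n, evidence-for-n) to evidence-for-(n+1), build evidence
    for any concrete numeral by iterating the step."""
    def prove(n):
        ev = base
        for k in range(n):
            ev = step(k, ev)
        return ev
    return prove

# Toy 'property': the sum 0+1+...+n; the running sum itself is the evidence.
prove = induction(0, lambda k, ev: ev + (k + 1))
prove(10)  # 55
```

The universal claim "all numbers have the property" is cashed out as the ability to run this loop for whichever concrete n is asked about.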

Second-order arithmetic is sufficient to get what EY wants (a nice pretty model universe), but I have two objections. First, it is too strong - the first sufficiently strong hammer you find in mathematics is rarely the one you should end up using. Second, the goal of a nice pretty model universe presumes a stance of faith in (infinite) model theory, but the infinite model theory is not formalized. If you do formalize it, then your formalization will have alternative "undesired" interpretations (by Lowenheim-Skolem).

**johnicholas**on Proofs, Implications, and Models · 2012-10-29T02:22:35.046Z · score: -10 (16 votes) · LW · GW

Uncountability is an idea that skeptics/rationalists/less-wrongers should treat with a grain of salt.

Inside of a particular system, you might add an additional symbol, "i", with the assumed definitional property that "i times i plus one equals zero". This is how we make complex numbers. We might similarly add "infinity", with an assumed definitional property that "infinity plus one equals infinity". Depending on the laws that we've already assumed, this might form a contradiction or cause everything to collapse, but in some situations it's a reasonable thing to do. This new symbol, "infinity" can be definitionally large in magnitude without being large in information content - bits or bytes required to communicate it.
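The "infinity" symbol idea can be made concrete: a one-token object whose defining law is that adding one leaves it fixed (the class below is my own illustration of the point, not a standard library facility):

```python
class Extended:
    """Naturals extended with a formal 'infinity' symbol: large in defined
    magnitude, tiny in information content (a single token)."""
    def __init__(self, finite=None):
        self.finite = finite               # None means 'infinity'
    def __add__(self, other):
        if self.finite is None or other.finite is None:
            return Extended(None)          # infinity + anything = infinity
        return Extended(self.finite + other.finite)
    def __eq__(self, other):
        return self.finite == other.finite

INF = Extended(None)
one = Extended(1)
(INF + one) == INF   # True: the defining property 'infinity + 1 = infinity'
```

Notice that INF takes a few bytes to store and transmit, however "big" it is definitionally - which is exactly the magnitude/information-content distinction in the next paragraph.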

A specific generic member of an uncountable set is infinitely large in information content - that means it would take infinite time to communicate at finite bandwidth, infinite volume to store at finite information density. It is utterly impractical.

It is reasonable (because it is true) to believe that every mathematical object that a mathematician has ever manipulated was of finite information content. The set of all objects of finite information content is merely countable, not uncountable.

What I'm trying to say is: Don't worry about uncountability. It isn't physical and it isn't practical. Mathematicians may be in love with it (Hilbert's famous quote: "No one shall expel us from the Paradise that Cantor has created."), but the concept is a bit like that "infinity" symbol - infinite in magnitude, not in information content.

**johnicholas**on Babies and Bunnies: A Caution About Evo-Psych · 2012-10-23T04:51:13.965Z · score: 0 (0 votes) · LW · GW

My understanding is that the essay's effect is via the horror a reader feels at the alternate-world presented in the essay. It opens the reader's eyes somewhat to the degree that sexism is embedded in everyday grammar and idiom. My understanding is that it is not a persuasive essay in the usual sense.

Please elaborate.

**johnicholas**on Stuff That Makes Stuff Happen · 2012-10-18T15:27:59.745Z · score: 0 (0 votes) · LW · GW

I agree that if you don't look at the numbers, but at the surrounding text, you get the sense that the numbers could be paraphrased in that way.

So does h, labeled "I hear universe", mean "I hear the universe tell me something at all", or "I hear the universe tell me that they love me", or "I hear the universe tell me what it knows, which (tacitly, according to the meaning of 'knows') is accurate"?

I thought it meant "I have a sensation as if the universe were telling me that they love me", but the highest probability scenarios are p&u&h, and -p&u&h, which would suggest that regardless of their love, I'm likely to experience a sensation as if the universe were telling me that they love me. That seems reasonable from a skeptical viewpoint, but not from a believer's viewpoint.

**johnicholas**on Stuff That Makes Stuff Happen · 2012-10-17T19:39:14.702Z · score: 6 (6 votes) · LW · GW

Twice in this article, there are tables of numbers. They're clearly made-up, not measured from experiment, but I don't really understand exactly how made-up they are - are they carefully or casually cooked?

Could future posts instead use letters (variables), with relations like 'a > b', 'a >> b', 'a/b > c' and so on? Then I could better understand which properties of the table are intentional.

**johnicholas**on Causal Diagrams and Causal Models · 2012-10-12T23:13:36.537Z · score: 4 (4 votes) · LW · GW

I think it would be valuable if someone pointed out that a third party watching, without controlling, a scientist's controlled study is in pretty much the same situation as the three-column exercise/weight/internet-use situation - they have exercise/weight/control-group instead.

This "observe the results of a scientist's controlled study" thought experiment motivates and provides hope that one can sometimes derive causation from observation, where the current story arc makes a sort of magical leap.

**johnicholas**on Skill: The Map is Not the Territory · 2012-10-06T15:22:34.144Z · score: 2 (2 votes) · LW · GW

There are some aspects of maps - for example, edges, blank spots, and so on - that seem, if not necessary, extremely convenient to keep as part of the map. However, if you use these features of a map the way you use most features of a map - to guide your actions - then you will not be guided well. There's something in the sequences like "the world is not mysterious", about people falling into the error of moving from blank/cloudy spots on the map to "inherently blank/cloudy" parts of the world.

The slogan "the map is not the territory" might encourage focusing on the delicate corrections necessary to act upon SOME aspects of one's representation of the world, but not act on other aspects which are actually intrinsic to the representation.

**johnicholas**on The Bayesian Agent · 2012-10-03T21:00:58.366Z · score: 1 (1 votes) · LW · GW

You might enjoy Crutchfield's epsilon machines, and Shalizi's CSSR algorithm for learning them:

http://masi.cscs.lsa.umich.edu/~crshalizi/notabene/computational-mechanics.html

**johnicholas**on New study on choice blindness in moral positions · 2012-09-21T12:28:40.515Z · score: 9 (9 votes) · LW · GW

There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?

What I'd like to see is more aboveboard stuff. Suppose that you notify someone that you're showing them possibly-altered versions of their responses. Can we identify which things were changed when explicitly alerted? Do we still confabulate (probably)? Are the questions that we still confabulate on questions that we're more uncertain about - more ambiguous wording, more judgement required?

**johnicholas**on Who Wants To Start An Important Startup? · 2012-08-17T01:03:09.221Z · score: 2 (2 votes) · LW · GW

Yes, I (and Stross) am taking auditors, internal and external, as a model. Why do you comment specifically on internal auditors?

**johnicholas**on Who Wants To Start An Important Startup? · 2012-08-14T19:24:29.684Z · score: 7 (7 votes) · LW · GW

There's a lot of similarity between the statistical tests that a scientist does and the statistical tests that auditors do. The scientist is testing that the effect is real; the auditor is testing that the company really is making that much money - that all its operations are getting aggregated correctly into the summary documents.

Charlie Stross has a character in his 'Rule 34', Dorothy Straight, who is an organization-auditor, auditing organizations for signs of antisocial behavior. As I understood it, she was asking whether the organizations as a whole are likely to behave badly - though one way that the organization as a whole might behave badly is by sifting out or creating leaders who are likely to individually behave badly.

What I'm trying to say is that there will be a field of auditing an organization's 'safety case' - examining why it believes that it is a Friendly organization, what its internal controls entangling it with the truth are and so on, something like GiveWell for for-profits.

**johnicholas**on Reinforcement Learning: A Non-Standard Introduction (Part 2) · 2012-08-03T22:19:20.373Z · score: 1 (1 votes) · LW · GW

As I understand it, you're dividing the agent from the world; once you introduce a reward signal, you'll be able to call it reinforcement learning. However, until you introduce a reward signal, you're not doing specifically reinforcement learning - everything applies just as well to any other kind of agent, such as a classical planner.

**johnicholas**on Reinforcement Learning: A Non-Standard Introduction (Part 1) · 2012-08-02T18:09:50.065Z · score: 1 (1 votes) · LW · GW

The arrows all mean the same thing, which is roughly 'causes'.

Chess is a perfect-information game, so you could build the board entirely from the player's memory of the board, but in general, the state of the world at time t-1, together with the player, causes the state of the world at time t.

**johnicholas**on Reinforcement Learning: A Non-Standard Introduction (Part 2) · 2012-08-02T15:25:12.109Z · score: 7 (9 votes) · LW · GW

It might be valuable to point out that nothing about this is reinforcement learning yet.

**johnicholas**on Reply to Holden on The Singularity Institute · 2012-07-12T16:25:28.628Z · score: 1 (1 votes) · LW · GW

Those are interesting reviews, but I didn't know they were speeches in SIAI's voice.

**johnicholas**on Reply to Holden on The Singularity Institute · 2012-07-12T13:35:58.927Z · score: 1 (1 votes) · LW · GW

Thanks for posting this!

I am also grateful to Holden for provoking this - as far as I can tell, the only substantial public speech from SIAI on LessWrong. SIAI often seems to be far more concerned with internal projects than communicating with its supporters, such as most of us on LessWrong.

**johnicholas**on Reply to Holden on 'Tool AI' · 2012-06-14T02:56:48.640Z · score: 3 (3 votes) · LW · GW

I don't think Strange7 is arguing Strange7's point strongly; let me attempt to strengthen it.

A button that does something dangerous, such as firing exploding bolts that separate one thing from another thing, might be protected from casual, accidental presses by covering it with a lid, so that when someone actually wants to explode those bolts, they first open the lid and then press the button. This increases reliability if there is some chance that any given hand motion is an error, and the errors of separate hand motions are independent. Similarly for 'are you sure' dialog boxes.

In general, if you have several components, each of a given reliability, and their failure modes are somewhat independent, then you can craft a composite component of greater reliability than any individual one. The rings that Strange7 brings up are an example of this general pattern (there may be other reasons why layers-of-rings architectures are chosen for reliability in practice, and this explanation doesn't say why the rings are ordered rather than simply voting; it's just one possible explanation).
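A rough numerical sketch of that composite-reliability claim, with a made-up per-action error probability (the 0.01 figure is purely illustrative, not from the source):

```python
# Toy model of the lid-plus-button pattern: each guard is accidentally
# actuated with independent probability p. Requiring both guards means
# two independent accidents must coincide, so the probabilities multiply.
p = 0.01  # assumed chance that any single hand motion is an accident

bare_button = p          # one accidental motion fires the bolts
lid_and_button = p * p   # lid must be opened AND button pressed by accident

print("bare button:", bare_button)
print("lid + button:", lid_and_button)
```

With these numbers, adding the lid cuts the accidental-firing probability by a factor of a hundred - but only to the extent that the two errors really are independent.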

**johnicholas**on Reply to Holden on 'Tool AI' · 2012-06-13T12:15:12.594Z · score: 5 (5 votes) · LW · GW

The distinction between hardwiring and softwiring is, above the most physical, electronic level of computer design, a matter of policy - something in the programmer's mind and habits, not something out in the world that the programmer is manipulating. From any particular version of the software's perspective, all of the program it is running is equally hard (or equally soft).

It may not be impossible to handicap an entity in some way analogous to your suggestion, but holding fiercely to the concept of hardwiring will not help you find such a mechanism. It would be better to think about mechanisms that would accomplish the handicapping in an environment where everything is equally hardwired.

There's some evidence that a chess AI's 'personality' (an emergent quality of its play) is related to a parameter of its evaluation function called 'contempt', which is something like (handwaving wildly) how easy the opponent is to manipulate. In general, AIs with higher contempt seek more to win-or-lose, and less to draw. What I'm trying to say is, your idea is not without merit, but it may have unanticipated consequences.

**johnicholas**on Reply to Holden on 'Tool AI' · 2012-06-12T10:23:35.517Z · score: 2 (2 votes) · LW · GW

The thing that is most like an agent in the Tool AI scenario is not the computer and software that it is running. The agent is the combination of the human (which is of course very much like an agent) together with the computer-and-software that constitutes the tool. Holden's argument is that this combination agent is safer somehow. (Perhaps it is more familiar; we can judge intention of the human component with facial expression, for example.)

The claim that Tool AI is an obvious answer to the Friendly AI problem is a paper tiger that Eliezer demolished. However, there's a weaker claim, that SIAI is not thinking about Tool AI much if at all, and that it would be worthwhile to think about (e.g. because it already routinely exists), which Eliezer didn't really answer.

**johnicholas**on General purpose intelligence: arguing the Orthogonality thesis · 2012-05-16T10:24:17.511Z · score: 1 (1 votes) · LW · GW

Minor text correction:

"dedicated committee of human-level AIs dedicated" repeats the same adjective in a small span.

More wide-ranging:

Perhaps the paper would be stronger if it explained why philosophers might feel that convergence is probable. For example, in their experience, human philosophers / philosophies converge.

In a society where the members are similar to one another, and much less powerful than the society as a whole, the morality endorsed by the society might be based on the memes that can spread successfully. That is, a meme like 'everyone gets a vote', or the golden rule, is symmetrical in a way that 'so-and-so gets the vote' isn't. The memes that spread successfully might be more likely to be symmetrical. There could be convergent evolution of memes, and this could explain human philosophers' experience of convergence.

**johnicholas**on General purpose intelligence: arguing the Orthogonality thesis · 2012-05-16T10:08:05.397Z · score: -1 (3 votes) · LW · GW

There was an incident of censorship by EY relating to acausal trading - the community's confused response (chilling effects? agreement?) to that incident explains why there is no overall account.

**johnicholas**on Thoughts on the Singularity Institute (SI) · 2012-05-12T13:27:25.020Z · score: 6 (6 votes) · LW · GW

There are two uses of 'utility function'. One is analogous to Daniel Dennett's "intentional stance": you can choose to interpret an entity as having a utility function. This is always possible, but not necessarily a perspicuous way of understanding an entity, because you might end up with utility functions like "enjoys running in circles but is equally happy being prevented from running in circles".

The second form is as an explicit component within an AI design. Tool-AIs do not contain such a component - they might have a relevance or accuracy function for evaluating answers, but it's not a utility function over the world.

**johnicholas**on Recognizing memetic infections and forging resistance memes · 2012-04-30T12:15:17.795Z · score: 0 (2 votes) · LW · GW

You're right, it's infeasible to care about individual memes (or for that matter, the vast majority of individual animals) the way we care about other humans. I don't have an answer to your question, I'm trying to break a logjam of humancentric ethical thinking.

Forgive me for passing on my confusion here, but I'm not certain that consciousness/sentience is anything more than 'recognizably human'. You and I have a common brain architecture, and one of our faculties is picking that out from trees and rocks. Perhaps there are plenty of evolved, competent alien minds that would pose a challenge to our ape-like-me recognition systems simply because they're so alien.

But if and when humans upload, they will effectively become memes. We need to solve the question of how to care about non-sentient life, because a datafile could be you or me or a descendant.

**johnicholas**on Recognizing memetic infections and forging resistance memes · 2012-04-27T14:03:18.562Z · score: 1 (1 votes) · LW · GW

No. This is regarding the 'possibly irrelevant rant' which I marked explicitly as a 'possibly irrelevant rant'. The concepts in the rant seemed nearby and inspirational to the main article in my mind when I wrote it, which is why I included it, but I cannot articulate a direct connection.

**johnicholas**on Recognizing memetic infections and forging resistance memes · 2012-04-27T11:38:14.327Z · score: 4 (4 votes) · LW · GW

Analogous in that people once discriminated against other races and other sexes, but over time, with better ethical arguments, we decided it was better to treat other races and other sexes as worthy members of the "circle of compassion". I predict that if and when we interact with another species of fairly similar might (for example, if and when humans speciate), humancentrism will be considered as terrible as racism or sexism is now.

Moral realism (if I understand it correctly) is the position that moral truths like 'eating babies is wrong' are out in the world, something like the law of gravitation. Yudkowsky has argued convincingly against moral realism in the Baby-Eater sequence (and I agree with him). However, he implied a false fork: that if moral realism is false, then humancentrism is the answer. Yes, our sense of morality is based on our history. No, our history is not the same as our species.

DNA is one residue of our history, but libraries are also a similar residue. There are two instances in our history of allying with a very alien form of life: Viral eukaryogenesis, and the alliance with memes.

Does this help at all? I feel like I'm saying the same thing over again just with more words.