Posts

Uncalibrated quantum experiments act classically 2020-07-21T05:31:06.377Z
Measly Meditation Measurements 2018-12-09T20:54:46.781Z
An Invitation to Measure Meditation 2018-09-30T15:10:30.055Z

Comments

Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-24T16:43:42.924Z · LW · GW

As you said, very often a justification-based conversation is looking to answer a question, and stops when it's answered using knowledge and reasoning methods shared by the participants. For example, Alice wonders why a character in a movie did something, and then has a conversation with Bob about it. Bob shares some facts and character-motivations that Alice didn't know, they figure out the character's motivation together, and the conversation ends. This relied on a lot of shared knowledge (about the movie universe plus the real universe), but there's no reason for them to question their shared knowledge. You get to shared ground, and then you stop.

If you insist on questioning everything, you are liable to get to nodes without justification:

  • "The lawn's wet." / "Why?" / "It rained last night." / "Why'd that make it wet?" / "Because rain is when water falls from the sky." / "But why'd that make it wet?" / "Because water is wet." / "Why?" / "Water's just wet, sweetie.". A sequence of is-questions, bottoming out at a definition. (Well, close to a definition: the parent could talk about the chemical properties of liquid water, but that probably wouldn't be helpful for anyone involved. And they might not know why water is wet.)
  • "Aren't you going to eat your ice cream? It's starting to melt." / "It sure is!" / "But melted ice cream is awful." / "No, it's the best." / "Gah!". This conversation comes to an end when the participants realize that they have fundamentally different preferences. There isn't really a justification for "I dislike melted ice cream". (There's an is-ought distinction here, though it's about preferences rather than morality.)

Ultimately, all ought-question-chains end at a node without justification. Suffering is just bad, period.

And I think if you dig too deep, you'll get to unjustified-ish nodes in is-question-chains too. For example, direct experience, or the belief that the past informs the future, or that reasoning works. You can question these things, but you're liable to end up on shakier ground than the thing you're trying to justify, and to enter a cycle. So, IDK, you can not count those flimsy edges and get a dead end, or count them and get a cycle, whichever you prefer?

We would just go and go and go until we lost all energy, and neither of us would notice that we’re in a cycle?

There's an important shift here: you're not wondering how the justification graph is shaped, but rather how we would navigate it. I am confident that the proof applies to the shape of the justification graph. I'm less confident you can apply it to our navigation of that graph.

“huh, it looks like we are on a path with the following generator functions”

Not all infinite paths are so predictable / recognizable.

Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-23T21:35:23.016Z · LW · GW

If you ask me whether my reasoning is trustworthy, I guess I'll look at how I'm thinking at a meta-level and see if there are logical justifications for that category of thinking, plus look at examples of my thinking in the past, and see how often I was right. So roughly your "empirical" and "logical" foundations.

And I sometimes use my reasoning to bootstrap myself to better reasoning. For example, I didn't use to be Bayesian; I did not intuitively view my beliefs as having probabilities associated with them. Then I read Rationality, and was convinced by both theoretical arguments and practical examples that being Bayesian was a better way of thinking, and now that's how I think. I had to evaluate the arguments in favor of Bayesianism in terms of my previous means of reasoning --- which was overall more haphazard, but fortunately good enough to recognize the upgrade.

From the phrasing you used, it sounded to me like you were searching for some Ultimate Justification that could by definition only be found in regions of the space that have been ruled out by impossibility arguments. But it sounds like you're well aware of those reasons, and must be looking elsewhere; sorry for misunderstanding.

But honestly I still don't know what you mean by "trustworthy". What is the concern, specifically? Is it:

  • That there are flaws in the way we think, for example the Wikipedia list of biases?
  • That there's an influential bias that we haven't recognized?
  • That there's something fundamentally wrong with the way that we reason, such that most of our conclusions are wrong and we can't even recognize it?
  • That our reasoning is fine, but we lack a good justification for it?
  • Something else?
Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-23T21:00:56.388Z · LW · GW

(2) doesn't require the graph to be finite. Infinite graphs also have the property that if you repeatedly follow in-edges, you must eventually reach (i) a node with no in-edges, or (ii) a cycle, or (iii) an infinite chain.

EDIT: Proof, since if we're talking about epistemology I shouldn't spout things without double checking them.

Let G be any directed graph with at most countably many nodes. Let P be the set of paths in G. At least one of the following must hold:

(i) Every path in P is finite and acyclic. (ii) At least one path in P is cyclic. (iii) At least one path in P is infinite.

Now we just have to show that (i) implies that there exists at least one node in G that has no in-edges. Since every path is finite and acyclic, every path has a (finite) length. Label each node of G with the length of the longest path that ends at that node. Pick any node N in G. Let n be its label. Strongly induct on n:

  • If n=0, we're done: the maximum path length ending at this node is 0, so it has no in-edges. (A.k.a. it lacks justification.)
  • If n>0, then there is a non-empty path ending at N. Follow it back one edge to a node N'. N' must be labeled at most n-1, because if its label were larger then N's label would be larger too. By the inductive hypothesis applied to N', there exists a node in G with no in-edges.
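To make the trichotomy concrete, here's a minimal Python sketch (mine, not part of the original argument) of what "repeatedly follow in-edges" looks like as a procedure; the graph encoding and the max_steps cutoff are assumptions of the sketch:

def walk_justifications(in_edges, start, max_steps=10_000):
    """Repeatedly follow an arbitrary in-edge, starting from `start`."""
    seen = set()
    node = start
    for _ in range(max_steps):
        if node in seen:
            return ("cycle", node)            # case (ii)
        seen.add(node)
        preds = in_edges.get(node, [])
        if not preds:
            return ("no in-edges", node)      # case (i): an unjustified node
        node = preds[0]                       # follow one justification
    return ("infinite chain?", node)          # case (iii), up to the cutoff

# C justifies B, B justifies A; C itself has no justification.
print(walk_justifications({"A": ["B"], "B": ["C"]}, "A"))
# -> ('no in-edges', 'C')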
Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-22T21:23:01.047Z · LW · GW

Yeah. Though you might be able to re-phrase the reasoning to turn it into one of the others?

EDIT: in more detail, it's something like this. I have a whole bunch of ways of reasoning, and can use many of them to examine the others. And they all generally agree, so it seems fine. (Sean Carroll says this.) You can't use completely broken reasoning to figure the world out. But if you start with partially broken reasoning, you can bootstrap your way to better and better reasoning. (Yudkowsky says this.)

The main point is that I have been convinced by the reasoning in my previous comment and others that a search for an Ultimate Justification is fruitless, and have adjusted my expectations accordingly. When your intuitions don't match reality, you need to update your intuitions.

Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-22T18:28:27.035Z · LW · GW

Maybe a clearer way to say it is that I actually agree with everything you’ve said, but I don’t think what you’ve said is yet sufficient to resolve the question of whether our reasoning is based on something trustworthy.

I get the impression that by the standards you have set, it is impossible to have a "trustworthy" justification:

  1. For anything you believe, you should be able to ask for its justifications. Thus justifications form a graph, with an edge from A to B meaning that "A justifies B".
  2. Just from how graphs work, if you start from any node and repeatedly ask for its justifications, you must eventually reach (i) a node with no justifications (in-edges), or (ii) a cycle, or (iii) an infinite chain.
  3. However, unjustified beliefs, cyclic reasoning, and infinite regress are all untrustworthy.

Do you simultaneously believe all three of these statements? I disbelieve 3.

Comment by justinpombrio on Three enigmas at the heart of our reasoning · 2021-09-22T18:13:45.681Z · LW · GW

Have you read the Sequences, or Sean Carroll's 'The Big Picture'? Both talk about these questions. For example:

We can appeal to empiricism to provide a foundation for logic, or we can appeal to logic to provide a foundation for empiricism, or we can connect the two in an infinitely recursive cycle.

See explain-worship-ignore

and more generally

mysterious-answers

The system of empiricism provides no empirical basis for believing in these foundational principles.

See no-universally-compelling-arguments-in-math-or-science

and more generally

mind-space

simpler hypotheses are more likely to be true than complicated hypotheses

I'm not sure if this appeared in the Sequences or not, but there's a purely logical argument that simpler hypotheses must be more likely. For any level of complexity, there are finitely many hypotheses that are simpler than that, and infinitely many that are more complex. You can use this to prove that any probability distribution must be biased towards simpler hypotheses.
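Sketching the counting step, since it's short (my rendering of the argument, in LaTeX because it's pure math):

% Enumerate hypotheses in order of increasing complexity: h_1, h_2, ...
% and let p be any probability distribution over them.
\sum_{i=1}^{\infty} p(h_i) = 1
\quad\Longrightarrow\quad
\#\{\, i : p(h_i) \ge \epsilon \,\} \le 1/\epsilon
\quad\text{for every } \epsilon > 0.
% Only finitely many hypotheses can have probability at least epsilon,
% so p(h_i) -> 0: past some complexity level, every hypothesis is less
% probable than the (finitely many) simpler ones carrying most of the mass.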

We need not doubt all of mathematics, but we might do well to question what it is that we are trusting when we do not doubt all of mathematics.

"All of mathematics" might not be as coherent as you think. There's debate around the foundations. For example:

  • Should the foundation be set theory (ZF axioms), or constructive type theory?
  • Axiom of Choice: true or false?
  • Law of excluded middle: true or false?

(I'm not a mathematician, so take this with a grain of salt.)

There are two very different notions of what it means for some math to be "true". One is that the statement in question follows from the axioms you're assuming. The other is that you're using this piece of math to model the real world, and the corresponding statement about the real world is true. For example, "2 + 2 = 4" can be proved using the Peano axioms, with no regard to the world at all. But there are also (multiple!) real situations that "2 + 2 = 4" models. One is that if you put two cups together with two other cups, you'll have four cups. Another is that if you pour two gallons of gas into a car that already has two gallons of gas, the car will have four gallons. In this second model, it's also true that "1/2 + 1/2 = 1". In the first model, it isn't: the correspondence breaks down because no one wants a shattered cup.

I'm actually very interested to see what assumptions about the real world correspond to mathematical axioms. For example, if you interpret mathematical statements to be "objectively" true then the law of the excluded middle is true, but if you interpret them to be about knowledge or about provability, then the law of the excluded middle is false. I have no idea what the axiom of choice is about, though.

the-simple-truth

I am asking you to doubt that your reason for correctly (in my estimation) not doubting ethics can be found within ethics.

Have you read about Hume's is-ought distinction? He writes about it in 'A Treatise of Human Nature'. It says that ought-statements cannot be derived from is-statements alone. You can derive an is-statement from another, for example by using modus ponens. And you can derive one ought-statement from another ought-statement, plus some is-statement reasoning, for example "you shouldn't punch him because that would hurt him, and someone being hurt is bad". But you can't go from pure is-statements to an ought-statement. Yudkowsky says similar things. Once you instinctively see this distinction, it's not even tempting to look for an ultimate justification of ethics, within either ethics or empiricism, because it's obviously not there.


The problem, I suspect, is that these questions of deep doubt in fact play within our minds all the time, and hinder our capacity to get on with our work.

It's always dangerous to put thoughts in other people's minds! These questions really truly do not play within my mind. I find them interesting, but doubt they're of much practical importance, and they do not bother me. I'm sure I'm not alone.

It seems like you are unhappy without having "a satisfying conceptual answer to 'why should I believe it?' within the systems that we are questioning." Why is that? Do you not want to strongly believe in something without a strong and non-cyclic conceptual justification for it?

Comment by justinpombrio on Book review: The Checklist Manifesto · 2021-09-18T14:43:22.681Z · LW · GW

Super-intelligence deployment checklist:

  1. DO NOT DEPLOY THE AGI UNTIL YOU HAVE COMPLETED THIS CHECKLIST.
  2. Check the cryptographic signature of the utility function against MIRI's public key.
  3. Have someone who has memorized the known-benevolent utility function you plan to deploy check that it matches their memory exactly. If no such person is available, do not deploy.
  4. Make sure that the code references that utility function, and not another one.
  5. Make sure the code is set to maximize utility, not minimize it.
  6. Deploy the AGI.

(This was written in jest, and is almost certainly incomplete or wrong. Do not use when deploying a real super-intelligent AGI.)

Comment by justinpombrio on Book review: The Checklist Manifesto · 2021-09-18T14:36:41.317Z · LW · GW

One difference between hospitals and programming is that code is entirely digital, so a lot of check lists can be replaced with automated tests. For example:

  • Instead of "did you run the code to make sure it works?", have test cases. This is traditional test cases. (Fully end-to-end testing is very hard though, to the point that it can be more feasible to do some QA testing instead of trying to automate all testing.)
  • Instead of "did you click the links you added in the documentation to make sure they work?", have a test that errors on broken links. Bonus is that if the docs link to an external page, and that link breaks a month later, you'll find out.
  • Instead of "did you try running the sample code in the library documentation?", have a test that runs all code blocks in docs. (Rust does this by default.)
  • Instead of "did you do any of these known dangerous things in the code?", have a "linting" step that looks for the dangerous patterns and warns you off of them (with a way to disable in cases where it's needed).

Of course not everything can be automated (most of Gunnar's list sounds like it can't). But when it can be, it's nice to not even have to use a checklist.

Comment by justinpombrio on Training My Friend to Cook · 2021-08-29T22:32:31.128Z · LW · GW

Most people, myself included, think it tastes better roasted. But you can sauté it, and I know someone who prefers it that way.

Comment by justinpombrio on When Programmers Don't Understand Code, Don't Blame The User · 2021-08-19T14:24:55.214Z · LW · GW

Be careful with this, though. Javascript is a strange and complex language, and it's not very amenable to static analysis.

Most of the time it's "easy" to determine what this refers to, but sometimes it's literally impossible, because it's ambiguous. For example, it might be determined at runtime, and vary from one invocation of a function to another. A good IDE will hopefully say "I don't know what this is" when it doesn't know. But on that boundary between known and unknowable, the IDE is liable to get confused (and who can blame it?), and that is exactly the sort of place in your code where bugs tend to crop up.

All that is to say, take what your IDE tells you with a grain of salt.

Comment by justinpombrio on The halting problem is overstated · 2021-08-16T20:45:54.695Z · LW · GW

Instead of talking about the halting problem, you might want to invoke Rice's Theorem:

All non-trivial, semantic properties of programs are undecidable [Wikipedia].

A semantic property of a program is one that depends only on its behavior, not on its syntax. So "its source code contains an even number of for-loops" is not a semantic property; "it outputs 7" is. The two "trivial" semantic properties that the theorem excludes are very trivial: the semantic property "true" that is by definition true for all programs, and the semantic property "false" that is by definition false for all programs.

I agree with your main point that the halting problem (and Rice's Theorem) are overstated. While there exist programs that halt (or output 7) but can't easily be proven to do so in whatever proof system you're using, they're mostly bad programs. If your program works but requires transfinite induction to prove that it works, you should rewrite it to be easier to prove correct. Or maybe you're legitimately doing something very complicated.

We need to focus on proving the properties that the user is interested in, rather than the ones the compiler is interested in.

I think existing proof systems (e.g. Coq) already do this, unless I'm misunderstanding what you're asking for?

For example, here's a type signature that only says you're getting an integer out:

Fixpoint mystery (n : nat) : nat

And here's one that says you're getting a prime out (for an appropriate definition of the prime type):

Fixpoint mystery (n : nat) : prime

If you care that mystery produces a prime number, you use the second signature and go through the trouble of proving that it produces a prime. If you don't, you use the first signature.

You might object that while prime is optional, nat is not. You still have to prove that your program type checks and halts (modulo coinduction, which lets you express non-termination). But the alternative is a program that might crash, and if it crashes then it definitely doesn't work! Or if you really want to allow it to crash, you can express that too: have everything return an option, propagate the options up to the toplevel, and print "oops, something went wrong" if one gets there:

Fixpoint mystery (n: nat) : option nat

Tl;dr: there's a lot of leeway in what you choose to prove about your program in a proof assistant.

Comment by justinpombrio on Staying Grounded · 2021-08-15T15:06:37.098Z · LW · GW

Doing so revealed to me a myriad of interventions that I’d expect to be higher impact than those endorsed by GiveWell.

Have you talked to GiveWell about this? Like, I don't know much of anything about charities or the people at GiveWell. But the standard rationalist reaction to "I found something you can read that helps explain the domain you're in and how to make the world a better place" is supposed to be "give me, give it now".

(And the reason I ask is the huge potential upside of making GiveWell donations more effective.)

Comment by justinpombrio on The two-headed bacterium · 2021-08-11T13:59:43.143Z · LW · GW

I would separate these two things:

  • What we consider to be an individual
  • Technical terminology, like "multicellular"

What we consider to be an individual thing should be irrelevant, except that it tends to carry a lot of intuitive baggage with it. Like, I do find that the self-sacrificial behaviors make more sense when I think of the group of cells as an individual. Note that "what is an individual" is a framing of thought, and not a fact about the world. (Though "X can naturally be thought of as an individual" is a fact about the world.)

On the other hand, there's the technical terminology. And you're pointing out that "multicellular" means something specific (it is a fact about the world), that isn't what Elmer was talking about. And this should be corrected because terminology is important! Would referring to the individual as a "bacterial colony" be more accurate? I think the post could say "colony" or "group of" instead of "multicellular", without really changing its content.

Comment by justinpombrio on One Study, Many Results (Matt Clancy) · 2021-07-19T22:11:40.058Z · LW · GW

Yeah, that was my reaction too: regardless of intentions, the scientific method is, in the "soft" sciences, frequently not arriving at the truth.

The follow up question should of course be: how can we fix it? Or more pragmatically, how can you identify whether a study's conclusion can be trusted? I've seen calls to improve all the things that are broken right now: reduce p-hacking and publication bias, aim for lower p values, spread better knowledge of statistics, do more robustness checks, etc. etc.. This post adds to the list of things that must be fixed before studies are reliable.

But one thing I've wondered is: what about focusing more on studies that find large effects? There are two advantages: (i) it's harder to miss large effects, making the conclusion more reliable and easier to reproduce, and (ii) if the effect is small, it doesn't matter as much anyways. For example, I trust the research on the planning fallacy more because the effect is so pronounced. And I'm much more interested to know about things that are very carcinogenic than about things that are just barely carcinogenic enough to be detected.

So, has someone written the book "Top 20 Largest Effects Found in [social science / medicine / etc.]"? I would buy it in a heartbeat.

Comment by justinpombrio on The Point of Trade · 2021-06-23T01:09:16.943Z · LW · GW

One more magical power of trade, that I didn't see in other comments:

Planning and logistics. It takes about a week and a dozen steps to make a pencil. (Ok, probably not all dozen of those steps need human intervention, but some probably do.) That's not too bad; I can set a calendar reminder to ping me when various steps are done so I can move the materials to the next one. But to use reminder software I need a laptop. How long and how many steps does that take to build? I would guess years of time and tens of thousands of steps. So even if I technically could perform all the required steps individually, that doesn't mean I could feasibly deal with the sheer complexity of the task, or with the timescales involved.

Comment by justinpombrio on Bad names make you open the box · 2021-06-10T00:29:51.775Z · LW · GW

I have a technique for naming a thing. It goes like this. First, I realize that I can't find a good name, so I ask someone what to name it. But they don't understand what it is, so I describe it in more detail, and then notice that my description has the ideal name sitting in it.

In theory you could avoid the bit where you bother someone, by trying to describe it beforehand.

Comment by justinpombrio on Bad names make you open the box · 2021-06-10T00:23:31.654Z · LW · GW

If you generalize this from naming to interfaces, I think it's one of the most important aspects of how to code well. Thank you for sticking such a clear metaphor to it! Here's my thinking:

Useful programs are often large (say >100,000 LOC), and large programs are spectacularly complex. The majority of those lines are essential, and if you changed one of them, the program would break in a small or big way. No one can keep all of this in their head. Now add in a dozen or more programmers, all of whom modify this code base daily, while trying to add features and fix bugs. This framing should make it obvious that managing complexity is one of the primary tasks of a programmer, for anyone who didn't already have that perspective.

Or in the words of Bill Gates, "Measuring programming progress by lines of code is like measuring aircraft building progress by weight." (The reason more lines is bad isn't on the computers' side: computers can handle millions of lines just fine. The reason is on the humans' side: it's the complexity they bring.)

I really only know one major approach to managing complexity: you split the big complicated thing into smaller pieces, recursively, and make it possible to understand each piece without understanding its implementation. So that you don't have to open the box.

In this post you talk about naming functions. If a function is a box, then a good name on the box lets you use the box without opening it. But there's more on the box than the function's name, and you should make use of all of it, for exactly the reasoning in this post!

  • Sometimes you can't fit all the salient information about what a function does in a short name; the rest should go in its doc string.
  • In a typed language, a function's type signature also serves as documentation. It tells you exactly what kinds of things it expects as argument, and exactly what it produces, and, depending on the language, what kinds of errors it might throw. The best part of this "type documentation" is that it can never get out of date, because the type checker validates it! There's a principle called "make illegal states unrepresentable", which means that you arrange your data types such that you cannot construct invalid data; this helps here by making the type signature convey more information. (A sketch of this principle follows the list.)
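Here's a toy Python illustration of "make illegal states unrepresentable" (my example; the connection/session names are invented): the bad version can represent "connected but no session", while the good version cannot even construct that state.

from dataclasses import dataclass
from typing import Union

# Bad: nothing stops is_connected=True from being paired with
# session_id=None.
#   is_connected: bool
#   session_id: str | None

# Good: a connected connection carries a session id by construction.
@dataclass
class Disconnected:
    pass

@dataclass
class Connected:
    session_id: str

Connection = Union[Disconnected, Connected]

def describe(conn: Connection) -> str:
    match conn:
        case Connected(session_id=sid):
            return f"connected (session {sid})"
        case _:
            return "disconnected"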

Functions/methods are the smallest pieces, and their boundary is their (i) name, (ii) doc string, and (iii) type signature. What the larger pieces are depends on the language and program, but I clump them all as "modules" in my head: interfaces, classes, modules, packages, APIs, etc.. The common shape tends to be a set of named functions.

The primary way I organize my code is to split it into "modules" (generally construed), such that each module "does one thing and does it well". How can you tell if it "does one thing"? Write the module's docs, which should include a high-level overview of the whole module, plus shorter docs for each function in the module. The rule is that your docs have to fully describe how to use the module and what its behavior will be under any use case. This tends to make it really obvious when things are poorly organized. I've often realized that it will literally be less work to re-organize the code than to properly document it as is, because of all the horrible edge cases I would have to talk about.

On the other hand, I find that many other people don’t even want to invest a few seconds in [brainstorming for a good name for something].

I'm sorry you don't have a good naming buddy! Everyone should have a naming buddy; it's so hard to come up with good names on your own.

Comment by justinpombrio on Saving Time · 2021-05-19T23:34:14.826Z · LW · GW

Your causal description is incomplete; the loopy part requires expanding T1:

T0: Omega accurately simulates the agent at T1-T2, determines that the agent will one-box, and puts money in both of the boxes. Omega's brain/processor contains a (near) copy of the part of the causal diagram at T1 and T2.

T1: The agent deliberates about whether to one-box or two-box. She draws a causal diagram on a piece of paper. It does not contain T1, because it isn't really useful for her to model her own deliberation as she deliberates. But it does contain T2, and a shallow copy of T0, including the copy of T2 inside T0.

T2: The agent irrevocably commits to one-boxing.

The loopy part is at T1. Forward arrows mean "physically causes", and backwards arrows mean "logically causes, via one part of the causal diagram being copied into another part".

Comment by justinpombrio on How to compute the probability you are flipping a trick coin · 2021-05-17T14:31:53.928Z · LW · GW

If you’re interested in making a follow-up post, I’d enjoy an analysis of the possibilities when the coin is not fair but is also not double sided. For example, if a coin has a 75% chance of turning up heads, how does the probability look?

I wrote this! The graphs of P(bias|flips) are fun. See this post starting at "computing a credible interval":

https://justinpombrio.net/2021/02/19/confidence-intervals.html

Sorry if you're viewing on mobile, I need to fix my styling.
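The core computation, for reference (a sketch assuming a uniform prior, so the posterior over the bias is a Beta distribution; the post goes into the full derivation and the graphs):

from scipy.stats import beta

def credible_interval(heads: int, tails: int, mass: float = 0.95):
    # With a uniform prior, the posterior over the coin's bias after
    # observing the flips is Beta(heads + 1, tails + 1).
    return beta.interval(mass, heads + 1, tails + 1)

print(credible_interval(75, 25))  # roughly (0.66, 0.83)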

Comment by justinpombrio on Place-Based Programming - Part 1 - Places · 2021-04-16T02:48:22.972Z · LW · GW

I think this is fixable. An invocation (f expr1 expr2) will produce the same result as the last time you invoked it if:

  • The body of f is the same as last time.
  • Every function it calls, including transitively, has the same source code as the last time you called f. Also every macro and type definition that is used transitively. Basically any code that it depends on in any way.
  • Every function involved is pure (no state, no IO).
  • Every function involved is top-level. I'm not sure this will play well with higher-order functions.
  • The invocations expr1 and expr2 also obey this checklist.

I'm not sure this list is exhaustive, but it should be do-able in principle. If I look at a function invocation and all the code it transitively depends on (say it's 50% of the codebase), and I know that that 50% of the codebase hasn't changed since last time you ran the program, and I see that that 50% of the codebase is pure, and I trust you that the other 50% of the codebase doesn't muck with it (as it very well could with e.g. macros), then that function invocation should produce the same result as last time.

This is tricky enough that it might need language level support to be practical. I'm glad that Isusr is thinking of it as "writing a compiler".
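To illustrate just the first bullet in Python (a sketch of mine; note that it hashes only f's own source, which is exactly the transitive-dependency hole the rest of the checklist is about):

import functools
import hashlib
import inspect

_cache: dict = {}

def memo_by_source(f):
    # Key the cache on a hash of f's own source text plus its arguments.
    # Deliberately incomplete: callees are NOT hashed, so editing a
    # function that f calls leaves stale cache entries looking valid.
    src_hash = hashlib.sha256(inspect.getsource(f).encode()).hexdigest()

    @functools.wraps(f)
    def wrapper(*args):
        key = (src_hash, args)
        if key not in _cache:
            _cache[key] = f(*args)
        return _cache[key]
    return wrapper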

Comment by justinpombrio on Place-Based Programming - Part 1 - Places · 2021-04-16T02:15:42.293Z · LW · GW

I think this overstates the difficulty, referential transparency is the norm in functional programming, not something unusual.

It really depends on what domain you're working in. If you're memoizing functions, you're not allowed to use the following things (or rather, you can only use them in functions that are not transitively called by memoized functions):

  • Global mutable state (to no-one's surprise)
  • A database, which is global mutable state
  • IO, including reading user input, fetching something non-static from the web, or logging
  • Networking with another service that has state
  • Getting the current date

Ask a programmer to obey this list of restrictions, and -- depending on the domain they're working in -- they'll either say "ok" or "wait what that's most of what my code does".

As I understand, this system is mostly useful if you’re using it for almost every function. In that case, your inputs are hashes which contain the source code of the function that generated them, and therefore your caches will invalidate if an upstream function’s source code changed.

That's very clever! I don't think it's sufficient, though.

For example, say you have this code:

(defnp add1 [x] (+ x 10)) ; oops typo
(defnp add2 [x] (add1 (add1 x)))
(add2 100)

You run it once and get this cache:

(add1 100) = 110
(add1 (add1 100)) = 120
(add2 100) = 120

You fix the first function:

(defnp add1 [x] (+ x 1)) ; fixed
(defnp add2 [x] (add1 (add1 x)))
(add2 100)

You run it again, which invokes (add2 100), which is found in the cache to be 120. The add2 cache entry is not invalidated, because the add2 function has not changed, nor have its inputs. The add1 cache entries would be invalidated if anything ever invoked add1, but nothing does.

(This is what I meant by "You also have to look at the functions it calls (and the functions those call, etc.)" in my other comment.)

Comment by justinpombrio on Place-Based Programming - Part 1 - Places · 2021-04-15T14:37:46.368Z · LW · GW

More stable, but not significantly so.

You cannot tell what an expression does just by looking at the expression. You also have to look at the functions it calls (and the functions those call, etc.). If any of those change, then the expression may change as well.

You also need to look at local variables, as skybrain points out. For example, this function:

(defn myfunc [x] (value-of (place-of [EXPR INVOLVING x])))

will behave badly: the first time you call it, it will compute the answer for the value of x you give it. Every time after that, it will return that same answer, regardless of what x you give it.

Comment by justinpombrio on Why We Launched LessWrong.SubStack · 2021-04-01T19:16:49.689Z · LW · GW

I'm deeply confused by the cycle of references. What order were these written in?

In the HPMOR epilogue, Dobby (and Harry to a lesser extent) solve most of the world's problems using the 7 step method Scott Alexander outlines in "Killing Moloch" (ending, of course, with the "war to end all wars"). This strongly suggests that the HPMOR epilogue was written after "Killing Moloch".

However, "Killing Moloch" extensively quotes Muehlhauser's "Solution to the Hard Problem of Consciousness". (Very extensively. Yes Scott, you solved coordination problems, and describe in detail how to kill Moloch. But you didn't have to go on that long about it. Way more than I wanted to know.) In fact, I don't think the Killing Moloch approach would work at all if not for the immediate dissolution of aphrasia one gains upon reading Muehlhauser's Solution.

And Muehlhauser uses Julia Galef's "Infallible Technique for Maintaining a Scout Mindset" to do his 23 literature reviews, which as far as I know was only distilled down in her substack post. (It seems like most of the previous failures to solve the Hard Problem boiled down to subtle soldier mindset creep, that was kept at bay by the Infallible Technique.)

And finally, in the prologue, Julia Galef said she only realized it might be possible to compress her entire book into a short blog post with no content loss whatsoever after seeing how much was hidden in plain sight in HPMOR (because of just how inevitable the entire epilogue is once you see it).

So what order could these possibly have been written in?

Comment by justinpombrio on Demand offsetting · 2021-03-22T14:55:13.466Z · LW · GW

Wow. This brings me hope we can effectively fight factory farming in the near future. It's just such a good strategy.

I think this form of offsetting is acceptable on a very broad range of moral perspectives (practically any perspective that is comfortable with humane eggs themselves).

Within the EA / adjacent crowd. I suspect a lot of normal people will be averse to egg offsets because "you're still participating in the system".

What would happen if many people tried to use this offsetting strategy?

One additional effect: egg offsets would increase the fungibility of humane eggs (within each certification level). I can imagine this switching the business model of some humane chicken farmers from "find a restaurant willing to buy my eggs at the price it costs me to produce them" to "gain some extra money from humane egg certificates, then sell my eggs in the global egg wholesale market".

Comment by justinpombrio on Are the Born probabilities really that mysterious? · 2021-03-02T19:23:46.757Z · LW · GW

Epistemic status: very curious non-physicist.

Here's what I find weird about the Born rule.

Eliezer very successfully thought about intelligence by asking "how would you program a computer to be intelligent?". I would frame the Born rule using the analogous question for physics: "if you had an enormous amount of compute, how would you simulate a universe?".

Here is how I would go about it:

  1. Simulate an Alternate Earth, using quantum mechanics. The simulation has discrete time. At each step in time, the state of the simulation is a wavefunction: a set of (amplitude, world) pairs. If you would have two pairs with the same world in the same time step, you combine them into one pair by adding their amplitudes together. Standard QM, except for making time discrete, which is just there to make this easier to think about and run on a computer.
  2. Seed the Alternate Earth with humans, and run it for 100 years.
  3. Select a world at random, from some distribution. (!)
  4. Scan that world for a physicist on Alternate Earth who speaks English, and interview them.

The distribution used in step (3) determines what the physicist will tell you. For example, you could use the Born rule: pick at random from the distribution on worlds given by p ∝ |amplitude|². If you do, the interview will go something like this:

Simulator: Hi, it's God.

Physicist: Oh wow.

Simulator: I just have a quick question. In quantum mechanics, what's the rule for the probability that an observer finds themselves in a particular world?

Physicist: The probability is proportional to the square of the magnitude of the amplitude. Why is that, anyways?

Simulator: Awkwardly, that's what I'm trying to find out.

Physicist: ...God, why did you make a universe with so much suffering in it? My child died of bone cancer.

Simulator: Uh, gotta go.

Remember that you (the simulator) were picking at random from an astronomically large set of possible worlds. For example, in one of those worlds, photons in double slit experiments happened to always go left, and the physicists were very confused. However, by the law of large numbers, the world you pick almost certainly looks from the inside like it obeyed the Born rule.

However, the Born rule isn't the only distribution you could pick from in step 3. You could also pick from the distribution given by p ∝ |amplitude| (with normalization). And frankly that's more natural. In this case, you would (almost certainly, by the law of large numbers) pick a world in which the physicists thought that the Born rule said p ∝ |amplitude|. By Everett's argument, in this world probability does not look additive between orthogonal states. I think that means that its physicists would have discovered QM a lot earlier: the non-linear effects would be a lot more obvious! But is there anything wrong with this world, that would make you as the simulator go "oops I should have picked from a different distribution"?

There's also a third reasonable distribution: ignore the amplitudes, and pick uniformly at random from among the (distinct) worlds. I don't know what this world looks like from the inside.
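To make the options concrete, here's a toy simulation (all mine: n independent binary "measurements" with fixed amplitudes, a "world" being one bitstring; because both weightings are multiplicative across bits, sampling a whole world factorizes into sampling each bit independently):

import numpy as np

rng = np.random.default_rng(0)
a, b = np.sqrt(0.2), np.sqrt(0.8)   # amplitudes of outcomes 0 and 1
n = 100_000                         # measurements per world

def sample_world(weight):
    # P(world) is proportional to the product of per-bit weights, so
    # each bit is an independent draw with the normalized weight.
    p1 = weight(b) / (weight(a) + weight(b))
    return rng.random(n) < p1

born_world = sample_world(lambda amp: amp**2)    # p ∝ |amplitude|²
alt_world = sample_world(lambda amp: abs(amp))   # p ∝ |amplitude|

print(born_world.mean())  # ~0.80: insiders infer p ∝ |amplitude|²
print(alt_world.mean())   # ~0.67: insiders infer p ∝ |amplitude|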

Comment by justinpombrio on Gauging the conscious experience of LessWrong · 2020-12-21T03:27:25.285Z · LW · GW

Visual—I don’t really see things, I just get some weird topological-ish representation.

My visual imagination matches your whole paragraph exactly. Great description.

I think the rest of my responses are typical: reasonable sound imagination, minimal taste&touch&smell imagination. Thinking is a mix of abstract stuff and words and images. Little mind control, no synesthesia. Strong internal monologue: at the extreme, most everything I think is backed by the monologue in some way, and the monologue is nearly continuous; at the other extreme if I've been meditating a lot in the past month there's much less monologue.

My memory is worse than average, I think. I don't remember a whole lot after a year has passed. I get the impression that many people associate many of their long term memories with time (like, what month it was or what season it was). I don't, at all. I'll remember something that happened during undergrad, but have to reason from context about whether it would have been the first year or last year (which is usually easy to figure out, but that knowledge is not attached to the memory).

Comment by justinpombrio on Introduction to Cartesian Frames · 2020-10-24T20:57:32.281Z · LW · GW

First: https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question and then: http://lesswrong.com/lw/r0/thou_art_physics/

Comment by justinpombrio on Babble challenge: 50 ways of sending something to the moon · 2020-10-02T01:49:11.842Z · LW · GW

50 ways to send something to the moon. Although it ended up more like 25 ways to send something to the moon and 25 ways to avoid sending something to the moon.

  1. Mail it.
  2. Where will the moon end up in 1 billion years? Invent time travel, put something there in the future, then send it back in time to today's moon.
  3. Rail gun.
  4. Space elevator to get it to space, then nudge it. Assuming it can deal with the landing.
  5. Giant slingshot. By which I mean spin, then release. This isn't silly, there's a serious startup doing it right now. (To get things to space, not the moon, but shouldn't be very different.)
  6. Space elevator, then give it a parachute, then nudge it to the moon.
  7. Big cannon, with gunpowder.
  8. Put it on a rocket. Rocket to take off & rocket to land.
  9. Invent teleportation, and teleport it.
  10. Does the thing really have to start on earth? Make it on the moon. Makes shipping much easier.
  11. Compressed air cannon.
  12. Land bridge, that's connected to the moon but not the earth. It gets within a few miles of earth.
  13. Big tree on earth. At the right time of day, its highest point gets close to the moon.
  14. Earth is in such a big gravity well. Maybe make the thing on a moon like phobos (which I know from UT), then send it via one of these methods to the moon.
  15. Make it in space, then drop it to the moon.
  16. Is it digital? I hope it's digital. Email it!
  17. Send it through the IPFS. Because it's digital.
  18. Ok, it's not digital. But it can be 3d-printed right? Email the design to an automated printer!
  19. Seriously, you don't want to physically send the thing to the moon. Start a manufacturing service on the moon. It takes instructions to make something, and makes it, and ships it. All very automated. You send them a JSON file and some dollars and they make the thing.
  20. Is it audio? Is it a song? Call them up and sing.
  21. Why are you still trying to physically send it? Is it because you feel that if the thing is also on Earth, you haven't really sent it to the moon after manufacturing it there? How about manufacturing it there, then destroying the copy on Earth? Is that satisfactory?
  22. Ok maybe the thing is very expensive. Like a big diamond. Don't send it on its own. You don't need a dedicated rocket to send a diamond! Bulk shipments! Group it with the next hundred items.
  23. Wait 50 years until we have better technology, then send it.
  24. Get someone to inadvertently bring it to the moon. Like Musk is going there because he likes space, slip it in his pocket. Might need to pay a lunar pick-pocket to get it back after.
  25. Convince a big company that they want to advertise the thing on the moon, and get them to foot the shipping bill. Ok, maybe there are no manufacturing capabilities on the moon, and that's why you're so insistent on shipping this thing. Maybe it is the manufacturing facilities.
  26. NANITES. Send nanites. Have them make the manufacturing facilities.
  27. Take the thing, turn it into magical goop, and haphazardly slingshot the goop. Then tell the goop to return to its original form.
  28. Invent AGI and ask it to ship the thing to the moon.
  29. Magic. Literal magic. Wave your wand and speak in latin.
  30. Does it really have to be the moon, or do you just need people to think it's on the moon? Send it to a film set that looks like the moon.
  31. Pay the moon people to say you sent it to them even though you didn't.
  32. Fake the moon transmissions to make it sound like the moon people got the thing even though they didn't.
  33. If it's a plant, grow it on the moon.
  34. If it's a plant, send the seed, then grow it on the moon.
  35. In general, instead of sending X, send a generator for X. Ok, I'm going to actually think about how to get matter from Earth to the Moon again.
  36. Strap a rocket on it.
  37. Warp space so that the moon is 20 feet away, then toss it.
  38. Turn its matter into energy, beam it via microwaves, then turn the energy back into matter.
  39. Turn it into plasma, stream it over, turn it back.
  40. Put it in a big bouncy ball, and toss that over (say with a slingshot or railgun as previously mentioned). Like we did with that mars rover.
  41. Have a space station between the earth and moon, with long ropes (read: carbon nanotube ropes). Lift it up one rope, and down the other.
  42. Take a chunk out of the moon, and send it to earth. Then ship everything you want there.
  43. Take a chunk out of the earth (say around a big factory city), and send it to the moon. Then ship from earth-chunk to moon-destination. Right, physically moving a thing from one place to another. Back on track.
  44. Space train. I'm just now feeling out of ideas.
  45. Regular slingshot. Like with big stretchy cables. With a big foamy spot for it to land.
  46. Defeat gravity. Then use a gravity-ignoring spaceship with tiny little compressed-air jets.
  47. Rocket, powered by nuclear explosions. Probably not good for the environment.
  48. Big see-saw. When a shipment comes in from the moon, it lands on one end. It is on the other end, and gets flung to the moon.
  49. Same idea for space elevator. For balance, an object from the earth and an object from the moon of the same weight are pulled in unison to meet at the middle, then lowered on the other side.
  50. Really big fans. Fast enough to send the thing out of Earth's gravity. Though that probably wouldn't be good for the environment.
  51. Compressed air tube.
Comment by justinpombrio on Puzzle Games · 2020-09-29T03:45:32.980Z · LW · GW

I would add the Talos Principle, which is I think my second favorite puzzle game, after Baba Is You. IIRC, the length and difficulty were on par with The Witness (i.e., long and hard).

I recall many of its puzzles being blindingly obvious in retrospect, after an hour of banging my head on a wall.

Comment by justinpombrio on Maybe Lying Can't Exist?! · 2020-08-23T14:03:41.366Z · LW · GW

Going back to your plain English definition of deception:

intentionally causing someone to have a false belief

notice that it is the liar's intention for the victim to have a false belief. That requires the liar to know the victim's map!

So I would distinguish between intentionally lying and intentionlessly misleading.

P. redator is merely intentionlessly misleading P. rey. The decision to mislead P. rey was made by evolution, not by P. redator. On the other hand, if I were hungry and wanted to eat a P. rey, and made mating sounds, I would be intentionally lying. My map contains a map of P. rey's map, and it is my decision, not evolution's, to exploit the signal.

causing the receiver to update its probability distribution to be less accurate

This is an undesired consequence of deception (undesired by the liar, that is), so it seems strange to use it as part of the definition of deception. An ideal deceiver leaves its victim's map intact, so that it can exploit it again in the future.

Comment by justinpombrio on What am I missing? (quantum physics) · 2020-08-22T02:53:49.114Z · LW · GW

There's also a 3blue1brown video on Bell's theorem: https://www.youtube.com/watch?v=zcqZHYo7ONs

Comment by justinpombrio on Uncalibrated quantum experiments act classically · 2020-07-22T05:44:25.006Z · LW · GW

Thanks. It was the diagram that was backwards: I meant the amplitude in question to be the amplitude of reflection, not of transmission. I updated the diagram.

Comment by justinpombrio on Uncalibrated quantum experiments act classically · 2020-07-22T05:35:23.257Z · LW · GW

Thanks for taking the time to write this response up! This made some things click together for me.

In quantum mechanics, probabilities of mutually exclusive events still add: P(A∨B)=P(A)+P(B). However, things like “particle goes through slit 1 then hits spot x on screen” and “particle goes through slit 2 then hits spot x on screen” aren’t such mutually exclusive events.

That's a good point; orthogonality is a strong, precise notion of "mutually exclusive" in quantum mechanics. I meant to say that "events whose amplitudes you add" would often naturally be considered mutually exclusive under classical reasoning ("slit 1 then spot x" and "slit 2 then spot x" sure sound exclusive), and that if the phases are unknown then the classical reasoning actually works.

But that's kind of vague, and my whole introduction was sloppy. I added it after the fact; maybe should have stuck with just the "three experiments".

The Born rule takes the following form:

Ah! So the first Born rule you give is the only one I saw in my QM class way back when.

The second one I hadn't seen. From the wiki page, it sounds like a density matrix is a way of describing a probability distribution over wavefunctions. Which is what I've spent some time thinking about (though in this post I only wrote about probability distributions over a single amplitude). Except it isn't so simple: many distributions are indistinguishable, so the density matrix can be vastly smaller than a probability distribution over all relevant wavefunctions.

And some distributions ("ensembles") that sound different but are indistinguishable:

The wiki page: Therefore, unpolarized light cannot be described by any pure state, but can be described as a statistical ensemble of pure states in at least two ways (the ensemble of half left and half right circularly polarized, or the ensemble of half vertically and half horizontally linearly polarized). These two ensembles are completely indistinguishable experimentally, and therefore they are considered the same mixed state.

This is really interesting. It's satisfying to see things I was confusedly wondering about answered formally by von Neumann almost 100 years ago.

Comment by justinpombrio on Uncalibrated quantum experiments act classically · 2020-07-22T03:06:32.482Z · LW · GW

I just find it mighty suspicious that when you add two amplitudes of unknown phase, their Born probabilities add:

E[Born(sa + tb)] = Born(sa) + Born(tb)    when s, t ~ ⨀

But, judging from the lack of object-level comments, no one else finds this suspicious. My conclusion is that I should update my suspicious-o-meter.
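For what it's worth, the identity is easy to check numerically (my sketch, reading ⨀ as "uniform random phase on the unit circle"; note that Born(sa) = Born(a) since |s| = 1, and the cross term vanishes because E[s·conj(t)] = 0):

import numpy as np

rng = np.random.default_rng(0)
a, b = 0.3 + 0.4j, -0.7 + 0.1j  # two arbitrary fixed amplitudes

def born(z):
    return np.abs(z) ** 2

n = 1_000_000
s = np.exp(2j * np.pi * rng.random(n))  # s ~ uniform phase
t = np.exp(2j * np.pi * rng.random(n))  # t ~ uniform phase

print(born(s * a + t * b).mean())  # E[Born(sa + tb)] ≈ 0.75
print(born(a) + born(b))           # Born(a) + Born(b) = 0.75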

Comment by justinpombrio on Uncalibrated quantum experiments act classically · 2020-07-21T23:16:41.621Z · LW · GW

Thanks for all the pointers! I was, somewhat embarrassingly, unaware of the existence of that whole field.

Comment by justinpombrio on A Sketch of Answers for Physicalists · 2020-03-14T14:18:29.475Z · LW · GW

Would you and Jessicata mind clarifying what you mean by "physicalism"? Is it the same as or different from Yudkowsky's definition of "reductionism", for which he said:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

For example, I'd like to separate:

Physics-y (i.e., low level) maps are always better than high-level maps.

from:

Physics-y (i.e., low level) maps always make predictions that are at least as accurate as high-level maps, given sufficient information and computation.

I'm suspicious that Jessicata may be attacking a straw version of physicalism while you're defending a steel version, but it's hard to tell. (And even if not, it's nice to know exactly what's under discussion.)

Comment by justinpombrio on Humans Are Embedded Agents Too · 2019-12-27T00:51:53.784Z · LW · GW

Ooh, that is very insightful. The word-boundary problem around "values" feels fuzzy and ill-defined, but that doesn't mean that the thing we care about is actually fuzzy and ill-defined.

Comment by justinpombrio on Humans Are Embedded Agents Too · 2019-12-26T18:28:41.455Z · LW · GW

This post points out that many alignment problems can be phrased as embedded agency problems. It seems to me that they can also all be phrased as word-boundary problems. More precisely, for each alignment/embedded-agency problem listed here, there's a question (or a set of questions) of the form "what is X?" such that answering that question would go a long way toward solving the alignment/embedded-agency problem, and vice-versa.

Is this a useful reduction?

The "what is X?" question I see for each problem:

The Keyboard is Not The Human

What does it mean for a person to "say" something (in the abstract sense of the word)?

Modified Humans

What is a "human"? Furthermore, what does it mean to "modify" or "manipulate" a human?

Off-Equilibrium

What are the meanings of counterfactual statements? For example, what does it mean to say "We will launch our nukes if you do."?

Perhaps also, what is a "choice"?

Drinking

What is a "valid profession of one's values"?

Value Drift

What are a person's "values"? Focus being on people changing over time.

Akrasia

What is a "person", and what are a person's "values"? Focus being on people being make of disparate parts.

Preferences Over Quantum Fields

What are the meanings of abstract, high-level statements? Do they change if your low-level model of the world fundamentally shifts?

Unrealized Implications

What are a person's "values"? Focus being on someone knowing A and knowing A->B but not yet knowing B.

Socially Strategic Self-Modification

What are a person's "true values"? Focus being on self-modification.

Comment by justinpombrio on What's going on with "provability"? · 2019-10-14T01:33:10.681Z · LW · GW

All provable statements follow from the axioms

Yes, in any formal system, all provable statements follow from the axioms. However, there are many formal systems. Two of the most commonly used ones are Classical Logic and the Calculus of Inductive Constructions.

In Classical Logic, "forall P, P or not P" is an axiom. So, it's technically provable, but it would be misleading to say that you can prove it without further comment.

In the Calculus of Inductive Constructions (which is an extension of Intuitionistic Logic, if I understand correctly), "forall P, P or not P" is not provable.

So if there's a non-trival proof of "forall P, P or not P", it isn't in either of these formal systems. If you do have one in mind, what formal system (logic) is it in, and what does the proof look like?

Comment by justinpombrio on What's going on with "provability"? · 2019-10-13T23:20:42.306Z · LW · GW

In what sense? In Classical Logic it's an axiom, and in the Calculus of Inductive Constructions it's unprovable.

(Interestingly, you can prove in Coq that the negation of "forall P, P or not P" is unprovable. So it's safe to assume it: https://stackoverflow.com/questions/32812834/how-to-prove-excluded-middle-is-irrefutable-in-coq )
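In Lean 4, the same irrefutability fact is a one-liner (my transcription of the standard argument, not the linked Coq proof):

theorem lem_irrefutable (P : Prop) : ¬¬(P ∨ ¬P) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))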

Comment by justinpombrio on Who lacks the qualia of consciousness? · 2019-10-07T03:09:42.549Z · LW · GW

I may not be turning my attention to it all the time, but like my left foot, there it is whenever I do.

When you do turn your attention to it, what is it like? Could you try to describe it in a way that would be useful for someone who does not experience it? For example,

Smell is a sensation, distinct from others like sight and sound. It detects particles in the air using the nose, and if you hold your nose then you mostly stop experiencing smell. Air in different places will smell different. Smell emanates from certain objects, like wet socks or foods, and spreads out through the air. There are very many distinct smells; for example I can tell if popcorn is nearby from the smell, and I don't think I've ever confused the smell of popcorn for anything else. While color can easily be separated into components (e.g. RGB), I'm not aware of any nice separation like that for smells. Smells can be pleasant or unpleasant: flowers really do smell good sometimes, and a smell can be so bad that it makes you feel nauseous and is painful to experience. People mostly agree on what smells are pleasant or unpleasant. If I enter a place with a different smell, I'll tend to notice it immediately, then adjust to it and stop noticing it, unless it is particularly strong. I don't recall ever having smelled something in a dream.

I'm asking because there is more than one thing that I have experienced that could be what you are describing, and I'm not yet sure which of these things you are trying to refer to, or if you're referring to something else which I have not experienced, in which case I'm a p-zombie.

Comment by justinpombrio on Who lacks the qualia of consciousness? · 2019-10-06T15:17:57.911Z · LW · GW

So I shall try to describe the experience. I have a vivid sensation of my own presence, my own self. This is the thing I am pointing at when I say that I am conscious. Whether I sit in meditation or in the midst of life, there I am. Indeed, more vividly in meditation, because then, that is where I direct my attention. But only in dreamless sleep is it absent.

Can you be much more specific about what you mean?

For example, I have had dreams in which:

  • I was me, walking around and doing stuff, and aware that I was dreaming.
  • I was me, walking around and doing stuff, from the usual perspective, but unaware that I was dreaming.
  • I was someone else (with a different name, history, body etc.), walking around and doing stuff, from the usual perspective.
  • I was me, walking around and doing stuff, but viewed from a third-person vantage point.
  • There were some people. One of them stood out, and was the "focus", but they felt more like the main character of a movie than "me".
  • There were some people, and none of them stood out from one another.

In which of these cases was I a p-zombie?

However, dreams to me are vague and fuzzy in comparison to the real deal, being awake. While I'm awake, I typically have what could be called a "self" with the following properties:

  1. Spatially, my self is located behind my eyes, I think?
  2. My self comes with knowledge and expectations of my personality and behavior. Like "I'm a professional, I'll behave professionally" when working, or "wheee, kitty!" when in proximity to a feline or image thereof.
  3. My self comes with a mood. Like "I just woke up, and am groggy, ugghh".

I think these properties are generally lacking when I dream, but it's hard to tell. E.g., I recall dreams in which I was afraid, but not dreams in which I was grumpy or groggy.

After meditating for a long time, I sometimes enter a state of mind that lacks some of these properties:

  1. I'm not sure about the spatial location thing?
  2. The knowledge and expectation of my personality and behavior is still available, but feels less important and it feels more like I have a choice at each moment.
  3. I tend to view moods as their component pieces. E.g. "I'm grumpy" becomes "physical sensation plus change in movement of attention plus bias in what thoughts arise".

Beyond these specifics, these states of mind tend to come with a very strange feeling of something missing that is ordinarily there.

Which of these properties, when lacking, makes me a p-zombie? Or have I not captured it; is this thing that I call a "self" totally different from what you mean by "qualia of consciousness"? Either way, what properties does your "qualia of consciousness" have?

Epistemic status: generally muddled about all of this; suspicious that my ontology is wrong; certain that most attempts at communication around this topic go poorly.

Comment by justinpombrio on The Curious Prisoner Puzzle · 2019-08-29T00:41:12.514Z · LW · GW

I believe I was thinking of this one:

https://www.lesswrong.com/posts/f6ZLxEWaankRZ2Crv/probability-is-in-the-mind

Comment by justinpombrio on Thoughts from a Two Boxer · 2019-08-23T03:42:55.837Z · LW · GW

There's also a general reason to try to handle unrealistic scenarios: it can be a shortcut to finding a good theory.

For example, say you have a cubic equation with real coefficients, you want only real-valued solutions, and complex solutions don't even make sense because they're physically impossible in the situation you're modeling. Even so, your best approach is to use the cubic formula, and simply accept that some of the intermediate computations may produce complex numbers (in which case you should continue the computation, because the values may become real again), and that some of the answers may be complex (in which case you should ignore those particular answers).
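
A classic concrete instance (Bombelli's, if I remember correctly): the cubic x^3 = 15x + 4 has three real roots, yet Cardano's formula reaches them only by passing through complex intermediates:

\[
x = \sqrt[3]{2 + 11i} + \sqrt[3]{2 - 11i} = (2 + i) + (2 - i) = 4
\]

using the fact that (2 ± i)^3 = 2 ± 11i. All three roots are real (the other two are -2 ± √3), but the values under the cube roots are not.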

Solving real-valued polynomials gets a lot easier when you first consider the more general problem of solving complex-valued polynomials. Likewise, solving decision theory without mind reading might get a lot easier when you first consider decision theory with mind reading. Good theories are often very general.

Put another way, I don't want my algebra to completely break down when I try to take the square root of a negative number, and I don't want my decision theory to completely break down just because someone can read my mind.

Comment by justinpombrio on Verification and Transparency · 2019-08-09T04:33:38.735Z · LW · GW

I'm confused: you say that transparency and verification are the same thing, but your examples only seem to support that transparency enables verification. Is that closer to what you were trying to say?

Type signatures in a strongly typed language can be seen as a method of ensuring that the compiler proves that certain errors cannot occur, while also giving a human reading the program a better sense of what various functions do.

Yes! A programming language does this by restricting the set of programs that you can write, disallowing both correct and incorrect programs in the process. It has to, because it's infeasible (uncomputable, to be precise) to tell whether a program will actually hit "certain errors". For example, suppose that searchForCounterexampleToRiemannHypothesis() will run forever if the Riemann Hypothesis is true, and return true if it finds a counterexample. (This is a function you could write, I think.) Then if the Riemann Hypothesis is true, this program:

if (searchForCounterexampleToRiemannHypothesis()) {
    // unreachable if the Riemann Hypothesis is true
    "string" / "stringy"; // type error
}

is a perfectly fine infinite loop that never attempts to divide "string" by "stringy". Nevertheless, a (static) type system will overzealously disallow this perfectly dandy program because it can't tell any better.
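
(As an aside on "a function you could write": below is a hypothetical sketch in TypeScript, leaning on Lagarias's elementary criterion, which says the Riemann Hypothesis is equivalent to sigma(n) <= H_n + exp(H_n) * ln(H_n) holding for every n >= 1. The function body is my illustration, not anything from the post, and it waves away floating-point precision.)

function searchForCounterexampleToRiemannHypothesis(): boolean {
    let harmonic = 0; // H_n = 1 + 1/2 + ... + 1/n, updated incrementally
    for (let n = 1; ; n++) {
        harmonic += 1 / n;
        let sigma = 0; // sigma(n): the sum of the divisors of n
        for (let d = 1; d <= n; d++) {
            if (n % d === 0) sigma += d;
        }
        // Lagarias's inequality; a real implementation would need exact
        // arithmetic here rather than floating point
        if (sigma > harmonic + Math.exp(harmonic) * Math.log(harmonic)) {
            return true; // counterexample found, so RH would be false
        }
    }
    // if RH is true, the loop above never exits
}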

So type systems weaken the language. While that example was contrived, there are more realistic examples where you have to go out of your way to satisfy the conservative type checker. But, to your point (and against my point at the beginning), they weaken the language by requiring type annotations that make programs easier for the type checker to reason about (verification), and also easier for people to understand (transparency). There is a tradeoff between verification & transparency and expressiveness.

Comment by justinpombrio on The Competence Myth · 2019-07-01T02:34:18.303Z · LW · GW

Make sure you're not dividing people into the camps of "competent" and "incompetent" too strongly. Yes, competence varies between individuals. But it also varies between areas for a given individual. And it varies day-to-day.

Today I was getting ready to go for a bike ride. I filled a bottle of water, got the bike out, discovered that it had a flat tire and I couldn't fix it, and put the bike away. Afterwards, I couldn't find the bottle of water. I looked everywhere, twice. An hour later I realized I had put it in the bike's bottle holder while I had the bike out.

This is normal. Everyone makes mistakes all the time. I'm a programmer, and one of the things you notice is that no matter how skilled the person, and no matter how trivial the program, they'll write buggy code. Usually a mistake every few lines.

One of the points of the Sequences is to try to notice your own mistakes and your own biases, so that you can guard against them. As opposed to digging your heels in and refusing to admit you've done anything wrong, which is a common alternate strategy. (I've also found meditation to help with some of this.)

Comment by justinpombrio on Knights and Knaves · 2019-06-10T19:48:09.143Z · LW · GW

To be able to always lie outwardly, he has to know the truth for himself, so his inner opinion is the truth.

Does it? Imagine an island filled with two groups of people: one group that believes only true statements, and another group that believes only false statements. Even if both groups tried to always be truthful, the first group would only utter true statements and the second group would only utter false statements. How would you tell whether you were on an island with these groups of people, or on an island with knights and knaves?

If you haven't read it, you should check out Raymond Smullyan's book called "What is the Name of this Book?". It's the source of knight and knave riddles, and it's amazing.

If both know of the truth, but are still acting differently, this must be on purpose. So in other words, one wants to harm you and the other not.

Right. The knights want to harm you, and the knaves want to help you. Sadly, both groups were cursed by a witch to forever tell the truth or lie. A knight regrets every statement they make---they want to lead you astray, but are compelled to tell you the truth instead. And a knave also regrets every statement they make---they want to point you in the right direction, but are compelled to lie instead. Their only consolation is that, even by lying, they are revealing information, and they hope that you're clever enough to figure it out.

Comment by justinpombrio on Book Review - Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness · 2019-02-03T17:02:22.295Z · LW · GW

I’d also love to hear in the comments what updates other people had from this.

My (distilled and cleaned up) thinking was as follows:

  1. Humans recognize each other mostly by face. I know this because people with face blindness routinely don't recognize people, even if they know them well. I believe that face blindness partially refutes your #3.
  2. Octopuses almost certainly have no particular ability to distinguish human faces. Thus they're probably doing something very different from us.
  3. What are octopuses good at? Mimicking fish and avoiding predators. Maybe they're using some of these skills to recognize humans.
  4. Even so, what are the sensory modalities they might be recognizing us with? Many animals are good with smell, but I presume that doesn't work in the water. Voice, likewise, seems like it might not carry over well. There are a bunch of big visual characteristics: height, skin color, clothing (variable), hair style (may be variable). And gait. I could imagine octopuses being very good at recognizing gait.

Comment by justinpombrio on New edition of "Rationality: From AI to Zombies" · 2018-12-16T17:00:17.227Z · LW · GW

One thing to keep in mind is that price suggests quality, whether or not it should. The paperback books are cheap (are you selling them at cost?), which makes me think "mass-market novel" rather than "deeply impactful nonfiction". It might be worth putting out an overpriced high-quality version for this reason alone.

And I would be very happy to buy a high-quality version of the books. I like hard covers. Leather-bound would be impressive.

Comment by justinpombrio on Measly Meditation Measurements · 2018-12-13T00:40:07.197Z · LW · GW

Are you able to sit cross-legged for more than 30 minutes, without moving your legs, without pain? Is there a trick to doing so that isn't "sit cross-legged an hour every day for a year; by then the pain will stop"?

Personally, my leg goes to sleep and starts throbbing, and I hear this is pretty common.