Comments

Comment by nshepperd on A time-invariant version of Laplace's rule · 2022-07-18T13:04:28.717Z · LW · GW

Neat! This looks a lot like my quick note on survival time prediction from a few years back, but more in-depth. Very nice.

Comment by nshepperd on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T02:42:24.988Z · LW · GW

None of us are calling for blame, ostracism, or cancelling of Michael.

What I'm saying is that the Berkeley community should be.

Ziz’s sentence you quoted doesn’t implicate Michael in any crimes.

Supplying illicit drugs is a crime (but perhaps the drugs were BYO?). IDK if doing so and negligently causing permanent psychological injury is a worse crime, but it should be.

Comment by nshepperd on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T02:22:59.509Z · LW · GW

I don’t think we need to blame/ostracize/cancel him and his group, except maybe from especially sensitive situations full of especially vulnerable people.

Based on the things I am reading about what has happened, blame, ostracism, and cancelling seem like the bare minimum of what we should do.

Vassar has had, I think about 6, transfems gravitate to him, join his projects, go on his quests, that I’ve heard. Including Olivia, and Jessica, and I think Devi. Devi had a mental breakdown and detransitioned IIHC. Jessica had a mental breakdown and didn’t detransition. Olivia became an agent of mental breakdown, compulsively breaking others via drug trips because they went through gates they shouldn’t’ve.

This is really, really serious. If this happened to someone closer to me I'd be out for blood, and probably legal prosecution.

Let's not minimize how fucked up this is.

Comment by nshepperd on Ranked Choice Voting is Arbitrarily Bad · 2021-04-05T07:07:44.342Z · LW · GW

Each cohort knows that Carol is not a realistic threat to their preferred candidate, and will thus rank her second, while ranking their true second choice last.

Huh? This doesn't make sense. In which voting system would that help? In most systems that would make no difference to the relative probability of your first and second choices winning.

Comment by nshepperd on How sure are you that brain emulations would be conscious? · 2021-01-03T08:26:06.013Z · LW · GW

That's possible, although then the consciousness-related utterances would be of the form "oh my, I seem to have suddenly stopped being conscious" or the like (if you believe that consciousness plays a causal role in human utterances such as "yep, I introspected on my consciousness and it's still there"). That would imply that such a simulation was not a faithful synaptic-level WBE, since it would have clearly differing macro-level behaviour.

Comment by nshepperd on What are your greatest one-shot life improvements? · 2020-05-18T03:57:26.606Z · LW · GW

As a more powerful version of this, you can install uBlock Origin and configure these custom filters to remove everything on youtube except for the video and the search box. As a user, I don't miss the comments, social stuff, 'recommendations', or any other stuff at all.
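(The filter list itself was linked rather than inlined, so it isn't reproduced here. For anyone wondering what such filters look like: uBlock's cosmetic filters are one rule per line, a domain followed by ## and a CSS selector for the element to hide. Something along these lines -- treat the selectors as illustrative guesses, since YouTube's element names change over time:)

    ! illustrative only -- inspect the page for the current element names
    www.youtube.com###comments
    www.youtube.com###related
    www.youtube.com##ytd-rich-grid-renderer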

Comment by nshepperd on What is the subjective experience of free will for agents? · 2020-04-11T07:05:02.157Z · LW · GW

I must admit I can't make any sense of your objections. There aren't any deep philosophical issues with understanding decision algorithms from an outside perspective. That's the normal case! For instance, A* search.

Comment by nshepperd on What is the subjective experience of free will for agents? · 2020-04-07T21:56:50.681Z · LW · GW
Comment by nshepperd on What is the subjective experience of free will for agents? · 2020-04-02T19:54:22.436Z · LW · GW

Possibility and Could-ness

Comment by nshepperd on Circling as Cousin to Rationality · 2020-01-04T00:52:20.582Z · LW · GW

This isn't a criticism of this post or of Vaniver, but more a comment on Circling in general prompted by it. This example struck me in particular:

Orient towards your impressions and emotions and stories as being yours, instead of about the external world. “I feel alone” instead of “you betrayed me.”

It strikes me as very disturbing that this should be the example that comes to mind. It seems clear to me that one should not, under any circumstances, engage in a group therapy exercise designed to lower your emotional barriers and create vulnerability in the presence of anyone you trust less than 100%, let alone someone you think has 'betrayed' you. This seems like a great way to get manipulated, taken advantage of by sexual abusers, gaslighted, etc., which is a particular concern given the multiple allegations of abuse and sexual misconduct in the EA/Circling communities (1, 2, ChristianKl's comment). Reframing these behaviours as personal emotions and stories seems like it would further contribute to the potential for such abuse.

Comment by nshepperd on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T15:51:34.901Z · LW · GW

Where does that obligation come from?

This may not be Said's view, but it seems to me that this obligation comes from the sheer brute fact that if no satisfactory response is provided, readers will (as seems epistemically and instrumentally correct) conclude that there is no satisfactory response and judge the post accordingly. (Edit: And also, entirely separately, the fact that if these questions aren't answered the post author will have failed to communicate, rather defeating the point of making a public post.)

Obviously readers will conclude this more strongly if there's a back-and-forth in which the question is not directly answered, and less strongly if the author doesn't respond to any comments at all (which suggests they're just busy). (And readers will not conclude this at all if the question seems irrelevant or otherwise not to need a response.)

That is to say, the respect of readers on this site is not automatically deserved, and cannot be taken by force. Replying to pertinent questions asking for clarification with a satisfactory response that fills a hole in the post's logic is part of how one earns such respect; it is instrumentally obligatory.

On this view, preventing people from asking questions can do nothing but mislead readers by preventing them from noticing whatever unclearness / ambiguity etc the question would have asked about. It doesn't release authors from this obligation, but just means we have to downgrade our trust in all posts on the site since this obligation cannot be met.

Comment by nshepperd on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-03T00:57:30.346Z · LW · GW

T3t's explanations seem quite useless to me. The procedure they describe seems highly unlikely to reach anything like a correct interpretation of anything, being basically a random walk in concept space.

It's hard to see what "I don't understand what you meant by X, also here's a set of completely wrong definitions I arrived at by free association starting at X" could possibly add over "I don't understand what you meant by X", apart from wasting everyone's time redirecting attention onto a priori wrong interpretations.

I'm also somewhat alarmed to see people on this site advocating the sort of reasoning by superficial analogy we see here:

“Conforming to or based on fact” feels very similar to “the map corresponds to the territory”.

Performing the substitution: “An expression that is worthy of acceptance or belief, as the expression (map) corresponds to the internal state of the agent that generated it (territory).”

So, overall, I'm not very impressed, no.

Comment by nshepperd on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-02T09:11:08.564Z · LW · GW

But my sense is that if the goal of these comments is to reveal ignorance, it just seems better to me to argue for an explicit hypothesis of ignorance, or a mistake in the post.

My sense is the exact opposite. It seems better to act so as to provide concrete evidence of a problem with a post, which stands on its own, than to provide an argument for a problem existing, which can be easily dismissed (ie. show, don't tell). Especially when your epistemic state is that a problem may not exist, as is the case when you ask a clarifying question and are yet to receive the answer!

Comment by nshepperd on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T22:14:51.962Z · LW · GW

To be clear, I think your comment was still net-negative for the thread, and provided little value (in particular in the presence of other commenters who asked the relevant questions in a, from my perspective, much more productive way)

I just want to note that my comment wouldn't have come about were it not for Said's.

Again, this is a problem that would easily be resolved by tone-of-voice in the real world, but since we are dealing with text-based communication here, these kinds of confusions can happen again and again.

To be frank, I find your attitude here rather baffling. The only person in this thread who interpreted Said's original comment as an attack seems to have been you. Vaniver had no trouble posting a response, and agreed that an explanation was necessary but missing.

Comment by nshepperd on Meta-discussion from "Circling as Cousin to Rationality" · 2020-01-01T21:20:44.401Z · LW · GW

FWIW, that wasn't my interpretation of quanticle's comment at all. My reading is that "healthy" was not meant as a proposed interpretation of "authentic" but as an illustrative substitution demonstrating the content-freeness of this use of the word -- because the post doesn't get any more or less convincing when you replace "authentic" with different words.

This is similar to what EY does in Applause Lights itself, where he replaces words with their opposites to demonstrate that sentences are uninformative.

(As an interpretation, it would also be rather barren, and not particularly 'concrete' either: obviously "'authentic' means 'healthy'" just raises the question of what 'healthy' means in this context!)

Comment by nshepperd on Circling as Cousin to Rationality · 2020-01-01T07:53:20.199Z · LW · GW

Why should “that which can be destroyed by the truth” be destroyed? Because the truth is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” Similarly, why should “that which can be destroyed by authenticity” be destroyed? Because authenticity is fundamentally more real and valuable than what it replaces, which must be implemented on a deeper level than “what my current beliefs think.” I don’t mean to pitch ‘radical honesty’ here, or other sorts of excessive openness; authentic relationships include distance and walls and politeness and flexible preferences.

To expand on Said and quanticle's comments here, I find this argument deeply unconvincing, and here's why. I see three things missing here:

  1. A definition of 'authentic' in concrete terms -- what kind of behaviour does it entail, with what kind of consequences? This can be a dictionary definition, in exchange for shifting a lot of burden to the following two steps.
  2. An argument that 'authenticity' so defined is "real and valuable" enough to be more valuable than anything that might be lost in the course of such behaviour -- this is not as simple as a superficial argument by analogy to truth might make it appear, since the argument for believing true things is more complex than that in the first place (for instance, relying on the particular role of true beliefs in decision theory).
  3. An argument that Circling is 'authentic' in the manner so defined (presumably, since a defense of Circling seems to be the point of the post).

Currently all three holes here seem to be plugged by the simple use of 'authentic' as an applause light.

Comment by nshepperd on Can we make peace with moral indeterminacy? · 2019-10-16T07:31:12.532Z · LW · GW

If what you want is to do the right thing, there's no conflict here.

Conversely, if you don't want to do the right thing, maybe it would be prudent to reconsider doing it...?

Comment by nshepperd on Let Values Drift · 2019-06-21T03:35:25.551Z · LW · GW

I don't see the usual commonsense understanding of "values" (or the understanding used in economics or ethics) as relying on values being ontologically fundamental in any way, though. But you've used the fact that they're not to make a seemingly unjustified rhetorical leap to "values are just habituations or patterns of action", which just doesn't seem to be true.

Most importantly, because the "values" that people are concerned with when they talk about "value drift" are idealized values (a la extrapolated volition), not instantaneous values or opinions or habituations.

For instance, philosophers such as EY consider that changing one's mind in response to a new moral argument is not value drift because it preserves one's idealized values, and that it is generally instrumentally positive because (if it brings one's instantaneous opinions closer to one's idealized values) it makes one better at accomplishing one's idealized values. So indeed, we should let the EAs "drift" in that sense.

On the other hand, getting hit with a cosmic ray which alters your brain, or getting hacked by a remote code execution exploit is value drift because it does not preserve one's idealized values (and is therefore bad, according to the usual decision theoretic argument, because it makes you worse at accomplishing them). And those are the kind of problems we worry about with AI.

Comment by nshepperd on Let Values Drift · 2019-06-21T01:37:35.928Z · LW · GW

When we talk of values as nouns, we are talking about the values that people have, express, find, embrace, and so on. For example, a person might say that altruism is one of their values. But what would it mean to “have” altruism as a value or for it to be one of one’s values? What is the thing possessed or of one in this case? Can you grab altruism and hold onto it, or find it in the mind cleanly separated from other thoughts?

Since this appears to be a crux of your whole (fallacious, in my opinion) argument, I'm going to start by just criticizing this point. This argument proves far too much. It proves that:

  • People don't have beliefs, memories or skills
  • Books don't have concepts
  • Objects don't have colors
  • Shapes don't have total internal angles

It seems as if you've rhetorically denied the existence of any abstract properties whatsoever, for the purpose of minimizing values as being "merely" habituations or patterns of action. But I don't see why anyone should actually accept that claim.

Comment by nshepperd on Interpretations of "probability" · 2019-05-10T05:10:25.290Z · LW · GW

Doesn't it mean the same thing in either case? Either way, I don't know which way the coin will land or has landed, and I have some odds at which I'll be willing to make a bet. I don't see the problem.

(Though my willingness to bet at all will generally go down over time in the "already flipped" case, due to the increasing possibility that whoever is offering the bet somehow looked at the coin in the intervening time.)

Comment by nshepperd on Interpretations of "probability" · 2019-05-10T04:47:17.331Z · LW · GW

The idea that "probability" is some preexisting thing that needs to be "interpreted" as something always seemed a little bit backwards to me. Isn't it more straightforward to say:

  1. Beliefs exist, and obey the Kolmogorov axioms (at least, "correct" beliefs do, as formalized by generalizations of logic (Cox's theorem), or by possible-world-counting). This is what we refer to as "bayesian probabilities", and code into AIs when we want them to represent beliefs.
  2. Measures over imaginary event classes / ensembles also obey the Kolmogorov axioms. "Frequentist probabilities" fall into this category.

Personally I mostly think about #1 because I'm interested in figuring out what I should believe, not about frequencies in arbitrary ensembles. But the fact is that both of these obey the same "probability" axioms, the Kolmogorov axioms. Denying one or the other because "probability" must be "interpreted" as exclusively either #1 or #2 is simply wrong (but that's what frequentists effectively do when they loudly shout that you "can't" apply probability to beliefs).
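(For reference, the axioms in question, for a probability function P on a space of events Ω:

    P(A) \ge 0 \quad \text{for every event } A
    P(\Omega) = 1
    P(\bigcup_i A_i) = \sum_i P(A_i) \quad \text{for pairwise disjoint } A_1, A_2, \ldots

Nothing in these axioms says whether P is a degree of belief or a measure over an ensemble; both satisfy them.)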

Now, sometimes you do need to interpret "probability" as something -- in the specific case where someone else makes an utterance containing the word "probability" and you want to figure out what they meant. But the answer there is probably that in many cases people don't even distinguish between #1 and #2, because they'll only commit to a specific number when there's a convenient instance of #2 that makes #1 easy to calculate. For instance, saying 1/6 for a roll of a "fair" die.

People often act as though their utterances about probability refer to #1 though. For instance when they misinterpret p-values as the post-data probability of the null hypothesis and go around believing that the effect is real...

Comment by nshepperd on Functional Decision Theory vs Causal Decision Theory: Expanding on Newcomb's Problem · 2019-05-03T06:07:02.923Z · LW · GW

No, that doesn't work. It seems to me you've confused yourself by constructing a fake symmetry between these problems. It wouldn't make any sense for Omega to "predict" whether you choose both boxes in Newcomb's if Newcomb's were equivalent to something that doesn't involve choosing boxes.

More explicitly:

Newcomb's Problem is "You sit in front of a pair of boxes; both are filled with money if Omega predicted you would take one box in this case, otherwise only one is filled". Note: describing the problem does not require mentioning "Newcomb's Problem"; it can be expressed as a simple game tree (see here for some explanation of the tree format):

[game-tree diagram]

In comparison, your "Inverse Newcomb" is "Omega gives you some money iff it predicts that you take both boxes in Newcomb's Problem, an entirely different scenario (ie. not this case)."

The latter is more of the form "Omega arbitrarily rewards agents for taking certain hypothetical actions in a different problem" (of which a nearly limitless variety can be invented to justify any chosen decision theory¹), rather than being an actual self-contained problem which can be "solved".

The latter also can't be expressed as any kind of game tree without "cheating" and naming "Newcomb's Problem" verbally --- or rather, you can express a similar thing by embedding the Newcomb game tree and referring to the embedded tree, but that converts it into a legitimate decision problem, which FDT of course gives the correct answer to (TODO: draw an example ;).

(¹): Consider Inverse^2 Newcomb, which I consider the proper symmetric inverse of "Inverse Newcomb": Omega puts you in front of two boxes and says "this is not Newcomb's Problem, but I have filled both boxes with money iff I predicted that you take one box in standard Newcomb". Obviously here FDT takes both boxes and a tidy $1,001,000 profit (plus the $1,000,000 from Standard Newcomb). Whereas CDT gets... $1000 (plus $1000 from Standard Newcomb).
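(For concreteness, here's a throwaway sketch of the Standard Newcomb payoffs referenced in the parentheticals above, under the usual $1,000/$1,000,000 box contents and a perfect predictor. Nothing in the argument depends on the code; the game tree is the real object.)

    # Standard Newcomb only, with a perfect predictor; purely illustrative.
    BOX_A = 1_000        # transparent box, always filled
    BOX_B = 1_000_000    # opaque box, filled iff Omega predicted one-boxing

    def payoff(strategy):
        prediction = strategy  # perfect predictor
        box_b = BOX_B if prediction == "one-box" else 0
        return box_b if strategy == "one-box" else BOX_A + box_b

    print(payoff("one-box"))   # 1000000
    print(payoff("two-box"))   # 1000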

Comment by nshepperd on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-25T17:19:39.715Z · LW · GW

Yes, you need to have a theory of physics to write down a transition rule for a physical system. That is a problem, but it's not at all the same problem as the "target format" problem. The only role the transition rule plays here is it allows one to apply induction to efficiently prove some generalization about the system over all time steps.

In principle a different, more distinguished concise description of the system's behaviour could play a similar role (perhaps, the recording of the states of the system + the shortest program that outputs the recording?). Or perhaps there's some way of choosing a distinguished "best" formalization of physics. But that's rather out of scope of what I wanted to suggest here.

But then you are measuring proof shortness relative to that system. And you could be using one of countless other formal systems which always make the same predictions, but relative to which different proofs are short and long.

It would be an O(1) cost to start the proof by translating the axioms into a more convenient format. Much as Kolmogorov complexity is "language dependent" but not asymptotically so, because any particular universal Turing machine can be simulated in any other for a constant cost.

The assumption (including that it takes in and puts out in arabic numerals, and uses "*" as the multiplication command, and that buttons must be pressed,… and all the other things you need to actually use it) includes that.

These are all things that can be derived from a physical description of the calculator (maybe not in fewer steps than it takes to do long multiplication, but certainly in fewer steps than less trivial computations one might do with a calculator). There's no observer dependency here.

Comment by nshepperd on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-21T22:33:33.034Z · LW · GW

That's not an issue in my formalization. The "logical facts" I speak of in the formalized version would be fully specified mathematical statements, such as "if the simulation starts in state X at t=0, the state of the simulation at t=T is Y" or "given that Alice starts in state X, then <some formalized way of categorising states according to favourite ice cream flavour> returns Vanilla". The "target format" is mathematical proofs. Languages (as in English vs Chinese) don't and can't come in to it, because proof systems are language-ignorant.

Note, the formalized criterion is broader than the informal "could you do something useful with this simulation IRL" criterion, even though the latter is the 'inspiration' for it. For instance, it doesn't matter whether you understand the programming language the simulation is written in. If someone who did understand the language could write the appropriate proofs, then the proofs exist.

Similarly, if a simulation is run under homomorphic encryption, it is nevertheless a valid simulation, despite the fact that you can't read it if you don't have the decryption key. Because a proof exists which starts by "magically" writing down the key, proving that it's the correct decryption key, then proceeding from there.

An informal criterion which maybe captures this better would be: If you and your friend both have (view) access to a genuine computation of some logical facts X, it should be possible to convince your friend of X in fewer words by referring to the alleged computation (but you are permitted unlimited time to think first, so you can reverse engineer the simulation, bruteforce some encryption keys, learn Chinese, whatever you like, before talking). A bit like how it's more efficient to convince your friend that 637265729567*37265974 = 23748328109134853258 by punching the numbers into a calculator and saying "see?" than by handing over a paper with a complete long multiplication derivation (assuming you are familiar with the calculator and can convince your friend that it calculates correctly).

Comment by nshepperd on The Cacophony Hypothesis: Simulation (If It is Possible At All) Cannot Call New Consciousnesses Into Existence · 2019-04-16T07:41:05.793Z · LW · GW

This idea is, as others have commented, pretty much Dust theory.

The solution, in my opinion, is the same as the answer to Dust theory: namely, it is not actually the case that anything is a simulation of anything. Yes, you can claim that (for instance) the motion of the atoms in a pebble can be interpreted as a simulation of Alice, in the sense that anything can be mapped to anything... but in a certain more real sense, you can't.

And that sense is this: an actual simulation of Alice running on a computer grants you certain powers - you can step through the simulation, examine what Alice does, and determine certain facts such as Alice's favourite ice cream flavour (these are logical facts, given the simulation's initial state). If the simulation is an upload of your friend Alice, then by doing so you learn meaningful new facts about your friend.

In comparison, a pebble "interpreted" as a simulation of Alice affords you no such powers, because the interpretation (mapping from pebble states to simulation data) is entirely post-hoc. The only way to pin down the mapping---such that you could, for instance, explicitly write it down, or take the pebble's state and map it to an answer about Alice's favourite ice cream---is to already have carried out the actual simulation, separately, and already know these things about Alice.

In general, "legitimate" computations of certain logical facts (such as the answers to questions one might ask about simulations of people) should, in a certain sense, make it easier to calculate those logical facts than doing so from scratch.

A specific formalization of this idea would be that a proof system equipped with an oracle (axiom schema) describing the states of the physical system which allegedly computed these facts, as well as its transition rule, should be able to find proofs for those logical facts in fewer steps than one without such axioms.
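(Stated with a bit of ad-hoc notation, which is mine and not standard: let T be the background proof system, O the axiom schema describing the physical states and the transition rule, and φ the logical fact in question. The criterion is roughly

    \min\{\,|\pi| : \pi \text{ is a proof of } \varphi \text{ in } T \cup O\,\} \;<\; \min\{\,|\pi| : \pi \text{ is a proof of } \varphi \text{ in } T\,\}

i.e. the oracle axioms about the physical system strictly shorten the shortest proof of φ.)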

Such proofs will involve first coming up with a mapping (such as interpreting certain electrical junctions as NAND gates), proving it valid using the transition rules, then using induction to jump to "the physical state at timestep t is X, therefore Alice's favourite ice cream flavour is Y". Note that the requirement that these proofs be short naturally results in these "interpretations" being simple.

As far as I know, this specific formalization of the anti-Dust idea is original to me, though the idea that "interpretations" of things as computations ought to be "simple" is not particularly new.

Comment by nshepperd on Highlights from "Integral Spirituality" · 2019-04-16T06:31:58.499Z · LW · GW

We can (and should) have that discussion, we should just have it on a separate post

Can you point to the specific location that discussion "should" happen at?

Comment by nshepperd on Highlights from "Integral Spirituality" · 2019-04-16T06:01:46.539Z · LW · GW
Comment by nshepperd on The Hard Work of Translation (Buddhism) · 2019-04-15T17:59:21.246Z · LW · GW

The two parts I mentioned are simply the most obviously speculative and unjustified examples. I also don't have any real reason to believe the vaguer pop psychology claims about building stories, backlogs, etc.

The post would probably have been a bit cleaner to not mention the few wild speculations he mentions, but getting caught up on the tiny details seems to miss the forest from the trees.

It seems to me LW has a big epistemic hygiene problem, of late. We need to collectively stop making excuses for posting wild speculations as if they were fact, just because the same post also contains some interesting 'insight'. We should be downvoting such posts and saying to the author "go away and write this again, with the parts that you can't justify removed".

Doing so may reveal that the 'insight' either does or does not successfully stand alone when the supporting 'speculation' is removed. Either way, we learn something valuable about the alleged insight; and we benefit directly by not spreading claims that lack evidence.

Comment by nshepperd on The Hard Work of Translation (Buddhism) · 2019-04-12T21:09:13.500Z · LW · GW

For a post that claims to be a "translation" of Buddhism, this seems to contain:

  • No Pali text;
  • No specific references to Pali text, or any sources at all;
  • No actual translation work of any kind.

On the other hand, it does contain quite a bit of unjustified speculation. "Literal electrical resistance in the CNS", really? "Rewiring your CNS"? Why should I believe any of this?

Why are people upvoting this?

Comment by nshepperd on Boundaries - A map and territory experiment. [post-rationality] · 2019-02-02T04:47:04.439Z · LW · GW

"Above the map"? "Outside the territory"? This is utter nonsense. Rationality insists no such thing. Explicitly the opposite, in fact.

Given things like this too:

Existing map-less is very hard. The human brain really likes to put maps around things.

At this point I have to wonder if you're just rounding off rationality to the nearest thing to which you can apply new-age platitudes. Frankly, this is insulting.

Comment by nshepperd on Masculine Virtues · 2019-01-31T17:54:16.311Z · LW · GW

You don't need to estimate this.

A McGill University study found that more than 60 percent of college-level soccer players reported symptoms of concussion during a single season. Although the percentage at other levels of play may be different, these data indicate that head injuries in soccer are more frequent than most presume.

A 60% chance of concussion is more than enough for me to stay far away.

Comment by nshepperd on No option to report spam · 2019-01-08T17:40:48.609Z · LW · GW

Prevention over removal. Old LW required a certain amount of karma in order to create posts, and we correspondingly didn't have a post spam problem that I remember. I strongly believe that this requirement should be re-introduced (with or without a moderator approval option for users without sufficient karma).

Comment by nshepperd on Topological Fixed Point Exercises · 2018-11-21T01:16:05.388Z · LW · GW

Proof of #4, but with unnecessary calculus:

Not only is there an odd number of tricolor triangles; counted by orientation (RGB clockwise vs. anticlockwise), the clockwise ones outnumber the anticlockwise ones by exactly one. Proof: define a continuously differentiable vector field on the plane, by letting the field at each vertex be 0, and the field in the center of each edge be a vector of magnitude 1 pointing in the direction R->G->B->R (or 0 if the two adjacent vertices are the same color). Extend the field to the complete edges, then to the interiors of the triangles, by some interpolation method with continuous derivative (eg. cosine interpolation).

Assume the line integral along one unit edge in the direction R->G or G->B or B->R to be 1/3. (Without loss of generality since we can rescale the graph/vectors to make this true). Then a similar parity argument to Sperner's 1d lemma (or the FTC) shows that the clockwise line integral along each large edge is 1/3, hence the line integral around the large triangle is 1/3+1/3+1/3=1.

By Green's theorem, this is equal to the integrated curl of the field in the interior of the large triangle, and hence equal (by another invocation of Green's theorem) to the summed clockwise line integrals around each small triangle. The integrals around a unicolor or bicolor triangle are 0 and -1/3 + 1/3 + 0 = 0 respectively, leaving only tricolor triangles, whose integral is ±1 depending on orientation. Thus: (tricolor clockwise) - (tricolor anticlockwise) = 1. QED.
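(Written out, with every line integral taken clockwise and t_i ranging over the small triangles; each interior edge is traversed once in each direction and cancels:

    1 \;=\; \oint_{\partial T} \mathbf{F}\cdot d\mathbf{r} \;=\; \sum_i \oint_{\partial t_i} \mathbf{F}\cdot d\mathbf{r} \;=\; (\#\text{tricolor clockwise}) - (\#\text{tricolor anticlockwise})

using the per-triangle values computed above: 0 for unicolor and bicolor triangles, ±1 for tricolor ones.)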

Comment by nshepperd on Strong Votes [Update: Deployed] · 2018-11-08T03:02:27.143Z · LW · GW

Your interpretation of the bolded part is correct.

Comment by nshepperd on Strong Votes [Update: Deployed] · 2018-11-08T02:32:57.826Z · LW · GW

We got to discussing this on #lesswrong recently. I don't see anyone here pointing this out yet directly, so:

Can you technically Strong Upvote everything? Well, we can’t stop you. But we’re hoping a combination of mostly-good-faith + trivial inconveniences will result in people using Strong Upvotes when they feel it’s actually important.

This approach, hoping that good faith will prevent people from using Strong votes "too much", is a good example of an Asshole Filter (linkposted on LW last year). You've set some (unclear) boundaries, then due to not enforcing them, reward those who violate them with increased control over the site conversation. Chris_Leong gestures towards this without directly naming it in a sibling comment.

In my opinion “maybe put limits on strong upvotes if this seems to be a problem” is not the correct response to this problem, nor would be banning or otherwise 'disciplining' users who use strong votes "too much". The correct response is to remove the asshole filter by altering the incentives to match what you want to happen. Options include:

  1. Making votes normal by default but encouraging users to use strong votes freely, up to 100% of the time, so that good faith users are not disadvantaged. (Note: still disenfranchises users who don't notice that this feature exists, but maybe that's ok.)
  2. Making votes strong by default so that it's making a "weak" vote that takes extra effort. (Note: this gives users who carefully make weak votes when they have weak opinions less weight, but at least they do this with eyes open and in the absence of perverse incentives.)
  3. #2 but with some algorithmic adjustment to give careful users more weight instead of less. This seems extremely difficult to get right (cf. slashdot metamoderation). Probably the correct answer there is some form of collaborative filtering.

Personally I favour solution #1.

I'll add that this is not just a hypothetical troll-control issue. This is also a UX issue. Forcing users to navigate an unclear ethical question and prisoner's dilemma—how much strong voting is "too much"—in order to use the site is unpleasant and a bad user experience. There should not be a "wrong" action available in the user interface.

PS. I'll concede that making strong votes an actually limited resource that is enforced by the site economically (eg. with Token Bucket quota) would in a way also work, due to eliminating the perceived need for strong votes to be limited by "good faith". But IMO the need is only perceived, and not real. Voting is for expressing preferences, and preferences are unlimited.
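(A minimal sketch of the kind of quota I mean, assuming a per-user bucket; the class name and numbers are made up for illustration, not a proposal for the actual codebase:)

    import time

    class StrongVoteBucket:
        """Allow a strong vote only when a token is available; tokens refill slowly."""
        def __init__(self, capacity=5, refill_per_hour=1):
            self.capacity = capacity
            self.tokens = float(capacity)
            self.refill_per_second = refill_per_hour / 3600.0
            self.last = time.monotonic()

        def try_strong_vote(self):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_second)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True   # strong vote goes through
            return False      # fall back to a normal-strength vote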

Comment by nshepperd on Kalman Filter for Bayesians · 2018-10-23T04:21:49.306Z · LW · GW

Good post!

Is it common to use Kalman filters for things that have nonlinear transformations, by approximating the posterior with a Gaussian (eg. calculating the closest Gaussian distribution to the true posterior by JS-divergence or the like)? How well would that work?
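(To illustrate the kind of approximation I have in mind, here's a crude sketch: push a Gaussian belief through a made-up nonlinear measurement model, then moment-match a Gaussian to the reweighted samples. Moment matching rather than JS-divergence, but the same general idea.)

    import numpy as np

    rng = np.random.default_rng(0)
    prior_mean, prior_var = 0.0, 1.0
    measurement, noise_var = 2.0, 0.5
    h = lambda x: np.sin(x) + x   # arbitrary nonlinear measurement function

    x = rng.normal(prior_mean, np.sqrt(prior_var), size=100_000)  # prior samples
    w = np.exp(-0.5 * (measurement - h(x)) ** 2 / noise_var)      # likelihood weights
    w /= w.sum()

    post_mean = np.sum(w * x)                     # moments of the weighted samples
    post_var = np.sum(w * (x - post_mean) ** 2)   # = the approximating Gaussian
    print(post_mean, post_var)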

Grammar comment--you seem to have accidentally a few words at

Measuring multiple quantities: what if we want to measure two or more quantities, such as temperature and humidity? Furthermore, we might know that these are [missing words?] Then we now have multivariate normal distributions.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-21T22:57:58.293Z · LW · GW

How big was your mirror, and how much of your face did you see in it?

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-20T02:43:01.580Z · LW · GW

C is basically a statement that, if included in a valid argument about the truth of P, causes the argument to tell us either P or ~P. That’s definitionally what it means to be able to know the criterion of truth.

That's not how algorithms work and seems... incoherent.

That you want to deny C is great,

I did not say that either.

because I think (as I’m finding with Said), that we already agree, and any disagreement is the consequence of misunderstanding, probably because it comes too close to sounding to you like a position that I would also reject, and the rest of the fundamental disagreement is one of sentiment, perspective, having worked out the details, and emphasis.

No, I don't think we do agree. It seems to me you're deeply confused about all of this stuff.

Here's an exercise: Say that we replace "C" by a specific concrete algorithm. For instance the elementary long multiplication algorithm used by primary school children to multiply numbers.
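(For definiteness, one rendering of that school algorithm; any faithful implementation will do for the exercise below.)

    def long_multiply(a: str, b: str) -> str:
        """Schoolbook long multiplication on decimal digit strings."""
        result = [0] * (len(a) + len(b))
        for i, da in enumerate(reversed(a)):
            carry = 0
            for j, db in enumerate(reversed(b)):
                total = result[i + j] + int(da) * int(db) + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(b)] += carry
        digits = "".join(map(str, reversed(result))).lstrip("0")
        return digits or "0"

    assert long_multiply("128", "32") == "4096"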

Does anything whatsoever about your argument change with this substitution? Have we proved that we can explain multiplication to a rock? Or perhaps we've proved that this algorithm doesn't exist, and neither do schools?

Another exercise: suppose, as a counterfactual, that Laplace's demon exists, and furthermore likes answering questions. Now we can take a specific algorithm C: "ask the demon your question, and await the answer, which will be received within the minute". By construction this algorithm always returns the correct answer. Now, your task is to give the algorithm, given only these premises, that I can follow to convince a rock that Euclid's theorem is true.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-19T23:29:01.944Z · LW · GW

It seems that you don't get it. Said just demonstrated that even if C exists it wouldn't imply a universally compelling argument.

In other words, this:

Suppose we know the criterion of truth, C; that is, there exists (not counterfactually but actually as in anyone can observe this thing) a procedure/​algorithm to assess if any given statement is true. Let P be a statement. Then there exists some argument, A, contingent on C such that A implies P or ~P. Thus for all P we can know if P or ~P. This would make A universally compelling, i.e. A is a mind-independent argument for the truth value of all statements that would convince even rocks.

appears to be a total non sequitur. How does the existence of an algorithm enable you to convince a rock of anything? At a minimum, an algorithm needs to be implemented on a computer... Your statement, and therefore your conclusion that C doesn't exist, doesn't follow at all.

(Note: In this comment, I am not claiming that C (as you've defined it) exists, or agreeing that it needs to exist for any of my criticisms to hold.)

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-19T18:34:30.674Z · LW · GW

It doesn't seem to be a strawman of what eg. gworley and TAG have been saying, judging by the repeated demands for me to supply some universally compelling "criterion of truth" before any of the standard criticisms can be applied. Maybe you actually disagree with them on this point?

It doesn't seem like applying full force in criticism is a priority for the 'postrationality' envisioned by the OP, either, or else they would not have given examples (compellingness-of-story, willingness-to-life) so trivial to show as bad ideas using standard arguments.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-19T00:54:22.820Z · LW · GW

As for my story about how the brain works: yes, it is obviously a vast simplification. That does not make it false, especially given that “the brain learns to use what has worked before and what it thinks is likely to make it win in the future” is exactly what Eliezer is advocating in the above post.

Even if true, this is different from "epistemic rationality is just instrumental rationality"; as different as adaptation executors are from fitness maximisers.

Separately, it's interesting that you quote this part:

The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.

Because it seems to me that this is exactly what advocates of "postrationality" here are not doing, when they take the absence of universally compelling arguments as license to dismiss rationality and truth-based arguments against their positions.¹

Eliezer also says this:

Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.

It seems to me that applying full force in criticism of postrationality amounts to something like the below:

"Indeed, compellingness-of-story, willingness-to-life, mythic mode, and many other non-evidence-based criteria are alternative criteria which could be used to select beliefs. However we have huge amounts of evidence (catalogued in the Sequences, and in the heuristics and biases literature) that these criteria are not strongly correlated to truth, and therefore will lead you to holding wrong beliefs, and furthermore that holding wrong beliefs is instrumentally harmful, and, and [the rest of the sequences, Ethical Injunctions, etc]..."

"Meanwhile, we also have vast tracts of evidence that science works, that results derived with valid statistical methods replicate far more often than any others, that beliefs approaching truth requires accumulating evidence by observation. I would put the probability that rational methods are the best criteria I have for selecting beliefs at . Hence, it seems decisively not worth it to adopt some almost certainly harmful 'postrational' anti-epistomology just because of that probability. In any case, per Ethical Injunctions, even if my probabilities were otherwise, it would be far more likely that I've made a mistake in reasoning than that adopting non-rational beliefs by such methods would be a good idea."

Indeed, much of the Sequences could be seen as Eliezer considering alternative ways of selecting beliefs or "viewing the world", analyzing these alternative ways, and showing that they are contrary to and inferior to rationality. Once this has been demonstrated, we call them "biases". We don't cling to them on the basis that "we can't know the criterion of truth".

Advocates of postrationality seem to be hoping that the fact that P(Occam's razor) < 1 makes these arguments go away. It doesn't work like that. P(Occam's razor) = 1 - ε at most makes a fraction ε of these arguments go away. And we have a lot of evidence for Occam's razor.

¹ As gworley seems to do here and here seemingly expecting me to provide a universally compelling argument in response.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-18T17:31:12.692Z · LW · GW

I'll have more to say later but:

The way that I’d phrase it is that there’s a difference between considering a claim to be true, and considering its justification universally compelling.

Both of these are different from the claim actually being true. The fact that Occam's razor is true is what causes the physical process of (occamian) observation and experiment to yield correct results. So you see, you've already managed to rephrase what I've been saying into something different by conflating map and territory.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-18T02:50:55.632Z · LW · GW

This stuff about rain dancing seems like just the most banal epistemological trivialities, which have already been dealt with thoroughly in the Sequences. The reasons why such "tests" of rain dancing don't work are well known and don't need to be recapitulated here.

But to do that, you need to use a meta-model. When I say that we don’t have direct access to the truth, this is what I mean;

This has nothing to do with causal pathways, magic or otherwise, direct or otherwise. Magic would not turn a rock into a philosopher even if it should exist.

Yes, carrying out experiments to determine reality relies on Occam's razor. It relies on Occam's razor being true. It does not in any way rely on me possessing some magical universally compelling argument for Occam's razor. Because Occam's razor is in fact true in our universe, experiment does in fact work, and thus the causal pathway for evaluating our models does in fact exist: experiment and observation (and bayesian statistics).

I'm going to stress this point because I noticed others in this thread make this seemingly elementary map-territory confusion before (though I didn't comment on it there). In fact it seems to me now that conflating these things is maybe actually the entire source of this debate: "Occam's razor is true" is an entirely different thing from "I have access to universally compelling arguments for Occam's razor", as different as a raven and the abstract concept of corporate debt. The former is true and useful and relevant to epistemology. The latter is false, impossible and useless.

Because the former is true, when I say "in fact, there is a causal pathway to evaluate our models: looking at reality and doing experiments", what I say is, in fact, true. The process in fact works. It can even be carried out by a suitably programmed robot with no awareness of what Occam's razor or "truth" even is. No appeals or arguments about whether universally compelling arguments for Occam's razor exist can change that fact.

(Why am I so lucky as to be a mind whose thinking relies on Occam's razor in a world where Occam's razor is true? Well, animals evolved via natural selection in an Occamian world, and those whose minds were more fit for that world survived...)

But honestly, I'm just regurgitating Where Recursive Justification Hits Bottom at this point.

This is a reinforcement learning system which responds to rewards: if particular thoughts or assumptions (...) have led to actions which brought the organism (internally or externally generated rewards), then those kinds of thoughts and assumptions will be reinforced.

This seems like a gross oversimplification to me. The mind is a complex dynamical system made of locally reinforcement-learning components, which doesn't do any one thing all the time.

In other words, we end up having the kinds of beliefs that seem useful, as evaluated by whether they succeed in giving us rewards. Epistemic and instrumental rationality were the same all along.

And this seems simply wrong. You might as well say "epistemic rationality and chemical action-potentials were the same all along". Or "jumbo jets and sheets of aluminium were the same all along". A jumbo jet might even be made out of sheets of aluminium, but a randomly chosen pile of the latter sure isn't going to fly.

As for your examples, I don't have anything to add to Said's observations.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T15:15:55.398Z · LW · GW

Indeed, the scientific history of how observation and experiment led to a correct understanding of the phenomenon of rainbows is long and fascinating.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T14:28:04.061Z · LW · GW

I'm sorry, what? In this discussion? That seems like an egregious conflict of interest. You don't get to unilaterally decide that my comments are made in bad faith based on your own interpretation of them. I saw which comment of mine you deleted and honestly I'm baffled by that decision.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T04:39:01.399Z · LW · GW

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

and to be pointed about it I think believing you can identify the criterion of truth is a “comforting” belief that is either contradictory or demands adopting non-transcendental idealism

Actually... I was going to edit my comment to add that I'm not sure that I would agree that I "think we can know truth well enough to avoid the problem of the criterion" either, since your conception of this notion seems to intrinsically require some kind of magic, leading me to believe that you somehow mean something different by this than I would. But I didn't get around to it in time! No matter.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T02:36:20.453Z · LW · GW

If I may summarize what I think the key disagreement is, you think we can know truth well enough to avoid the problem of the criterion and gain nothing from addressing it.

That's not my only disagreement. I also think that your specific proposed solution does nothing to "address" the problem (in particular because it just seems like a bad idea, in general because "addressing" it to your satisfaction is impossible), and only serves as an excuse to rationalize holding comforting but wrong beliefs under the guise of doing "advanced philosophy". This is why the “powerful but dangerous tool” rhetoric is wrongheaded. It's not a powerful tool. It doesn't grant any ability to step outside your own head that you didn't have before. It's just a trap.

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T00:49:56.100Z · LW · GW
Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T00:44:05.858Z · LW · GW

I don't have to solve the problem of induction to look out my window and see whether it is raining. I don't need 100% certainty, a four-nines probability estimate is just fine for me.

Where's the "just go to the window and look" in judging beliefs according to "compellingness-of-story"?

Comment by nshepperd on Outline of Metarationality, or much less than you wanted to know about postrationality · 2018-10-16T00:33:59.698Z · LW · GW

Of course not, and that’s the point.

The point... is that judging beliefs according to whether they achieve some goal--or anything else--is no more reliable than judging beliefs according to whether they are true, is in no way a solution to the problem of induction or even a sensible response to it, and most likely only makes your epistemology worse?

Indeed, which is why metarationality must not forget to also include all of rationality within it!

Can you explain this in a way that doesn't make it sound like an empty applause light? How can I take compellingness-of-story into account in my probability estimates without violating the Kolmogorov axioms?

To say a little more on danger, I mean dangerous to the purpose of fulfilling your own desires.

Yes, that's exactly the danger.

Unlike politics, which is an object-level danger you are pointing to, postrationality is a metalevel danger, but specifically because it’s a more powerful set of tools rather than a shiny thing people like to fight over. This is like the difference between being weary of generally unsafe conditions that cannot be used and dangerous tools that are only dangerous if used by the unskilled.

Thinking you're skilled enough to use some "powerful but dangerous" tool is exactly the problem. You will never be skilled enough to deliberately adopt false beliefs without suffering the consequences.

Ethical Injunctions:

But surely… if one is aware of these reasons… then one can simply redo the calculation, taking them into account. So we can rob banks if it seems like the right thing to do after taking into account the problem of corrupted hardware and black swan blowups. That’s the rational course, right?

There’s a number of replies I could give to that.

I’ll start by saying that this is a prime example of the sort of thinking I have in mind, when I warn aspiring rationalists to beware of cleverness.