Posts

FC final: Can Factored Cognition schemes scale? 2021-01-24T22:18:55.892Z
Three types of Evidence 2021-01-19T17:25:20.605Z
Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) 2020-12-29T19:48:04.435Z
Intuition 2020-12-20T21:49:29.947Z
Clarifying Factored Cognition 2020-12-13T20:02:38.100Z
Traversing a Cognition Space 2020-12-07T18:32:21.070Z
Idealized Factored Cognition 2020-11-30T18:49:47.034Z
Preface to the Sequence on Factored Cognition 2020-11-30T18:49:26.171Z
Hiding Complexity 2020-11-20T16:35:25.498Z
A guide to Iterated Amplification & Debate 2020-11-15T17:14:55.175Z
What is a probability? 2020-11-13T16:12:27.969Z
Do you vote based on what you think total karma should be? 2020-08-24T13:37:52.987Z
Existential Risk is a single category 2020-08-09T17:47:08.452Z
Inner Alignment: Explain like I'm 12 Edition 2020-08-01T15:24:33.799Z
Rafael Harth's Shortform 2020-07-22T12:58:12.316Z
The "AI Dungeons" Dragon Model is heavily path dependent (testing GPT-3 on ethics) 2020-07-21T12:14:32.824Z
UML IV: Linear Predictors 2020-07-08T19:06:05.269Z
How to evaluate (50%) predictions 2020-04-10T17:12:02.867Z
UML final 2020-03-08T20:43:58.897Z
UML XIII: Online Learning and Clustering 2020-03-01T18:32:03.584Z
What to make of Aubrey de Grey's prediction? 2020-02-28T19:25:18.027Z
UML XII: Dimensionality Reduction 2020-02-23T19:44:23.956Z
UML XI: Nearest Neighbor Schemes 2020-02-16T20:30:14.112Z
A Simple Introduction to Neural Networks 2020-02-09T22:02:38.940Z
UML IX: Kernels and Boosting 2020-02-02T21:51:25.114Z
UML VIII: Linear Predictors (2) 2020-01-26T20:09:28.305Z
UML VII: Meta-Learning 2020-01-19T18:23:09.689Z
UML VI: Stochastic Gradient Descent 2020-01-12T21:59:25.606Z
UML V: Convex Learning Problems 2020-01-05T19:47:44.265Z
Excitement vs childishness 2020-01-03T13:47:44.964Z
Understanding Machine Learning (III) 2019-12-25T18:55:55.715Z
Understanding Machine Learning (II) 2019-12-22T18:28:07.158Z
Understanding Machine Learning (I) 2019-12-20T18:22:53.505Z
Insights from the randomness/ignorance model are genuine 2019-11-13T16:18:55.544Z
The randomness/ignorance model solves many anthropic problems 2019-11-11T17:02:33.496Z
Reference Classes for Randomness 2019-11-09T14:41:04.157Z
Randomness vs. Ignorance 2019-11-07T18:51:55.706Z
We tend to forget complicated things 2019-10-20T20:05:28.325Z
Insights from Linear Algebra Done Right 2019-07-13T18:24:50.753Z
Insights from Munkres' Topology 2019-03-17T16:52:46.256Z
Signaling-based observations of (other) students 2018-05-27T18:12:07.066Z
A possible solution to the Fermi Paradox 2018-05-05T14:56:03.143Z
The master skill of matching map and territory 2018-03-27T12:06:53.377Z
Intuition should be applied at the lowest possible level 2018-02-27T22:58:42.000Z
Consider Reconsidering Pascal's Mugging 2018-01-03T00:03:32.358Z

Comments

Comment by sil-ver on Why I'm excited about Debate · 2021-01-17T10:24:57.647Z · LW · GW

I think the Go example really gets to the heart of why I think Debate doesn't cut it.

Your comment is an argument against using Debate to settle moral questions. However, what if Debate is trained on Physics and/or math questions, with the eventual goal of asking "what is a provably secure alignment proposal?"

Comment by sil-ver on [deleted post] 2021-01-17T10:20:44.414Z

Before offering an "X is really about Y" signaling explanation, it's important to falsify the "X is about X" hypothesis first. Once that's done, signaling explanations require, at minimum:

  1. An action or decision by the receiver that the sender is trying to motivate.
  2. (2.1) An explanation for why the receiver is listening for signals in the first place, and (2.2) why the sender is trying to communicate them.
  3. A language that the sender has reason to think the receiver will understand and believe as the sender intended.
  4. A physical mechanism for sending and receiving the signal.

(Added numbers for reference.)

I think 1, 2.1, and 3 are all wrong, in that none of them are required for a signaling hypothesis to be plausible. I believe you're assuming that signaling is effective and/or rational, but this is a mistake. Signaling was optimized to be effective in the ancestral environment, so there's no reason why it should still be effective today. As far as I can tell, it generally is not.

As an example, consider men wearing solid shoes in the summer despite finding them uncomfortable. There is no action this is trying to motivate, and there is no reason to expect the receiver is listening -- in fact, there is often good reason to expect that they are not listening (in many contexts, people really don't care about your shoes). Nonetheless, I think conformity signaling is the correct explanation for this behavior.

The pilot example is problematic because in this case, signaling is part of a high-level plan. This is a non-central example. Most of the time, signaling is motivated by evolutionary instincts, like the fear of standing out. In the case of religion, I think this is most of the story. Those instincts can then translate into high-level behavior like going to church, but it's not the beginning of the causal chain.

Comment by sil-ver on Pseudorandomness contest: prizes, results, and analysis · 2021-01-15T10:11:39.253Z · LW · GW

Thanks for hosting this contest. The overconfidence thing in particular is a fascinating data point. When I was done with my function that output final probabilities, I deliberately made it way more agnostic, thinking that at least now my estimates are modest -- but it turns out that I should have gone quite a bit further with that adjustment.

I'm also intrigued by the variety of approaches for analyzing strings. I solely looked at frequency of monotone groups (i.e., how many single 0's, how many 00's, how many 000's, how many 111's, etc.), and as a result, I have widely different estimates (compared to the winning submissions) on some of the strings where other methods were successful.
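For concreteness, here is a minimal Python sketch of the kind of statistic I mean (not my actual contest code; the names and the softening constant are made up):

    from itertools import groupby

    def run_length_counts(s):
        # Count how often each monotone group (run) length occurs,
        # e.g. '0010111' -> runs '00', '1', '0', '1', '11' -> {2: 2, 1: 3}.
        counts = {}
        for _, run in groupby(s):
            length = len(list(run))
            counts[length] = counts.get(length, 0) + 1
        return counts

    def soften(p, weight=0.5):
        # Shrink a probability toward 0.5 to counteract overconfidence;
        # weight=1 leaves p unchanged, weight=0 is maximally agnostic.
        return 0.5 + weight * (p - 0.5)

Comparing a string's run-length histogram against what a fair coin would produce is then a matter of choosing a distance measure and mapping it to a probability.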

Comment by sil-ver on Open & Welcome Thread - January 2021 · 2021-01-12T08:23:55.819Z · LW · GW

Do the newest numbers indicate that the new Covid strain isn't that bad after all, for whatever reason? If not, why not?

Edit: Zvi gave a partial answer here.

Comment by sil-ver on In Defense of Twitter's Decision to Ban Trump · 2021-01-11T18:17:43.501Z · LW · GW

I happen to agree with your conclusion, but I don't think you're addressing what EY said. He tweeted the following:

What America needs now, to heal, is for the left and the right to be on entirely different social networks. Still with the ability to subtweet alleged screencaps from the Other network of Others being outrageous, of course! But with no ability for Others to clarify or respond.

My translation: I'm worried that banning Trump from Twitter will increase polarization because it will make the two tribes more segregated than they were before. This is only loosely similar to your #7, and otherwise missing from the list entirely.

I also think #8 is unlikely. It doesn't strike me as plausible that the Capitol incident provided any rational person with significant evidence on which to update their view of Trump. On the other hand, public opinion appears to have shifted significantly. A financial motive seems likely here, especially for Zuckerberg.

Comment by sil-ver on Grokking illusionism · 2021-01-06T23:32:39.142Z · LW · GW

Comparing consciousness to plastic surgery seems to me to be a false analogy. If you have your model of particles bouncing around, then plastic surgery is a label you can put on a particular class of sequences of particles doing things. If you didn't have the name, there wouldn't be anything to explain; the particles could still do the same thing. Consciousness/subjective experience describes something that is fundamentally non-material. It may or may not be caused by particles doing things, but it's not itself made of particles.

If your response to this is that there is no such thing as subjective experience -- which is what I thought your position was, and what I understand strong illusionism to be -- then this is exactly what I mean when I say consciousness isn't real. By 'consciousness', I'm exclusively referring to the qualitatively different thing called subjective experience. This thing either exists or doesn't exist. I'm not talking about the process that makes people move their fingers to type things about consciousness.

I apologize for not tabooing 'real', but I don't have a model of how 'is consciousness real' can be anything but a well-defined question whose answer is either 'yes' or 'no'. The 'as real as X' framing doesn't make any sense to me. It seems like trying to apply a spectrum to a binary question.

Comment by sil-ver on Grokking illusionism · 2021-01-06T17:23:57.180Z · LW · GW

Apologies, I communicated poorly. In my experience, discussions about consciousness are particularly prone to misunderstandings. Let me rephrase my comment.

  1. Many (most?) people believe that consciousness is an emergent phenomenon but also a real thing.
  2. My assumption from reading your first comment was that you believe #1 is close to impossible. I agree with that.
  3. I took your first comment (in particular this paragraph)...

Because ultimately, down at the floor, it's all just particles and forces and extremely well understood probabilities. There's no fundamental primitive for 'consciousness' or 'experience', any more than there's a fundamental primitive for 'green' or 'traffic' or 'hatred'. Those particles and forces down at the floor are the territory; everything else is a label.

... as saying that #2 implies illusionism must be true. I'm saying this is not the case because you can instead stipulate that consciousness is a primitive. If every particle is conscious, you don't have the problem of getting real consciousness out of nothing. (You do have the problem of why your experience appears unified, but that seems much less impossible.)

Or to say the same thing differently, my impression/worry is that people accept that 'consciousness isn't real' primarily because they think the only alternative is 'consciousness is real and emerges from unconscious matter', when in fact you can have a coherent world view that disputes both claims.

Comment by sil-ver on Grokking illusionism · 2021-01-06T15:48:08.388Z · LW · GW

That's fair. However, if you share the intuition that consciousness being emergent is extremely implausible, then going from there directly to illusionism means only comparing it to the (for you) weakest alternative. And that seems like the relevant step for people in this thread other than you.

Comment by sil-ver on Grokking illusionism · 2021-01-06T15:14:51.421Z · LW · GW

There's no fundamental primitive for 'consciousness'

I'm not sure if this is the case, but I'm worried that people subscribe to illusionism because they only compare it to the weakest possible alternative, which (I would say) is consciousness being an emergent phenomenon. If you just assume that there's no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.

However, you can also just dispute the claim and assume consciousness is a primitive, which gets around the hard problem. That leaves the question 'why is consciousness a primitive', which doesn't seem particularly more mysterious than 'why is matter a primitive'.

Comment by sil-ver on Predictions for 2021 · 2020-12-31T22:08:01.509Z · LW · GW

I’d also like to advertise a challenge for my readers. You can email me with your own predictions for a subset of my predictions. I’ll judge your predictions against mine using the logarithmic scoring rule.

Out of curiosity, why logarithmic scoring and not Brier scoring? (I like logarithmic scoring better, but you used Brier in the pseudorandomness contest.)
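(For reference, on a single yes/no prediction with assigned probability p, the two rules look like this -- a quick sketch, not code from either contest:)

    import math

    def log_score(p, outcome):
        # Logarithm of the probability assigned to what actually happened.
        # Closer to 0 is better; a confident miss is penalized without bound.
        return math.log(p if outcome else 1 - p)

    def brier_score(p, outcome):
        # Squared error against the 0/1 outcome. Lower is better;
        # the worst possible penalty is capped at 1.
        return (p - (1 if outcome else 0)) ** 2

That cap is the difference I care about: under the Brier rule, saying 99% and being wrong costs at most 1, while the log rule's penalty grows without bound as confidence approaches certainty.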

Would you also take money bets in addition to just virtual scores?

Comment by sil-ver on Book review: Rethinking Consciousness · 2020-12-31T17:19:25.026Z · LW · GW

There's a funny thing about nihilism: It's not decision-relevant. Imagine being a nihilist, deciding whether to spend your free time trying to bring about an awesome post-AGI utopia, vs sitting on the couch and watching TV. Well, if you're a nihilist, then the awesome post-AGI utopia doesn't matter. But watching TV doesn't matter either. Watching TV entails less exertion of effort. But that doesn't matter either. Watching TV is more fun (well, for some people). But having fun doesn't matter either. There's no reason to throw yourself at a difficult project. There's no reason not to throw yourself at a difficult project. Isn't it funny?

I agree except for the funny part.

I don't have a grand ethical theory, I'm not ready to sit in judgment of anyone else, I'm just deciding what to do for my own account. There's a reason I ended the post with "Dentin's prayer of the altruistic nihilist"; that's how I feel, at least sometimes. I choose to care about information-processing systems that are (or "perceive themselves to be"?) conscious in a way that's analogous to how humans do that, with details still uncertain. I want them to be (or "to perceive themselves to be"?) happy and have awesome futures. So here I am :-D

Thanks for describing this. I'm both impressed and a bit shocked that you're being consistent.

This is a pretty weird claim, right? I mean, you remember writing down the statement. Would you agree with that claim? No way, right?

Let's assume I do. (I think I would have agreed a few years ago, or at least assigned significant probability to this.) I still think (and thought then) that there is a slam-dunk chain from 'I experience consciousness' to 'therefore, consciousness exists'.

Let $E$ = 'I experience something' and $C$ = 'consciousness exists'. Clearly $E \Rightarrow C$, because experiencing anything is already sufficient for what I call consciousness. Furthermore, clearly $E$ is true. Hence $C$ is true. Nothing about your Claim contradicts any step of this argument.

I think the reason intuitions differ so much on this topic is that we are comparing very low-probability theories against each other, and the question is which one is lower. (And operations with low numbers are prone to higher errors than operations with higher numbers.) At least my impression (correct me if I'm wrong) is that the subjective proof of consciousness would be persuasive, except that it seems to imply Claim, and Claim is a no-go, so therefore the subjective proof has to give in. I.e., you have both $P(\text{Claim}) \approx 0$ and $\text{subjective proof} \Rightarrow \text{Claim}$, and therefore $P(\text{subjective proof is valid}) \approx 0$.

My main point is that it doesn't make sense to assign anything lower probability than $E$ and $E \Rightarrow C$: $E$ is immediately proven by the fact that you experience stuff, and $E \Rightarrow C$ holds by the definition of $C$, so it is utterly trivial. You can make a coherent-sounding (if far-fetched) argument for why Claim is true, but I'm not familiar with any coherent argument that $E$ is false (other than that it must be false because of what it implies, which is again the argument above).
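Spelling the comparison out (my own rendering, reading $\Rightarrow$ as the material conditional):

$$P(\neg C) \;\le\; P(\neg E) + P(E \wedge \neg C) \;=\; P(\neg E) + P(\neg(E \Rightarrow C)),$$

so $P(\neg C)$ can never exceed the combined probability that $E$ fails or that $E \Rightarrow C$ fails.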

My probabilities (not adjusted for the fact that one of them must be true) look something like this:

  • $E$ or $E \Rightarrow C$ is false.
  • Consciousness is an emergent phenomenon. (I.e., matter is unconscious but consciousness appears as a result of information processing and has no causal effect on the world. This would imply Claim.)
  • Something weird like Dual-aspect monism (consciousness and materialism are two views on the same process; in particular, all matter is conscious).

Hence what I said earlier: I don't believe Claim right now because I think there is actually a not-super-low-probability explanation, but even if there weren't, it would still not change anything because $P(\text{consciousness is emergent})$ is a lot more than $P(E \text{ or } E \Rightarrow C \text{ is false})$. I do remember finding EY's anti-p-zombie post persuasive, although it's been years since I've read it.

I can't say I understand it very well either, and see also Luke's report Appendix F and Joe's blog post. From where I'm at right now, there's a set of phenomena that people describe using words like "consciousness" and "qualia", and nothing we say will make those phenomena magically disappear. However, it's possible that those phenomena are not what they appear to be.

We all perceive that we have qualia. You can think of statements like "I perceive X" as living on continuum, like a horizontal line. On the left extreme of the line, we can perceive things because those things are out there in the world and our senses are accurately and objectively conveying them to us. On the right extreme of the line, we can perceive things because of quirks of our perceptual systems.

I think that's just dodging the problem, since any amount of subjective experience is enough for $E$. The question isn't how accurately your brain reports on the outside world; it's why you have subjective experience of any kind.

Comment by sil-ver on Book review: Rethinking Consciousness · 2020-12-31T14:29:40.652Z · LW · GW

I guess it was too nice that I tend to agree with everything you say about the brain, so there had to be an exception.

Normal Person: What about qualia?

Person Who Has Solved The Meta-Problem Of Consciousness: Let me explain why the brain, as an information processing system, would ask the question "What about qualia"...

NP: What about subjective experience?

PWHSTMPOC: Let me explain why the brain, as an information processing system, would ask the question "What about subjective experience"...

NP: You're not answering my questions!

PWHSTMPOC: Let me explain why the brain, as an information processing system, would say "You're not answering my questions"...

It seems to me like PWHSTMPOC is being chicken here. The real answer is "there are no qualia", followed by "however, I can explain why your brain outputs the question about qualia". Right?

If so, well, I know that there are qualia because I experience them, and I genuinely don't understand why that's not the end of the conversation. It's also true that a brain like mine could say this if it weren't true, but this doesn't change anything about the fact that I experience qualia. (Unless the claim isn't that there are no qualia, in which case I don't understand illusionism.)

I'm also not following your part on morality. If consciousness isn't real, why doesn't that just immediately imply nihilism? (This isn't an argument for it being real, of course.) Anyway, please feel free to ignore this paragraph if the answer is too complicated.

Comment by sil-ver on Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) · 2020-12-31T12:19:41.897Z · LW · GW

Thanks for those thoughts. And also for linking to Kaj's post again; I finally decided to read it and it's quite good. I don't think it helps at all with the hard problem (i.e., you could replace 'consciousness' with some other process in the brain that has these properties but doesn't have the subjective component, and I don't think that would pose any problems), but it helps quite a bit with the 'what is consciousness doing' question, which I also care about.

(Now I'm trying to look at the wall of my room and to decide whether I actually do see pixels or 'line segments', which is an exercise that really puts a knot into my head.)

One of the things that makes this difficult is that, whenever you focus on a particular part, it's probably consistent with the framework that this part gets reported in a lot more detail. If that's true, then testing the theory requires you to look at the parts you're not paying attention to, which is... um.

Maybe evidence here would be something like: do you recognize concepts in your peripheral vision more easily than hard-to-classify things? Actually, I think you do. (E.g., if I move my gaze to the left, I can still kind of see the vertical cable of a light on the wall even though the wall itself seems not visible.)

Comment by sil-ver on 2021 New Year Optimization Puzzles · 2020-12-31T10:20:57.211Z · LW · GW

Possible solution for P1:

for a score of 14 (now 144 with multiplicative scoring, but it's still the lowest among the solutions I've found)

Comment by sil-ver on Covid 12/24: We’re F***ed, It’s Over · 2020-12-29T14:41:56.330Z · LW · GW

Do you have an opinion on what stocks will move as a result?

Comment by sil-ver on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T17:21:31.862Z · LW · GW

Thanks!

Comment by sil-ver on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:59:41.300Z · LW · GW

That could well have been priced in already, but probably not all of it.

Why not?

Comment by sil-ver on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:58:22.828Z · LW · GW

Since you can't buy VIX directly, can you describe what exact thing you bought?

Comment by sil-ver on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:49:17.752Z · LW · GW

This is very much a question, not an answer: why not buy shares of Zoom? Wouldn't an increase in cases further drive up their value?

Comment by sil-ver on Debate update: Obfuscated arguments problem · 2020-12-23T08:52:00.308Z · LW · GW

In the ball-attached-to-a-pole example, the honest debater has assigned probabilities that are indistinguishable from what you would do if you knew nothing except that the claim is false. (I.e., assign probabilities that doubt each component equally.) I'm curious how difficult it is to find the flaw in this argument structure. Have you done anything like showing these transcripts to other experts and seeing whether they can find it?
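To put a number on "doubt each component equally" (toy numbers of my own, not from the paper): if the claim is split into $n$ independent components and the probabilities are chosen so the conjunction comes out at $p$, equal doubt means assigning each component

$$q = p^{1/n}, \qquad \text{e.g. } p = 0.5,\ n = 10 \;\Rightarrow\; q \approx 0.933,$$

so every individual step looks highly plausible even though the overall claim is a coin flip.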

If I had to summarize this finding in one sentence, it would be "it seems like an expert can generally find a flawed set of arguments for a false claim such that an equally competent expert can't identify the flawed component, and the set of arguments doesn't immediately look suspect". This seems surprising, and I'm wondering whether it's unique to physics. (The cryptographic example was of this kind, but there, the structure of the dishonest arguments was suspect.)

If this finding holds, my immediate reaction is "okay, in this case, the solution for the honest debater is to start a debate about whether the set of arguments from the dishonest debater has this character". I'm not sure how good this sounds. I think my main issue here is that I don't know enough physics to understand why the dishonest arguments are hard to identify.

Comment by sil-ver on Rafael Harth's Shortform · 2020-12-21T13:28:48.102Z · LW · GW

Ah, I didn't know that. (Even though I use the English Wikipedia more than the German one.)

Comment by sil-ver on Pseudorandomness contest, Round 2 · 2020-12-20T12:12:50.024Z · LW · GW

My thanks for making me practice python :-)

Comment by sil-ver on Rafael Harth's Shortform · 2020-12-19T22:30:01.275Z · LW · GW

Interesting, but worth pointing out that this is 15 years old. One thing that I believe has changed in that time is that back then, anyone could edit articles directly (now, edits aren't published until they're approved). And in general, I believe Wikipedia has gotten better over time, though I'm not sure.

Comment by sil-ver on Rafael Harth's Shortform · 2020-12-19T17:27:44.612Z · LW · GW

The ideal situation that Wikipedia contributors/editors are striving for kind of makes the desire to cite Wikipedia itself pointless. A well-written Wikipedia article should not contain any information that has no original source attached. So it should always be possible to switch from the wiki article to the original material when citing.

I see what you're saying, but citing Wikipedia has the benefit that a person looking at the source gets to read Wikipedia (which is generally easier to read) rather than the academic paper. Plus, it's less work for the person doing the citation.

Comment by sil-ver on Rafael Harth's Shortform · 2020-12-19T10:11:15.586Z · LW · GW

It's a meme that Wikipedia is not a trustworthy source. Wikipedia agrees:

We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia and other encyclopedias is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn't be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information. See researching with Wikipedia and academic use of Wikipedia for more information.

This seems completely bonkers to me. Yes, Wikipedia is not 100% accurate, but this is a trivial statement. What is the alternative? Academic papers? My experience suggests that I'm more than 10 times as likely to find errors in academic papers as in Wikipedia. Journalistic articles? Pretty sure the factor here is even higher. And on top of that, Wikipedia tends to be way better explained.

I can mostly judge mathy articles, and honestly, it's almost unbelievable to me how good Wikipedia actually seems to be. A data point here is the Monty Hall problem. I think the thing that's most commonly misunderstood about this problem is that the solution depends on how the host chooses the door they reveal. Wikipedia:

The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's deliberate action adds value to the door he did not choose to eliminate, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action than choosing between the two remaining doors at random, as the first action uses the previous information and the latter does not. Other possible behaviors than the one described can reveal different additional information, or none at all, and yield different probabilities. Yet another insight is that your chance of winning by switching doors is directly related to your chance of choosing the winning door in the first place: if you choose the correct door on your first try, then switching loses; if you choose a wrong door on your first try, then switching wins; your chance of choosing the correct door on your first try is 1/3, and the chance of choosing a wrong door is 2/3.
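The host-behavior dependence is easy to check numerically; here's a quick simulation sketch (mine, not from the article):

    import random

    def switch_win_rate(host_knows, trials=100_000):
        # Returns how often switching wins, conditional on the host
        # having revealed a goat.
        wins, valid = 0, 0
        for _ in range(trials):
            car = random.randrange(3)
            pick = random.randrange(3)
            if host_knows:
                # Standard Monty Hall: host deliberately opens a goat door.
                opened = random.choice([d for d in range(3) if d not in (pick, car)])
            else:
                # Ignorant host: opens a random other door; discard the
                # runs where the car is accidentally revealed.
                opened = random.choice([d for d in range(3) if d != pick])
                if opened == car:
                    continue
            valid += 1
            switched = next(d for d in range(3) if d not in (pick, opened))
            wins += (switched == car)
        return wins / valid

    print(switch_win_rate(host_knows=True))   # ~2/3
    print(switch_win_rate(host_knows=False))  # ~1/2

The informed host's choice carries information (switching wins about 2/3 of the time), while the ignorant host's doesn't (switching wins about 1/2), which is exactly the commonly missed dependence.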

It's possible that Wikipedia's status as not being a cite-able source is part of the reason why it's so good. I'm not sure. But the fact that a system based entirely on voluntary contributions so thoroughly outperforms academic journals is remarkable.

Another more rambly aspect here is that, when I hear someone lament the quality of Wikipedia, almost always my impression is that this person is doing superiority signaling rather than having a legitimate reason for the comment.

Comment by sil-ver on Clarifying Factored Cognition · 2020-12-17T22:45:11.896Z · LW · GW

I agree for literal HCH. However, I think that falls under brute force, which is the one thing that HCH isn't 'allowed' to do because it can't be emulated. I think I say this somewhere in a footnote.

Comment by sil-ver on Hiding Complexity · 2020-12-17T22:41:54.028Z · LW · GW

This is a really good comment. I'm not sure yet what I think about it, but it's possible that the post is not quite right. Which might be a big deal because the upcoming sequence relies on it pretty heavily. I take purely abstract examples seriously.

One thing to note, though, is that your counterexample is less of a counterexample than it looks at first glance: while the size of the solutions to [subproblems of your natural decomposition] can be made arbitrarily large, the size of the overall solution grows equally fast.

If we allow two subproblems, then the optimal decomposition (as defined by the post) would be $[s \mapsto s_1]$ and $[s \mapsto s_2]$, where $s_1$ and $s_2$ denote the first and second half of the solution string $s$. Here, the solutions to subproblems are half as long, which is optimal.

These subproblems sound like they're harder to solve, but that's not necessarily the case; it depends on the particular problem. And if they can be solved, it seems like the decomposition would still be preferable.

Comment by sil-ver on Luna Lovegood and the Chamber of Secrets - Part 1 · 2020-12-17T13:55:14.732Z · LW · GW

Isn't that going to be handled by the karma system? If people don't like it (and I honestly don't think many people are capable of writing fiction that LW will appreciate; maybe I'm wrong?), it will just disappear quickly.

Comment by sil-ver on An argument for personal identity transfer. · 2020-12-14T16:18:43.199Z · LW · GW

That's interesting, both because I wouldn't expect an Open Individualist to be interested in cryonics, and because I wouldn't expect an OI to make this argument. Do you agree that you could prove much stronger claims about identity with equal validity?

It feels strange to me, somewhat analogous to arguing that Bigfoot can't do magic while neglecting to mention that he also doesn't exist. But I'm not saying that arguing under an assumption you don't believe in isn't valuable.

Why would believing Open Individualism to be true cause disinterest in cryonics? I would be ecstatic to continue working on what I love after my natural lifespan has ended.

I enthusiastically agree with Eliezer Yudkowsky that the utilitarian argument against cryonics is weak under the assumption of Closed Individualism. Even committed EAs enjoy so many luxuries that there is no good reason why you can't pay for cryonics if that's what you value, especially if it helps you live with less fear (in which case it's an investment).

However, if you're an open individualist, there is no reason to be afraid anyway, so I don't see why you would spend the ~$200,000 on cryonics when you can use it for higher-priority causes instead. I don't have any moral qualms with it, I just don't see the motivation. I don't think I'm happy or smart enough for it to be worth it, and I don't really care if my identity is preserved in this particular form. I just care about having positive experiences.

(I still approve of advertising cryonics for practical reasons. It may change the behavior of powerful people if they believe they have skin in the game.)

Comment by sil-ver on An argument for personal identity transfer. · 2020-12-14T15:41:11.012Z · LW · GW

My honest take on this is that it's completely missing the point. There is an assumption here that your future self shares an identity with your current self that other people don't, which is called Closed Individualism. People tend to make this assumption without questioning it, but personally, I assign it a less than 1% chance of being true.

I think it's fair to say that, if you accept reasoning of the kind you made in this post (which I'm not claiming is wrong), you can prove arbitrarily absurd things about identity with equal justification. Just imagine a procedure for uploads that does not preserve identity, one that is perfect and does, and then gradually change one into the other. Either identity is a spectrum (???), or it can shift on a single atom (which I believe would contradict #4).

The hypothesis that you share identity with everyone (Open Individualism) is strictly simpler, equally consistent with everyday experience, has no compatibility issues with physics, and is resistant to thought experiments.

I'm not saying that Open Individualism is definitely true, but Closed Individualism is almost certainly not true, and that's enough to be disinterested in cryonics. Maybe you share identity with your upload, maybe you don't, but the idea that you share identity with your upload and not with other future people is extremely implausible. My impression is that most people agree with the difficulty of justifying Closed Individualism, but have a hard-coded assumption that it must be true and therefore think of it as an inexplicably difficult problem that must be solved, rather than drawing the conclusion that it's untrue.

Comment by sil-ver on Rafael Harth's Shortform · 2020-12-12T16:28:50.361Z · LW · GW

I was initially extremely disappointed with the reception of this post. After publishing it, I thought it was the best thing I've ever written (and I still think that), but it got < 10 karma. (Then it got more weeks later.)

If my model of what happened is roughly correct, the main issue was that I failed to communicate the intent of the post. People seemed to think I was trying to say something about the 2020 election, only to then be disappointed because I wasn't really doing that. Actually, I was trying to do something much more ambitious: solving the 'what is a probability' problem. And I genuinely think I've succeeded. I used to have this slight feeling of confusion every time I thought about it, because I simultaneously believed that predictions can be better or worse and that talking about the 'correct probability' is silly, but I had no way to reconcile the two. But in fact, I think there's a simple ground truth that solves the philosophical problem entirely.

I've now changed the title and put a note at the start. So anyway, if anyone didn't click on it because of the title or low karma, I'm hereby virtually resubmitting it.

Comment by sil-ver on The institution of email · 2020-12-12T12:53:44.431Z · LW · GW

And the rarer strategy of actually dealing with all of one’s emails promptly doesn’t even seem obviously better.

Why not?

I get lost there, because my model is 'emails are important for communication, they have the problems you describe here, and the solution is to answer as quickly as possible'.

Sometimes, responding takes too much effort for that to be reasonable, but it often doesn't.

Comment by sil-ver on My computational framework for the brain · 2020-12-12T10:21:47.373Z · LW · GW

That's a shame. Seems like an important piece.

Although, I now think my primary issue is actually not quite that. It's more like: when I try to concretely sketch how I now imagine thinking to work, I naturally invoke an additional mysterious steering module that allows me to direct my models to the topics that I want outputs on. I probably want to do this because that's how my mind feels: I can steer it where I want it, and then it spits out results.

Now, on the one hand, I don't doubt that the sense of control is an evolutionarily adaptive deception, and I certainly don't think Free Will is a real thing. On the other hand, it seems hard to take out the mysterious steering module. I think I was asking about how models are created to fill in that hole, but on second thought, it may not actually be all that connected, unless there is a module which does both.

So, is the sense of deciding what to apply my generative models to subsumed by the model outputs in this framework? Or is there something else?

I realize that the subcortex is steering the neocortex, but I'm still thinking about an evolutionarily uninteresting setting, like me sitting in a safe environment and having my mind contemplate various evolutionarily alien concepts.

Comment by sil-ver on Death Positive Movement · 2020-12-11T23:17:05.489Z · LW · GW

I think taking Many Worlds and/or Open Individualism seriously can do a lot to reduce fear of death.

Comment by sil-ver on My computational framework for the brain · 2020-12-11T11:45:00.089Z · LW · GW

I've tried to apply this framework earlier and realized that I'm confused about how new models are generated. Say I'm taught about the derivative for the first time. This process should result in me getting a 'derivative' model, but how is this done? At the point where the model isn't yet there, how does the neocortex (or another part of the brain?) do that?

Comment by sil-ver on Where does this community lean politically ? · 2020-12-11T09:09:07.822Z · LW · GW

There is a soft ban on talking about politics on LessWrong, but downvoting this question without explaining why strikes me as a bad way to enforce it.

To answer the question, this may be the most up-to-date data, although the sample size is small.

Comment by sil-ver on Traversing a Cognition Space · 2020-12-10T15:55:34.337Z · LW · GW

Yeah, I guess it's hierarchical on the level of articles, although each article is linear.

And I would argue the hierarchical structure is a big asset.

Comment by sil-ver on Idealized Factored Cognition · 2020-12-08T19:51:06.513Z · LW · GW

Yes. I don't think the formalism can say much about this process because it's so dependent on the human (that's why the second post is almost only about Debate), but that's the right picture.

Comment by sil-ver on Traversing a Cognition Space · 2020-12-07T18:49:31.355Z · LW · GW

Somewhat tangential to the sequence itself, I'm pretty interested in the idea of using non-linear structure to explain stuff, especially in math. Section titles and footnotes can function like that, but to a really limited degree. I think Arbital has somewhat of a hierarchical structure? But I've never seen it taken far.

Does anyone have strong feelings about this?

Comment by sil-ver on Supervised learning of outputs in the brain · 2020-12-06T14:18:55.366Z · LW · GW

I think so. I was imagining an additional mechanism where the outputs compete with other parts of the brain for the final say on what your muscles are doing. If they control muscles directly, that would mean 'I' can't choose not to flinch if the supervised learning algorithm says I should (right?) -- which I guess does actually align with experience.

Comment by sil-ver on Supervised learning of outputs in the brain · 2020-12-06T12:03:17.855Z · LW · GW

Clarifying question: how are the outputs of the supervised learning algorithm used (other than in model #6)?

Comment by sil-ver on 12 Rules for Life · 2020-12-03T13:47:17.196Z · LW · GW

Not sure if there is interest in discussing these. If so, I'm quite curious about #6. What is the reason for this and do other people have strong feelings about it?

Comment by sil-ver on Hiding Complexity · 2020-12-01T11:01:33.742Z · LW · GW

Under that view the stereotypical forgetful professor isn't brilliant because he has a lot of memory free to think with at any time, but because he has had a lot of practice doing the most with a small memory. These seem experimentally distinguishable.

Not necessarily. This post only argues that the absolute capacity of memory is highly limited, so in general, the ability of humans to solve complex tasks comes from being very clever with the small amount of memory we have. (Although, is 'memory' the right term for 'the ability to think about many things at once'?) I'm very confident this is true.

Comparative ability (i.e., differences between different humans) could still be due to memory, and I'm not confident either way. My impression from talking to professors does suggest that it's better internal representations, though.

Comment by sil-ver on Idealized Factored Cognition · 2020-12-01T09:32:07.654Z · LW · GW

Not part of the model; see my reply to Slider. The assumption is that the first agent has to be consistent ("problem of ambiguity is ignored"), and the final statement (or any other statement) is not equal to the string but to the real claim that talks about however many concepts.

Comment by sil-ver on Idealized Factored Cognition · 2020-12-01T09:24:50.635Z · LW · GW

Good observation. The statement is not the same as the string that represents it in the transcript (the "Now we have [...]" in the example). I can see how the way I've written the post suggests that it is; this is a mistake.

It (and statements in general) can depend on other concepts in the transcript. If cross-examination (which I think does have a place in Ideal Debate) is allowed, statements can even depend on concepts that haven't been introduced in the transcript, and which the first agent only defines after the second agent has pointed to them. E.g., the first agent could just postulate that a set exists without saying anything about its properties until they matter for the argument. This is fine as long as there is a mechanism that forces the first agent to be consistent (which is exactly what cross-examination does).

In general, statements are not strings. The same string can map to several different statements if the words refer to different things, and the length of the corresponding string doesn't meaningfully restrict the actual complexity. I think of ambiguous words much like concepts that haven't yet been defined. This is what I was getting at by saying that ambiguity has been abstracted away.

What this does suggest in the formalism as-is is that the difficulty of statements will tend to go up the later they come in the debate, since they depend on more prior concepts. (Does that mean the judge effectively has to deal with the entire problem after all? Absolutely not; the key move the first agent can pull over and over again is to hide most of the complexity of a concept. The second agent can only question one claim, so whenever she accepts that a certain thing exists, the first agent gets away with only defining the properties of that thing that are relevant for the argument. E.g., in a more complicated mathematical proof, the first agent may say something like 'there is a function with such-and-such properties', and if the second agent chooses not to doubt that this function exists, then the first agent gets away with never talking about how the function was constructed, which may exclude pages of work. This is why I've written post #-1.)

I've previously had the vague plan to make an additional post about this stuff after the main sequence. Maybe instead, what I've just said here should be in this post; I'm not sure yet. I'll definitely change something to make it clear that statements are not the same as the strings that represent them. [Edit: for now, I added two paragraphs at the end of chapter 1.]

I've so far not thought about extending the formalism to model this explicitly. It's not an unreasonable idea, but my current sense is that it doesn't need to be included. Maybe I'll change my mind about this.

Comment by sil-ver on A guide to Iterated Amplification & Debate · 2020-11-30T21:16:31.865Z · LW · GW

Should all be fixed now; I've rehosted the images from this post. This was really bad timing; thanks a bunch for letting me know.

Comment by sil-ver on A guide to Iterated Amplification & Debate · 2020-11-30T20:45:06.643Z · LW · GW

Yeah, directupload.net is an unreliable hosting service; they have downtimes quite often. The images will reappear by themselves once they're back online. I've since switched to imgbb.com but a few of the images in this post are still hosted on directupload.

I should definitely go through all of my posts at some point and rehost all directupload images (some of my older posts have all pictures there and those look pretty stupid if the pictures don't show). Sorry for the inconvenience.

Comment by sil-ver on Inner alignment in the brain · 2020-11-29T18:15:44.694Z · LW · GW

PrincipiaQualia is definitely the thing to read if you want to engage with QRI. It reviews the science and explains the core theory that the research is structured around. I'm not sure if you want to engage with it -- I begin from the strong intuition that qualia are real, and so I'm delighted that someone is working on it. My impression is that it makes an excellent case, but my judgment is severely limited since I don't know the literature. Either way, it doesn't have a lot of overlap with what you're working on.

There's also an AI alignment podcast episode.

Comment by sil-ver on Inner alignment in the brain · 2020-11-29T14:21:30.568Z · LW · GW

When I wrote here "Thanks but I don't see the connection between what I wrote and what they wrote", I did not mean that QRI was talking about a different phenomenon than I was talking about. I meant that their explanation is wildly different than mine. Re-reading the conversation, I think I was misinterpreting the comment I was replying to; I just went back to edit.

That makes sense.

I don't think there's anything fundamental in the universe besides electrons, quarks, photons, and so on, following their orderly laws as described by the Standard Model of Particle Physics etc. Therefore it follows that there should be an answer to the question "why do people describe a certain state as pleasant" that involves purely neuroscience / psychology and does not involve the philosophy of consciousness or any new ontologically fundamental entities. After all, "describing a certain state as pleasant" is an observable behavioral output of the brain, so it should have a chain of causation that we can trace within the underlying neural algorithms, which in turn follows from the biochemistry of firing neurons and so on, and ultimately from the laws of physics. So, that's what I was trying to do in that blog post: Trace a chain of causation from "underlying neural algorithms" to "people describing a state as pleasant".

Ah, but QRI also thinks that the material world is exhaustively described by the laws of physics. I believe they would give a blanket endorsement to everything in the above paragraph except the first sentence. Their view is not that valence is an additional parameter that your model of physics needs to take into consideration to be accurate. Rather, it's that the existing laws of physics exhaustively describe the future states of particles (so in particular, you can explain the behavior of humans, including their reaction to pain and such, without modeling valence), and the phenomenology can also be described precisely. The framework is dual-aspect monism plus physicalism.

You might still have substantial disagreements with that view, but as far as I can tell, your posts about neuroscience and even your post on emotional valence are perfectly compatible with it, except for the one sentence I quoted earlier

the neocortex gets to decide whether or not to classify a situation as "pain", based on not only nociception but also things like context and valence.)

because it has valence as an input to the neocortex's decision rather than a property of the output (i.e., if our phenomenology 'lives' in the neocortex, then the valence of a situation should depend on what the neocortex classifies it as, not vice versa). And even that could just be using valence to refer to a different thing that's also real.

Comment by sil-ver on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-29T13:32:22.053Z · LW · GW

Okay, in that case, your position is actually consistent and your question valid. I'm pretty sure that's a minority position on LW, though.