Posts

Insights from "All of Statistics": Statistical Inference 2021-04-08T17:49:16.270Z
Insights from "All of Statistics": Probability 2021-04-08T17:48:10.972Z
FC final: Can Factored Cognition schemes scale? 2021-01-24T22:18:55.892Z
Three types of Evidence 2021-01-19T17:25:20.605Z
Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) 2020-12-29T19:48:04.435Z
Intuition 2020-12-20T21:49:29.947Z
Clarifying Factored Cognition 2020-12-13T20:02:38.100Z
Traversing a Cognition Space 2020-12-07T18:32:21.070Z
Idealized Factored Cognition 2020-11-30T18:49:47.034Z
Preface to the Sequence on Factored Cognition 2020-11-30T18:49:26.171Z
Hiding Complexity 2020-11-20T16:35:25.498Z
A guide to Iterated Amplification & Debate 2020-11-15T17:14:55.175Z
Information Charts 2020-11-13T16:12:27.969Z
Do you vote based on what you think total karma should be? 2020-08-24T13:37:52.987Z
Existential Risk is a single category 2020-08-09T17:47:08.452Z
Inner Alignment: Explain like I'm 12 Edition 2020-08-01T15:24:33.799Z
Rafael Harth's Shortform 2020-07-22T12:58:12.316Z
The "AI Dungeons" Dragon Model is heavily path dependent (testing GPT-3 on ethics) 2020-07-21T12:14:32.824Z
UML IV: Linear Predictors 2020-07-08T19:06:05.269Z
How to evaluate (50%) predictions 2020-04-10T17:12:02.867Z
UML final 2020-03-08T20:43:58.897Z
UML XIII: Online Learning and Clustering 2020-03-01T18:32:03.584Z
What to make of Aubrey de Grey's prediction? 2020-02-28T19:25:18.027Z
UML XII: Dimensionality Reduction 2020-02-23T19:44:23.956Z
UML XI: Nearest Neighbor Schemes 2020-02-16T20:30:14.112Z
A Simple Introduction to Neural Networks 2020-02-09T22:02:38.940Z
UML IX: Kernels and Boosting 2020-02-02T21:51:25.114Z
UML VIII: Linear Predictors (2) 2020-01-26T20:09:28.305Z
UML VII: Meta-Learning 2020-01-19T18:23:09.689Z
UML VI: Stochastic Gradient Descent 2020-01-12T21:59:25.606Z
UML V: Convex Learning Problems 2020-01-05T19:47:44.265Z
Excitement vs childishness 2020-01-03T13:47:44.964Z
Understanding Machine Learning (III) 2019-12-25T18:55:55.715Z
Understanding Machine Learning (II) 2019-12-22T18:28:07.158Z
Understanding Machine Learning (I) 2019-12-20T18:22:53.505Z
Insights from the randomness/ignorance model are genuine 2019-11-13T16:18:55.544Z
The randomness/ignorance model solves many anthropic problems 2019-11-11T17:02:33.496Z
Reference Classes for Randomness 2019-11-09T14:41:04.157Z
Randomness vs. Ignorance 2019-11-07T18:51:55.706Z
We tend to forget complicated things 2019-10-20T20:05:28.325Z
Insights from Linear Algebra Done Right 2019-07-13T18:24:50.753Z
Insights from Munkres' Topology 2019-03-17T16:52:46.256Z
Signaling-based observations of (other) students 2018-05-27T18:12:07.066Z
A possible solution to the Fermi Paradox 2018-05-05T14:56:03.143Z
The master skill of matching map and territory 2018-03-27T12:06:53.377Z
Intuition should be applied at the lowest possible level 2018-02-27T22:58:42.000Z
Consider Reconsidering Pascal's Mugging 2018-01-03T00:03:32.358Z

Comments

Comment by Rafael Harth (sil-ver) on Covid vaccine safety: how correct are these allegations? · 2021-06-19T11:28:27.703Z · LW · GW

This is the video, right? You could link to that instead of the removed YouTube link.

Comment by Rafael Harth (sil-ver) on Covid vaccine safety: how correct are these allegations? · 2021-06-19T11:22:36.641Z · LW · GW

This is relevant because I read that "FDA requires healthcare providers to report any death after COVID-19 vaccination to VAERS"

How does this square with OpenVAERS's claim that only about 1% of injuries are reported?

Without knowing the reporting rate, it's difficult to interpret the data. If we take the 1% and 5869 numbers at face value, it implies that the vaccines killed about 590,000 people, whereas if we assume a 100% reporting rate, it looks like they're an amazing preventer of unrelated causes of death. Is there any reasonable way to estimate what % to use?
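A back-of-the-envelope sketch of that sensitivity (my own illustration, using only the figures quoted above):

```python
# Implied total deaths = reported deaths / reporting rate. Both inputs
# are the contested numbers from the thread, taken at face value.
reported_deaths = 5869

for rate in (0.01, 0.1, 0.5, 1.0):
    print(f"reporting rate {rate:>4}: implied deaths ~{reported_deaths / rate:,.0f}")
# reporting rate 0.01: implied deaths ~586,900 -- the ballpark figure above
```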

Comment by Rafael Harth (sil-ver) on Open problem: how can we quantify player alignment in 2x2 normal-form games? · 2021-06-18T08:41:22.202Z · LW · GW

I'll take a shot at this. Let $A$ and $B$ be the sets of actions of Alice and Bob. Let $n$ (where 'n' means 'nice') be the function that orders $B$ by how good the choices are for Alice, assuming that Alice gets to choose second. Similarly, let $s$ (where 's' means 'selfish') be the function that orders $B$ by how good the choices are for Bob, assuming that Alice gets to choose second. Choose some function $d$ measuring similarity between two orderings of a finite set (it should range over $[-1,1]$); the alignment of Bob with Alice is then $d(n,s)$.

Example: in the prisoner's dilemma, $B = \{\text{Cooperate}, \text{Defect}\}$, and $n$ orders $\text{Cooperate} > \text{Defect}$ whereas $s$ orders $\text{Defect} > \text{Cooperate}$. Hence $d(n,s)$ should be $-1$, i.e., Bob is maximally unaligned with Alice. Note that this makes it different from Mykhailo's answer, which gives an intermediate alignment value, i.e., medium aligned rather than maximally unaligned.
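A minimal sketch of this computation on the prisoner's dilemma (my own illustration; the standard payoff matrix and the choice of Kendall's tau for $d$ are assumptions, since any similarity function over orderings would do):

```python
alice_actions = ['C', 'D']
bob_actions = ['C', 'D']
payoffs = {  # (alice_action, bob_action) -> (alice_utility, bob_utility)
    ('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0), ('D', 'D'): (1, 1),
}

def value_of_bob_action(b, player):
    # Alice chooses second, i.e., best-responds to Bob's action b.
    a = max(alice_actions, key=lambda a: payoffs[(a, b)][0])
    return payoffs[(a, b)][0 if player == 'alice' else 1]

n = sorted(bob_actions, key=lambda b: value_of_bob_action(b, 'alice'))  # 'nice'
s = sorted(bob_actions, key=lambda b: value_of_bob_action(b, 'bob'))    # 'selfish'

def d(o1, o2):
    # Kendall tau between two orderings: +1 if identical, -1 if reversed.
    pairs = [(x, y) for i, x in enumerate(o1) for y in o1[i + 1:]]
    return sum(1 if o2.index(x) < o2.index(y) else -1 for x, y in pairs) / len(pairs)

print(n, s, d(n, s))  # ['D', 'C'] ['C', 'D'] -1.0 -> maximally unaligned
```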

This seems like an improvement over correlation since it's not symmetrical. In the game where Alice and Bob both get to choose numbers, Alice's utility function outputs the sum of both numbers, and Bob's outputs his own number minus Alice's, Bob would be perfectly aligned with Alice (his $n$ and $s$ order his choices identically) but Alice perfectly unaligned with Bob (her $n$ and $s$ order her choices in opposite directions).

I believe this metric meets criteria 1, 3, and 4 you listed. It could be changed to be sensitive to players' decision theories by changing $s$ (for alignment from Bob to Alice) to be the ordering output by Bob's decision theory, but I think that would be a mistake. Suppose I build an AI that is more powerful than myself, and the game is such that we can both decide to steal some of the other's stuff. If the AI does this, it leads to -10 utils for me and +2 for it (otherwise 0/0); if I do it, it leads to -100 utils for me because the AI kills me in response (otherwise 0/0). This game is trivial: the AI will take my stuff and I'll do nothing. Also, the AI is maximally unaligned with me. Now suppose I become as powerful as the AI and my 'take the AI's stuff' action becomes -10 for the AI, +2 for me. This makes the game a prisoner's dilemma. If we both run UDT or FDT, we would now cooperate. If $s$ is the ordering output by the AI's decision theory, this would mean the AI is now aligned with me, which is odd since the only thing that changed is me getting more powerful. With the original proposal, the AI is still maximally unaligned with me. More abstractly, game theory assumes your actions have influence on the other player's rewards (else the game is trivial), so if you cooperate for game-theoretical reasons, this doesn't seem to capture what we mean by alignment.

Comment by Rafael Harth (sil-ver) on The dumbest kid in the world (joke) · 2021-06-07T19:50:04.559Z · LW · GW

You only have two votes right now, but they counted for -10, so probably 2 strong downvotes. You can see the number of votes by hovering your mouse over the number.

Comment by Rafael Harth (sil-ver) on We need a standard set of community advice for how to financially prepare for AGI · 2021-06-07T14:22:56.432Z · LW · GW

This seems like a very surprising claim to me. You can make money on stocks by knowing things above pure chance. Do you really think that for all stocks?

Comment by Rafael Harth (sil-ver) on Often, enemies really are innately evil. · 2021-06-07T13:28:38.399Z · LW · GW

I don't believe a significant percentage of people are innately evil, and at the end of part I of this post, I don't think you've given me significant evidence to change my mind. The study is not convincing, and not because of effect size -- people could have misunderstood the game or just pressed the red button for fun since we're talking about cents. I would have predicted that few people would press the red button if the payouts were significant (thinking at least a $100 difference); I genuinely don't know what I would have predicted for the game as-is.

there are many, many, MANY more pieces of evidence from (almost) every internet troll, bully, and rapist, and many other criminals too.

I mean, rape has a pretty obvious advantage for the rapist. "Troll" is so overloaded that I think you'd have to define it before I can consider it seriously for anything. Bullying is the most convincing case, but my model of bullies, especially if they're young, isn't that they're innately evil. If I remember correctly, I have participated in bullying a couple of times before thinking about it and deciding that it's morally indefensible. I imagine most bullies are similar except that they skipped the part where they think about it, or that they have thought about it, maybe decided to stop, but then proceeded anyway because the instinct was too strong.

Anyway, this is quite speculative, but my point is that I don't think you're making a strong case for your leading claim. I realize that this may come across as nitpicking details in a mountain of obvious evidence, but that's often just what it feels like when someone doubts what you consider an obvious truth.

There's also an issue that we may have different ideas of what 'innately evil' means.

Comment by Rafael Harth (sil-ver) on The dumbest kid in the world (joke) · 2021-06-06T08:03:40.283Z · LW · GW

If you just cut everything from "Later" in the third-to-last paragraph onward, smart readers would probably still get it but it would be less obvious.

Comment by Rafael Harth (sil-ver) on What is the most effective way to donate to AGI XRisk mitigation? · 2021-05-30T14:44:13.507Z · LW · GW

You may be interested in Larks' AI Alignment charity reviews. The only organization I would add is the Qualia Research Institute, which is my personal speculative pick for the highest-impact organization, even though they don't do alignment research. (They're trying to develop a mathematical theory of consciousness and qualia.)

Comment by Rafael Harth (sil-ver) on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-05-20T15:01:24.463Z · LW · GW

Thanks a bunch for summarizing your thoughts; this is helpful.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2021-05-18T17:05:16.966Z · LW · GW

This paper is amazing. I don't think I've ever seen such a scathing critique in an academic context as is presented here.

There is now a vast and confusing literature on some combination of interpretability and explainability. Much literature on explainability confounds it with interpretability/comprehensibility, thus obscuring the arguments, detracting from their precision, and failing to convey the relative importance and use-cases of the two topics in practice. Some of the literature discusses topics in such generality that its lessons have little bearing on any specific problem. Some of it aims to design taxonomies that miss vast topics within interpretable ML. Some of it provides definitions that we disagree with. Some of it even provides guidance that could perpetuate bad practice. Most of it assumes that one would explain a black box without consideration of whether there is an interpretable model of the same accuracy.

[...]

XAI surveys have (thus far) universally failed to acknowledge the important point that interpretability begets accuracy when considering the full data science process, and not the other way around. [...]

[...]

In this survey, we do not aim to provide yet another dull taxonomy of “explainability” terminology. The ideas of interpretable ML can be stated in just one sentence: [...]

As far as I can tell, this is all pretty on point. (And I know I've conflated explainability and interpretability before.)

I think I like this because it makes me update downward on how restricted you actually are in what you can publish, as soon as you have some reasonable amount of reputation. I used to find the idea of diving into the publishing world paralyzing because you have to adhere to the process, but nowadays that seems like much less of a big deal.

Comment by Rafael Harth (sil-ver) on Let's Rename Ourselves The "Metacognitive Movement" · 2021-04-24T07:43:30.905Z · LW · GW

"Metacognition" is defined as "thinking about thinking." That's exactly what we do.

I think it's an ok description of what we do in terms of epistemic rationality. I'm not so sure it captures the instrumental part. The biggest impact that joining this community had on my life was that I started really taking actions to further my goals.

Comment by Rafael Harth (sil-ver) on Opinions on Interpretable Machine Learning and 70 Summaries of Recent Papers · 2021-04-12T09:42:13.079Z · LW · GW

there are books on the topic

Does anyone know if this book is any good? I'm planning to get more familiar with interpretability research, and 'read a book' has just appeared in my set of options.

Comment by Rafael Harth (sil-ver) on Some blindspots in rationality and effective altruism · 2021-03-20T17:39:31.099Z · LW · GW

I think the culprit is 'overturned'. That makes it sound like their counterarguments were a done deal or something. I'll reword that to 'rebutted and reframed in finer detail'.

Yeah, I think overturned is the word I took issue with. How about 'disputed'? That seems to be the term that remains agnostic about whether there is something wrong with the original argument or not.

Perhaps, your impression from your circle is different from mine in terms of what proportion of AIS researchers prioritise work on the fast takeoff scenario?

My impression is that gradual takeoff has gone from a minority to a majority position on LessWrong, primarily due to Paul Christiano, but not an overwhelming majority. (I don't know how it differs among Alignment Researchers.)

I believe the only data I've seen on this was in a thread where people were asked to make predictions about AI stuff, including takeoff speed and timelines, using the new interactive prediction feature. (I can't find this post -- maybe someone else remembers what it was called?) I believe that was roughly compatible with the sizeable minority summary, but I could be wrong.

Comment by Rafael Harth (sil-ver) on Some blindspots in rationality and effective altruism · 2021-03-19T21:06:26.825Z · LW · GW
  • Eliezer Yudkowsky's portrayal of a single self-recursively improving AGI (later overturned by some applied ML researchers)

I've found myself doubting this claim, so I've read the post in question. As far as I can tell, it's a reasonable summary of the fast takeoff position that many people still hold today. If all you meant to say was that there was disagreement, then fine -- but saying 'later overturned' makes it sound like there is consensus, not that people still have the same disagreement they had 13 years ago. (And your characterization in the paragraph I'll quote below also gives that impression.)

In hindsight, judgements read as simplistic and naive in similar repeating ways (relying on one metric, study, or paradigm and failing to factor in mean reversion or model error there; fixating on the individual and ignoring societal interactions; assuming validity across contexts):

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2021-02-15T19:56:34.847Z · LW · GW

Here is a construction of $\sum_{k=-\infty}^{\infty} x^k$: We have that $1 + x + x^2 + \cdots$ is the inverse of $1 - x$. Moreover, $1 + x^{-1} + x^{-2} + \cdots$ is the inverse of $1 - x^{-1}$. [...]

Yeah, that's conclusive. Well done! I guess you can't divide by zero after all ;)

I think the main mistake I've made here is to assume that inverses are unique without questioning it, which of course doesn't make sense at all if I don't yet know that the structure is a field.

My hunch is that any bidirectional sum of integer powers of x which we can actually construct is "artificially complicated" and it can be rewritten as a one-directional sum of integer powers of x. So, this would mean that your number system is what you get when you take the union of Laurent series going in the positive and negative directions, where bidirectional coordinate representations are far from unique. Would be delighted to hear a justification of this or a counterexample.

So, I guess one possibility is that, if we let $[y]$ be the equivalence class of all elements equal to $y$ in this structure, the resulting set of classes is isomorphic to the Laurent numbers. But another possibility could be that it all collapses into a single class -- right? At least I don't yet see a reason why that can't be the case (though I haven't given it much thought). You've just proven that some elements equal zero; perhaps it's possible to prove it for all elements.
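To illustrate the collapse (my own sketch, using the exponent-to-coefficient representation from the original shortform below): once $S = \sum_{k=-\infty}^{\infty} x^k$ is constructible, $(1-x) \cdot S = 0$, so $S$ is a zero divisor. A truncated numerical check:

```python
# Multiply (1 - x) by a truncation of S = sum of x^k over all integers k;
# only the boundary terms survive, and they move off to infinity as N grows.
N = 10
S = {k: 1 for k in range(-N, N + 1)}  # truncated S
one_minus_x = {0: 1, 1: -1}           # the element 1 - x

prod = {}
for i, ci in one_minus_x.items():
    for j, cj in S.items():
        prod[i + j] = prod.get(i + j, 0) + ci * cj
print({k: v for k, v in prod.items() if v != 0})  # {-10: 1, 11: -1}
```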

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2021-02-14T08:31:24.557Z · LW · GW

You've understood correctly minus one important detail:

The structure you describe (where we want elements and their inverses to have finite support)

Not elements and their inverses! Elements or their inverses. I've shown the example of $1+x$ to demonstrate that you quickly get infinite inverses, and you've come up with an abstract argument why finite inverses won't cut it:

To show that nothing else works, let $a$ and $b$ be any two nonzero sums of finitely many integer powers of $x$ (so like $1+x$). Then, the leading term (product of the highest power terms of $a$ and $b$) will be some nonzero thing. But also, the smallest term (product of the lowest power terms of $a$ and $b$) will be some nonzero thing. Moreover, we can't get either of these to cancel out. So, the product can never be equal to $1$. (Unless both are monomials.)

In particular, your example of $1+x$ has the inverse $1 - x + x^2 - x^3 + \cdots$. Perhaps a better way to describe this set is 'all you can build in finitely many steps using addition, inverse, and multiplication, starting from only elements with finite support'. Perhaps you can construct infinite-but-periodical elements with infinite-but-periodical inverses; if so, those would be in the field as well (if it's a field).

If you can construct $\sum_{k=-\infty}^{\infty} x^k$, it would not be a field. But constructing this may be impossible.

I'm currently completely unsure if the resulting structure is a field. If you get a bunch of finite elements, take their infinite-but-periodical inverses, and multiply those inverses, the resulting number again has a finite inverse due to the argument I've shown in the previous comment. But if you use addition on one of them, things may go wrong.

A larger structure to take would be formal Laurent series in $x$. These are sums of finitely many negative powers of $x$ and arbitrarily many positive powers of $x$. This set is closed under multiplicative inverses.

Thanks; this is quite similar -- although not identical.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2021-02-11T17:27:00.746Z · LW · GW

Edit: this structure is not a field as proved by just_browsing.

Here is a wacky idea I've had forever.

There are a bunch of areas in math where you get expressions of the form $\frac{0}{0}$ and they resolve to some number, but it's not always the same number. I've heard some people say that $\frac{0}{0}$ "can be any number". Can we formalize this? The formalism would have to include $4 \cdot 0$ as something different from $3 \cdot 0$, so that if you divide the first by 0, you get 4, but the second gets 3.

Here is a way to turn this into what may be a field or ring. Each element is a function $a : \mathbb{Z} \to \mathbb{R}$, where the function $k \mapsto a_k$ reads as the formal sum $\sum_{k \in \mathbb{Z}} a_k x^k$, with $x$ standing in for the formal zero. Addition is component-wise ($(a+b)_k = a_k + b_k$; this makes sense), and multiplication is, well, multiplication of formal sums, so we get the rule $(a \cdot b)_k = \sum_{i+j=k} a_i b_j$.

This becomes a problem once elements with infinite support are considered, i.e., functions that are nonzero at infinitely many values, since then the sum may not converge. But it's well defined for numbers with finite support. This is all similar to how polynomials are handled formally, except that polynomials only go in one direction (i.e., they're functions from $\mathbb{N}$ rather than $\mathbb{Z}$), and that also solves the non-convergence problem. Even if infinite polynomials are allowed, multiplication is well-defined since for any $k \in \mathbb{N}$, there are only finitely many pairs of natural numbers $(i, j)$ such that $i + j = k$.

The additively neutral element in this setting is $0$ (the all-zero function) and the multiplicatively neutral element is $1 = x^0$. Additive inverses are easy: $(-a)_k = -a_k$. The interesting part is multiplicative inverses. Of course, there is no inverse of $0$, so we still can't divide by the 'real' zero. But I believe all elements with finite support do have a multiplicative inverse (there should be a straightforward inductive proof for this). Interestingly, those inverses are not finite anymore, but they are periodical. For example, the inverse of $x$ is just $x^{-1}$, but the inverse of $1+x$ is actually $1 - x + x^2 - x^3 + \cdots$
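As a sanity check on that example (my own sketch; elements as exponent-to-coefficient dicts, multiplied via the convolution rule above):

```python
def mul(a, b):
    # Convolution rule: (a*b)_k = sum over i+j=k of a_i * b_j.
    out = {}
    for i, ai in a.items():
        for j, bj in b.items():
            out[i + j] = out.get(i + j, 0) + ai * bj
    return {k: v for k, v in out.items() if v != 0}

one_plus_x = {0: 1, 1: 1}
inv_truncated = {k: (-1) ** k for k in range(50)}  # 1 - x + x^2 - x^3 + ...
print(mul(one_plus_x, inv_truncated))  # {0: 1, 50: -1}: equals 1 up to the cutoff term
```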

I think this becomes a field with well-defined operations if one considers only the elements with finite support and elements with inverses of finite support. (The product of two elements-whose-inverses-have-finite-support should itself have an inverse of finite support because $(a \cdot b)^{-1} = a^{-1} \cdot b^{-1}$.) I wonder if this structure has been studied somewhere... probably without anyone thinking of the interpretation considered here.

Comment by Rafael Harth (sil-ver) on Open & Welcome Thread – February 2021 · 2021-02-06T14:29:54.855Z · LW · GW

There are a bunch of sequences, like the value learning sequence, that have structured formatting in the sequence overview (the page the link goes to): something like a headline, a bunch of posts, another headline, a bunch more posts.

How is this done? When I go into the sequence editor, I only see one text field where I can write something which then appears in front of the list of posts.

Comment by Rafael Harth (sil-ver) on The GameStop Situation: Simplified · 2021-01-29T20:24:56.554Z · LW · GW

This post is similar to the one Eliezer Yudkowsky wrote.

Comment by Rafael Harth (sil-ver) on Preface to the Sequence on Factored Cognition · 2021-01-26T19:45:52.179Z · LW · GW

Cool, thanks.

Comment by Rafael Harth (sil-ver) on Qualia Research Institute: History & 2021 Strategy · 2021-01-26T10:38:55.258Z · LW · GW

While QRI is only occasionally talked about on LessWrong, I personally continue to think that they're doing the most exciting research that exists today, provided you take a utilitarian perspective. I've donated to MIRI in the past, in part because their work seems highly non-replaceable. I still stand by that reason, but it applies even more to QRI. Even if there is only a small chance that formalizing consciousness is both possible and practically feasible, the potential upside seems enormous. Success in formalizing suffering wouldn't solve AI alignment (for several reasons, one of them being Inner Optimizers), but I imagine it would be extremely helpful. There is nothing approximating a consensus on the related philosophical problems in the community, and positions on those issues seem to have a significant causal influence on what research is being pursued.

It helps that I share most if not all of the essential philosophical intuitions that motivate QRI's research. On the other hand, research should be asymmetrical with regard to what's true. In the world where moral realism is false and suffering isn't objective or doesn't have structure, beliefs to the contrary (which many people in the community hold today) could lead to bad alignment-related decisions. In that case, any attempts to quantify suffering would inevitably fail, and that would itself be relevant evidence.

Comment by Rafael Harth (sil-ver) on Preface to the Sequence on Factored Cognition · 2021-01-26T09:24:23.317Z · LW · GW

Re personal opinion: what is your take on the feasibility of human experiments? It seems like your model is compatible with IDA working out even though no-one can ever demonstrate something like 'solve the hardest exercise in a textbook' using participants with limited time who haven't read the book.

Comment by Rafael Harth (sil-ver) on Preface to the Sequence on Factored Cognition · 2021-01-26T09:19:54.512Z · LW · GW

This is an accurate summary, minus one detail:

The judge decides the winner by evaluating whether the final statement is true or not.

"True or not" makes it sound symmetrical, but the choice is between 'very confident that it's true' and 'anything else'. Something like '80% confident' goes into the second category.

One thing I would like to be added is just that I come out moderately optimistic about Debate. It's not too difficult for me to imagine the counter-factual world where I think about FC and find reasons to be pessimistic about Debate, so I take the fact that I didn't as non-zero evidence.

Comment by Rafael Harth (sil-ver) on Why I'm excited about Debate · 2021-01-17T10:24:57.647Z · LW · GW

I think the Go example really gets to the heart of why I think Debate doesn't cut it.

Your comment is an argument against using Debate to settle moral questions. However, what if Debate is trained on Physics and/or math questions, with the eventual goal of asking "what is a provably secure alignment proposal?"

Comment by sil-ver on [deleted post] 2021-01-17T10:20:44.414Z

Before offering an "X is really about Y" signaling explanation, it's important to falsify the "X is about X" hypothesis first. Once that's done, signaling explanations require, at minimum:

  1. An action or decision by the receiver that the sender is trying to motivate.
  2. (2.1) An explanation for why the receiver is listening for signals in the first place, and (2.2) why the sender is trying to communicate them.
  3. A language that the sender has reason to think the receiver will understand and believe as the sender intended.
  4. A physical mechanism for sending and receiving the signal.

(Added numbers for reference.)

I think 1, 2.1, and 3 are all wrong, in that none of them are required for a signaling hypothesis to be plausible. I believe you're assuming that signaling is effective and/or rational, but this is a mistake. Signaling was optimized to be effective in the ancestral environment, so there's no reason why it should still be effective today. As far as I can tell, it generally is not.

As an example, consider men wearing solid shoes in the summer despite finding those uncomfortable. There is no action this is trying to motivate, and there is no reason to expect the receiver is listening -- in fact, there is often good reason to expect that they are not listening (in many contexts, people really don't care about your shoes). Nonetheless, I think conformity signaling is the correct explanation for this behavior.

The pilot example is problematic because in this case, signaling is part of a high-level plan. This is a non-central example. Most of the time, signaling is motivated by evolutionary instincts, like the fear of standing out. In the case of religion, I think this is most of the story. Those instincts can then translate into high-level behavior like going to the church, but it's not the beginning of the causal chain.

Comment by Rafael Harth (sil-ver) on Pseudorandomness contest: prizes, results, and analysis · 2021-01-15T10:11:39.253Z · LW · GW

Thanks for hosting this contest. The overconfidence thing in particular is a fascinating data point. When I was done with my function that output final probabilities, I deliberately made it way more agnostic, thinking that at least now my estimates are modest -- but it turns out that I should have gone quite a bit farther with that adjustment.

I'm also intrigued by the variety of approaches for analyzing strings. I solely looked at frequency of monotone groups (i.e., how many single 0's, how many 00's, how many 000's, how many 111's, etc.), and as a result, I have widely different estimates (compared to the winning submissions) on some of the strings where other methods were successful.
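For concreteness, a sketch of that feature extraction (my reconstruction, not the actual contest code):

```python
from itertools import groupby

def run_counts(bits):
    # Count maximal monotone groups by (bit, run length): how many single
    # 0's, how many 00's, how many 000's, how many 111's, etc.
    counts = {}
    for bit, group in groupby(bits):
        key = (bit, len(list(group)))
        counts[key] = counts.get(key, 0) + 1
    return counts

print(run_counts('0010011100'))  # {('0', 2): 3, ('1', 1): 1, ('1', 3): 1}
```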

Comment by Rafael Harth (sil-ver) on Open & Welcome Thread - January 2021 · 2021-01-12T08:23:55.819Z · LW · GW

Do the newest numbers indicate that the new Covid strain isn't that bad after all, for whatever reason? If not, why not?

Edit: Zvi gave a partial answer here.

Comment by Rafael Harth (sil-ver) on In Defense of Twitter's Decision to Ban Trump · 2021-01-11T18:17:43.501Z · LW · GW

I happen to agree with your conclusion, but I don't think you're addressing what EY said. He tweeted the following:

What America needs now, to heal, is for the left and the right to be on entirely different social networks. Still with the ability to subtweet alleged screencaps from the Other network of Others being outrageous, of course! But with no ability for Others to clarify or respond.

My translation: I'm worried that banning Trump from Twitter will increase polarization because it will make the two tribes more segregated than they were before. This is not that similar to your #7, and otherwise missing from the list entirely.

I also think #8 is unlikely. It doesn't strike me as plausible that the Capitol incident provided any rational person with significant evidence on which to update their view of Trump. On the other hand, public opinion appears to have shifted significantly. A financial motive seems likely here, especially for Zuckerberg.

Comment by Rafael Harth (sil-ver) on Grokking illusionism · 2021-01-06T23:32:39.142Z · LW · GW

Comparing consciousness to plastic surgery seems to me to be a false analogy. If you have your model of particles bouncing around, then plastic surgery is a label you can put on a particular class of sequences of particles doing things. If you didn't have the name, there wouldn't be anything to explain; the particles can still do the same thing. Consciousness/subjective experience describes something that is fundamentally non-material. It may or may not be caused by particles doing things, but it's not itself made of particles.

If your response to this is that there is no such thing as subjective experience -- which is what I thought your position was, and what I understand strong illusionism to be -- then this is exactly what I mean when I say consciousness isn't real. By 'consciousness', I'm exclusively referring to the qualitatively different thing called subjective experience. This thing either exists or doesn't exist. I'm not talking about the process that makes people move their fingers to type things about consciousness.

I apologize for not tabooing 'real', but I don't have a model of how 'is consciousness real' can be anything but a well-defined question whose answer is either 'yes' or 'no'. The 'as real as X' framing doesn't make any sense to me. It seems like trying to apply a spectrum to a binary question.

Comment by Rafael Harth (sil-ver) on Grokking illusionism · 2021-01-06T17:23:57.180Z · LW · GW

Apologies, I communicated poorly. ImE, discussions about consciousness are particularly prone to misunderstandings. Let me rephrase my comment.

  1. Many (most?) people believe that consciousness is an emergent phenomenon but also a real thing.
  2. My assumption from reading your first comment was that you believe #1 is close to impossible. I agree with that.
  3. I took your first comment (in particular this paragraph)...

Because ultimately, down at the floor, it's all just particles and forces and extremely well understood probabilities. There's no fundamental primitive for 'consciousness' or 'experience', any more than there's a fundamental primitive for 'green' or 'traffic' or 'hatred'. Those particles and forces down at the floor are the territory; everything else is a label.

... as saying that #2 implies illusionism must be true. I'm saying this is not the case because you can instead stipulate that consciousness is a primitive. If every particle is conscious, you don't have the problem of getting real consciousness out of nothing. (You do have the problem of why your experience appears unified, but that seems much less impossible.)

Or to say the same thing differently, my impression/worry is that people accept that 'consciousness isn't real' primarily because they think the only alternative is 'consciousness is real and emerges from unconscious matter', when in fact you can have a coherent world view that disputes both claims.

Comment by Rafael Harth (sil-ver) on Grokking illusionism · 2021-01-06T15:48:08.388Z · LW · GW

That's fair. However, if you share the intuition that consciousness being emergent is extremely implausible, then going from there directly to illusionsism means only comparing it to the (for you) weakest alternative. And that seems like the relevant step for people in this thread other than you.

Comment by Rafael Harth (sil-ver) on Grokking illusionism · 2021-01-06T15:14:51.421Z · LW · GW

There's no fundamental primitive for 'consciousness'

I'm not sure if this is the case, but I'm worried that people subscribe to illusionism because they only compare it to the weakest possible alternative, which (I would say) is consciousness being an emergent phenomenon. If you just assume that there's no primitive for consciousness, I would agree that the argument for illusionism is extremely strong since [unconscious matter spontaneously spawning consciousness] is extremely implausible.

However, you can also just dispute the claim and assume consciousness is a primitive, which gets around the hard problem. That leaves the question 'why is consciousness a primitive', which doesn't seem particularly more mysterious than 'why is matter a primitive'.

Comment by Rafael Harth (sil-ver) on Predictions for 2021 · 2020-12-31T22:08:01.509Z · LW · GW

I’d also like to advertise a challenge for my readers. You can email me with your predictions for a subset of my predictions with your own prediction. I’ll judge your predictions against mine using the logarithmic scoring rule.

Out of curiosity, why logarithmic scoring and not Brier scoring? (I like logarithmic scoring better, but you used Brier in the pseudorandomness contest.)

Would you also take money bets in addition to just virtual scores?
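For reference, minimal definitions of the two rules being compared (my own sketch, not code from the post):

```python
import math

def log_score(p, outcome):
    # Logarithmic rule: log of the probability assigned to what actually
    # happened; closer to 0 is better.
    return math.log(p if outcome else 1.0 - p)

def brier_score(p, outcome):
    # Brier rule: squared distance between forecast and outcome; lower is better.
    return (p - (1.0 if outcome else 0.0)) ** 2

print(log_score(0.8, True), brier_score(0.8, True))  # -0.223..., 0.04
```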

Comment by Rafael Harth (sil-ver) on Book review: Rethinking Consciousness · 2020-12-31T17:19:25.026Z · LW · GW

There's a funny thing about nihilism: It's not decision-relevant. Imagine being a nihilist, deciding whether to spend your free time trying to bring about an awesome post-AGI utopia, vs sitting on the couch and watching TV. Well, if you're a nihilist, then the awesome post-AGI utopia doesn't matter. But watching TV doesn't matter either. Watching TV entails less exertion of effort. But that doesn't matter either. Watching TV is more fun (well, for some people). But having fun doesn't matter either. There's no reason to throw yourself at a difficult project. There's no reason not to throw yourself at a difficult project. Isn't it funny?

I agree except for the funny part.

I don't have a grand ethical theory, I'm not ready to sit in judgment of anyone else, I'm just deciding what to do for my own account. There's a reason I ended the post with "Dentin's prayer of the altruistic nihilist"; that's how I feel, at least sometimes. I choose to care about information-processing systems that are (or "perceive themselves to be"?) conscious in a way that's analogous to how humans do that, with details still uncertain. I want them to be (or "to perceive themselves to be"?) happy and have awesome futures. So here I am :-D

Thanks for describing this. I'm both impressed and a bit shocked that you're being consistent.

This is a pretty weird claim, right? I mean, you remember writing down the statement. Would you agree with that claim? No way, right?

Let's assume I do. (I think I would have agreed a few years ago, or at least assigned significant probability to this.) I still think (and thought then) that there is a slam-dunk chain from 'I experience consciousness' to 'therefore, consciousness exists'.

Let $E$ := 'I experience stuff' and $C$ := 'consciousness exists'. Clearly $E \Rightarrow C$, because experiencing anything is already sufficient for what I call consciousness. Furthermore, clearly $E$ is true. Hence $C$ is true. Nothing about your Claim contradicts any step of this argument.

I think the reason why intuitions differ so much on this topic is that we are comparing very low probability theories against each other, and the question is which one is lower. (And operations with low numbers are prone to higher errors than operations with higher numbers.) At least my impression (correct me if I'm wrong) is that the subjective proof of consciousness would be persuasive, except that it seems to imply $\neg$Claim, and $\neg$Claim is a no-go, so therefore the subjective proof has to give in. I.e., you have both [subjective proof $\Rightarrow \neg$Claim] and [Claim], and therefore [$\neg$ subjective proof].

My main point is that it doesn't make sense to assign anything lower probability than $E$ and $E \Rightarrow C$, because $E$ is immediately proven by the fact that you experience stuff, and $E \Rightarrow C$ holds by the definition of $C$, so it is utterly trivial. You can make a coherent-sounding (if far-fetched) argument for why Claim is true, but I'm not familiar with any coherent argument that $E$ or $E \Rightarrow C$ is false (other than that it must be false because of what it implies, which is again the argument above).

My probabilities (not adjusted for the fact that one of them must be true) look something like this:

  • $E$ or $E \Rightarrow C$ is false
  • Consciousness is an emergent phenomenon. (I.e., matter is unconscious but consciousness appears as a result of information processing and has no causal effect on the world. This would imply Claim.)
  • Something weird like Dual-aspect monism (consciousness and materialism are two views on the same process; in particular, all matter is conscious).

Hence what I said earlier: I don't believe Claim right now because I think there is actually a not-super-low-probability explanation, but even if there weren't, it would still not change anything because the probability of the second option is still a lot higher than that of the first. I do remember finding EY's anti-p-zombie post persuasive, although it's been years since I've read it.

I can't say I understand it very well either, and see also Luke's report Appendix F and Joe's blog post. From where I'm at right now, there's a set of phenomena that people describe using words like "consciousness" and "qualia", and nothing we say will make those phenomena magically disappear. However, it's possible that those phenomena are not what they appear to be.

We all perceive that we have qualia. You can think of statements like "I perceive X" as living on continuum, like a horizontal line. On the left extreme of the line, we can perceive things because those things are out there in the world and our senses are accurately and objectively conveying them to us. On the right extreme of the line, we can perceive things because of quirks of our perceptual systems.

I think that's just dodging the problem since any amount of subjective experience is enough for $E$. The question isn't how accurately your brain reports on the outside world, it's why you have subjective experience of any kind.

Comment by Rafael Harth (sil-ver) on Book review: Rethinking Consciousness · 2020-12-31T14:29:40.652Z · LW · GW

I guess it was too nice that I tend to agree with everything you say about the brain, so there had to be an exception.

Normal Person: What about qualia?

Person Who Has Solved The Meta-Problem Of Consciousness: Let me explain why the brain, as an information processing system, would ask the question "What about qualia"...

NP: What about subjective experience?

PWHSTMPOC: Let me explain why the brain, as an information processing system, would ask the question "What about subjective experience"...

NP: You're not answering my questions!

PWHSTMPOC: Let me explain why the brain, as an information processing system, would say "You're not answering my questions"...

It seems to me like PWHSTMPOC is being chicken here. The real answer is "there is no qualia" followed by "however, I can explain why your brain outputs the question about qualia". Right?

If so, well, I know that there's qualia because I experience it, and I genuinely don't understand why that's not the end of the conversation. It's also true that a brain like mine could say this if it weren't true, but this doesn't change anything about the fact that I experience qualia. (Unless the claim isn't that there's no qualia, in which case I don't understand illusionism.)

I'm also not following your part on morality. If consciousness isn't real, why doesn't that just immediately imply nihilism? (This isn't an argument for it being real, of course.) Anyway, please feel free to ignore this paragraph if the answer is too complicated.

Comment by Rafael Harth (sil-ver) on Book Review: On Intelligence by Jeff Hawkins (and Sandra Blakeslee) · 2020-12-31T12:19:41.897Z · LW · GW

Thanks for those thoughts. And also for linking to Kaj's post again; I finally decided to read it and it's quite good. I don't think it helps at all with the hard problem (i.e., you could replace 'consciousness' with some other process in the brain that has these properties but doesn't have the subjective component, and I don't think that would pose any problems), but it helps quite a bit with the 'what is consciousness doing' question, which I also care about.

(Now I'm trying to look at the wall of my room and to decide whether I actually do see pixels or 'line segments', which is an exercise that really puts a knot into my head.)

One of the things that makes this difficult is that, whenever you focus on a particular part, it's probably consistent with the framework that this part gets reported in a lot more detail. If that's true, then testing the theory requires you to look at the parts you're not paying attention to, which is... um.

Maybe evidence here would be something like: do you recognize concepts in your peripheral vision more than hard-to-classify things -- and actually, I think you do. (E.g., if I move my gaze to the left, I can still kind of see the vertical cable of a light on the wall even though the wall itself seems not visible.)

Comment by Rafael Harth (sil-ver) on 2021 New Year Optimization Puzzles · 2020-12-31T10:20:57.211Z · LW · GW

Possible solution for P1:

for a score of 14 (now 144 with multiplicative scoring, but it's still the lowest among the solutions I've found)

Comment by Rafael Harth (sil-ver) on Covid 12/24: We’re F***ed, It’s Over · 2020-12-29T14:41:56.330Z · LW · GW

Do you have an opinion on what stocks will move as a result?

Comment by Rafael Harth (sil-ver) on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T17:21:31.862Z · LW · GW

Thanks!

Comment by Rafael Harth (sil-ver) on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:59:41.300Z · LW · GW

That could well have been priced in already, but probably not all of it.

Why not?

Comment by Rafael Harth (sil-ver) on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:58:22.828Z · LW · GW

Since you can't buy VIX directly, can you describe what exact thing you bought?

Comment by Rafael Harth (sil-ver) on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T12:49:17.752Z · LW · GW

This is very much a question, not an answer: why not buy shares of Zoom? Wouldn't an increase in cases further drive up their value?

Comment by Rafael Harth (sil-ver) on Debate update: Obfuscated arguments problem · 2020-12-23T08:52:00.308Z · LW · GW

In the ball-attached-to-a-pole example, the honest debater has assigned probabilities that are indistinguishable from what you would do if you knew nothing except that the claim is false. (I.e., assign probabilities that doubt each component equally.) I'm curious how difficult it is to find the flaw in this argument structure. Have you done anything like showing these transcripts to other experts and seeing if they will be able to answer it?

If I had to summarize this finding in one sentence, it would be "it seems like an expert can generally find a set of arguments for a false claim that is flawed such that an equally competent expert can't identify the flawed component, and the set of arguments doesn't immediately look suspect". This seems surprising, and I'm wondering whether it's unique to physics. (The cryptographic example was of this kind, but there, the structure of the dishonest arguments was suspect.)

If this finding holds, my immediate reaction is "okay, in this case, the solution for the honest debater is to start a debate about whether the set of arguments from the dishonest debater has this character". I'm not sure how good this sounds. I think my main issue here is that I don't know enough physics to understand why the dishonest arguments are hard to identify.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2020-12-21T13:28:48.102Z · LW · GW

Ah, I didn't know that. (Even though I use the English Wikipedia more than the German one.)

Comment by Rafael Harth (sil-ver) on Pseudorandomness contest, Round 2 · 2020-12-20T12:12:50.024Z · LW · GW

My thanks for making me practice python :-)

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2020-12-19T22:30:01.275Z · LW · GW

Interesting, but worth pointing out that this is 15 years old. One thing that I believe changed within that time is that edits used to go live immediately (now, edits aren't published until they're approved). And in general, I believe Wikipedia has gotten better over time, though I'm not sure.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2020-12-19T17:27:44.612Z · LW · GW

The ideal situation that Wikipedia contributors/editors are striving for kinda makes the desire to cite Wikipedia itself pointless. A well-written Wikipedia article should not contain any information that has no original source attached. So it should always be possible to switch from the wiki article to the original material when citing.

I see what you're saying, but citing Wikipedia has the benefit that a person looking at the source gets to read Wikipedia (which is generally easier to read) rather than the academic paper. Plus, it's less work for the person doing the citation.

Comment by Rafael Harth (sil-ver) on Rafael Harth's Shortform · 2020-12-19T10:11:15.586Z · LW · GW

It's a meme that Wikipedia is not a trustworthy source. Wikipedia agrees:

We advise special caution when using Wikipedia as a source for research projects. Normal academic usage of Wikipedia and other encyclopedias is for getting the general facts of a problem and to gather keywords, references and bibliographical pointers, but not as a source in itself. Remember that Wikipedia is a wiki. Anyone in the world can edit an article, deleting accurate information or adding false information, which the reader may not recognize. Thus, you probably shouldn't be citing Wikipedia. This is good advice for all tertiary sources such as encyclopedias, which are designed to introduce readers to a topic, not to be the final point of reference. Wikipedia, like other encyclopedias, provides overviews of a topic and indicates sources of more extensive information. See researching with Wikipedia and academic use of Wikipedia for more information.

This seems completely bonkers to me. Yes, Wikipedia is not 100% accurate, but this is a trivial statement. What is the alternative? Academic papers? My experience suggests that I'm more than 10 times as likely to find errors in academic papers as in Wikipedia. Journalistic articles? Pretty sure the factor here is even higher. And on top of that, Wikipedia tends to be way better explained.

I can mostly judge mathy articles, and honestly, it's almost unbelievable to me how good Wikipedia actually seems to be. A data point here is the Monty Hall problem. I think the thing that's most commonly misunderstood about this problem is that the solution depends on how the host chooses the door they reveal. Wikipedia:

The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, there is more information about doors 2 and 3 than was available at the beginning of the game when door 1 was chosen by the player: the host's deliberate action adds value to the door he did not choose to eliminate, but not to the one chosen by the contestant originally. Another insight is that switching doors is a different action than choosing between the two remaining doors at random, as the first action uses the previous information and the latter does not. Other possible behaviors than the one described can reveal different additional information, or none at all, and yield different probabilities. Yet another insight is that your chance of winning by switching doors is directly related to your chance of choosing the winning door in the first place: if you choose the correct door on your first try, then switching loses; if you choose a wrong door on your first try, then switching wins; your chance of choosing the correct door on your first try is 1/3, and the chance of choosing a wrong door is 2/3.
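That last insight is easy to check numerically; a minimal simulation (mine, not Wikipedia's, assuming the standard host behavior):

```python
import random

def trial(switch):
    car, pick = random.randrange(3), random.randrange(3)
    # The host knowingly opens a goat door that isn't the pick, so switching
    # means taking the one remaining closed door: it wins exactly when the
    # first pick was wrong.
    return (pick != car) if switch else (pick == car)

n = 100_000
print(sum(trial(True) for _ in range(n)) / n)   # ~2/3 (switch)
print(sum(trial(False) for _ in range(n)) / n)  # ~1/3 (stay)
```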

It's possible that Wikipedia's status as not being a cite-able source is part of the reason why it's so good. I'm not sure. But the fact that a system based entirely on voluntary contributions so thoroughly outperforms academic journals is remarkable.

Another more rambly aspect here is that, when I hear someone lament the quality of Wikipedia, almost always my impression is that this person is doing superiority signaling rather than having a legitimate reason for the comment.

Comment by Rafael Harth (sil-ver) on Clarifying Factored Cognition · 2020-12-17T22:45:11.896Z · LW · GW

I agree for literal HCH. However, I think that falls under brute force, which is the one thing that HCH isn't 'allowed' to do because it can't be emulated. I think I say this somewhere in a footnote.

Comment by Rafael Harth (sil-ver) on Hiding Complexity · 2020-12-17T22:41:54.028Z · LW · GW

This is a really good comment. I'm not sure yet what I think about it, but it's possible that the post is not quite right. Which might be a big deal because the upcoming sequence relies on it pretty heavily. I take purely abstract examples seriously.

One thing to note, though, is that your counterexample is less of a counterexample than it looks at first glance, because while the size of the solutions to [subproblems of your natural decomposition] can be made arbitrarily large, the size of the overall solution grows equally fast.

If we allow two subproblems, then the optimal decomposition (as defined by the post) would be 'find $t_1$' and 'find $t_2$', where $t_1$ and $t_2$ denote the first and second half of the solution string $t$. Here, the solutions to subproblems are half as long, which is optimal.

These subproblems sound like they're harder to solve, but that's not necessarily the case; it depends on the problem. And if they can be solved, it seems like the decomposition would still be preferable.