Posts

Has Moore's Law actually slowed down? 2019-08-20T19:18:41.488Z · score: 9 (7 votes)
How can you use music to boost learning? 2019-08-17T06:59:32.582Z · score: 8 (4 votes)
A Primer on Matrix Calculus, Part 3: The Chain Rule 2019-08-17T01:50:29.439Z · score: 5 (2 votes)
A Primer on Matrix Calculus, Part 2: Jacobians and other fun 2019-08-15T01:13:16.070Z · score: 17 (7 votes)
A Primer on Matrix Calculus, Part 1: Basic review 2019-08-12T23:44:37.068Z · score: 19 (7 votes)
Matthew Barnett's Shortform 2019-08-09T05:17:47.768Z · score: 5 (5 votes)
Why Gradients Vanish and Explode 2019-08-09T02:54:44.199Z · score: 27 (14 votes)
Four Ways An Impact Measure Could Help Alignment 2019-08-08T00:10:14.304Z · score: 21 (25 votes)
Understanding Recent Impact Measures 2019-08-07T04:57:04.352Z · score: 17 (6 votes)
What are the best resources for examining the evidence for anthropogenic climate change? 2019-08-06T02:53:06.133Z · score: 11 (8 votes)
A Survey of Early Impact Measures 2019-08-06T01:22:27.421Z · score: 22 (7 votes)
Rethinking Batch Normalization 2019-08-02T20:21:16.124Z · score: 19 (5 votes)
Understanding Batch Normalization 2019-08-01T17:56:12.660Z · score: 19 (7 votes)
Walkthrough: The Transformer Architecture [Part 2/2] 2019-07-31T13:54:44.805Z · score: 6 (8 votes)
Walkthrough: The Transformer Architecture [Part 1/2] 2019-07-30T13:54:14.406Z · score: 30 (13 votes)

Comments

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-24T20:44:15.788Z · score: 1 (1 votes) · LW · GW

I agree I would not be able to actually accomplish time travel. The point is whether we could construct some object in Minkowski space (or whatever General Relativity uses, I'm not a physicist) that we considered to be loop-like. I don't think it's worth my time to figure out whether this is really possible, but I suspect that something like it may be.

Edit: I want to say that I do not have an intuition for physics or spacetime at all. My main reason for thinking this is possible is that my idea is fairly minimal: I think you might be able to do this even in R^3.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-24T20:34:36.913Z · score: 1 (1 votes) · LW · GW

I agree with the objection. :) Personally I'm not sure whether I'd want to be stuck in a loop of experiences repeating over and over forever.

However, even if we considered "true" immortality, repeat experiences are inevitable simply because there's a finite number of possible experiences. So, we'd have to start repeating things eventually.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-23T00:39:53.860Z · score: 1 (1 votes) · LW · GW

"Immortality is cool and all, but our universe is going to run down from entropy eventually"

I consider this argument wrong for two reasons. The first is the obvious reason, which is that even if immortality is impossible, it's still better to live for a long time.

The second reason why I think this argument is wrong is because I'm currently convinced that literal physical immortality is possible in our universe. Usually when I say this out loud I get an audible "what" or something to that effect, but I'm not kidding.

It's going to be hard to explain my intuitions for why I think real immortality is possible, so bear with me. First, this is what I'm not saying:

  • I'm not saying that we can outlast the heat death of the universe somehow
  • I'm not saying that we just need to shift our conception of immortality to be something like, "We live in the hearts of our countrymen" or anything like that.
  • I'm not saying that I have a specific plan for how to become immortal personally, and
  • I'm not saying that my proposal has no flaws whatsoever, or that this is a valid line of research to be conducting at the moment.

So what am I saying?

A typical model of our life as humans is that we are something like a worm in 4 dimensional space. On one side of the worm there's our birth, and on the other side of the worm is our untimely death. We 'live through' this worm, and that is our life. The length of our life is measured by considering the length of the worm in 4 dimensional space, measured just like a yardstick.

Now just change the perspective a little bit. If we could somehow abandon our current way of living, then maybe we can alter the geometry of this worm so that we are immortal. Consider: a circle has no starting point and no end. If someone could somehow 'live through' a circle, then their life would consist of an eternal loop through experiences, repeating endlessly.
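
One hedged way to make the picture concrete, using notation of my own rather than anything in the original comment: write a life as a curve through spacetime, with its length measured as with a yardstick.

    \gamma : [0,1] \to \mathbb{R}^4, \qquad L(\gamma) = \int_0^1 \lVert \gamma'(t) \rVert \, dt
    \quad \text{(an ordinary life: endpoints at birth and death)}

    \gamma : S^1 \to \mathbb{R}^4, \qquad L(\gamma) \text{ finite, but with no endpoints:}
    \quad \text{no birth point, no death point, no well-defined age.}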

The idea is that we somehow construct a physical manifestation of this immortality circle. I think of it like an actual loop in 4 dimensional space because it's difficult to visualize without an analogy. A superintelligence could perhaps predict what type of actions would be necessary to construct this immortal loop. And once it is constructed, it'll be there forever.

From an outside view in our 3d mind's eye, the construction of this loop would look very strange. It could look like something popping into existence suddenly and getting larger, and then suddenly popping out of existence. I don't really know; that's just the intuition.

What matters is that within this loop someone will be living their life on repeat. True Déjà vu. Each moment they live is in their future, and in their past. There are no new experiences and no novelty, but the superintelligence can construct it so that this part is not unenjoyable. There would be no right answer to the question "how old are you." And in my view, it is perfectly valid to say that this person is truly, actually immortal.

Perhaps someone who valued immortality would want one of these loops to be constructed for themselves. Perhaps for some reason constructing one of these things is impossible in our universe (though I suspect that it's not). There are anthropic reasons that I have considered for why constructing it might not be worth it... but that would be too much to go into for this shortform post.

To close, I currently see no knockdown reasons to believe that this sort of scheme is impossible.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-22T22:38:45.252Z · score: 1 (1 votes) · LW · GW

Thanks for engaging with me on this thing. :)

I know I'm not being as clear as I could possibly be, and at some points I sort of feel like just throwing "Quining Qualia" or Keith Frankish's articles or a whole bunch of other blog posts at people and saying, "Please just read this and re-read it until you have a very distinct intuition about what I am saying." But I know that that type of debate is not helpful.

I think I have an OK-to-good understanding of what you are saying. My model of your reply is something like this,

"Your claim is that qualia don't exist because nothing with these three properties exists (ineffability/private/intrinsic), but it's not clear to me that these three properties are universally identified with qualia. When I go to Wikipedia or other sources, they usually identify qualia with 'what it's like' rather than these three very specific things that Daniel Dennett happened to list once. So, I still think that I am pointing to something real when I talk about 'what it's like' and you are only disputing a perhaps-strawman version of qualia."

Please correct me if this model of you is inaccurate.

I recognize what you are saying, and I agree with the place you are coming from. I really do. And furthermore, I really really agree with the idea that we should go further than skepticism and we should always ask more questions even after we have concluded that something doesn't exist.

However, the place I get off the boat is where you keep talking about how this 'what it's like' thing is actually referring to something coherent in the real world that has a crisp, natural boundary around it. That's the disagreement.

I don't think it's an accident of history either that those properties are identified with qualia. The whole reason Daniel Dennett identified them is that he showed they were the necessary conclusion of the sort of thought experiments people use for qualia. He spends the first several paragraphs of his essay on the matter justifying them using various intuition pumps.

Point being, when you are asked to clarify what 'what it's like' means, you'll probably start pointing to examples. Like, you might say, "Well, I know what it's like to see the color green, so that's an example of a quale." And Daniel Dennett would then press the person further and go, "OK, could you clarify what you mean when you say you 'know what it's like to see green'?" and the person would say, "No, I can't describe it using words. And it's not clear to me that it's even the kind of thing that could be described, since I can't possibly conceive of an English sentence that would describe the color green to a blind person." And then Daniel Dennett would shout, "Aha! So you do believe in ineffability!"

The point of those three properties (actually he lists 4, I think) is not that they are inherently tied to the definition. It's that the definition is vague, and every time people are pressed to be more clear about what they mean, they start spouting nonsense. Dennett did valid and good deconfusion work where he showed that people go wrong in these four places, and then showed how there's no physical thing that could possibly allow those four things.

These properties also show up all over the various thought experiments that people use when talking about qualia. For example, Nagel uses the private property in his essay "What Is it Like to Be a Bat?" Chalmers uses the intrinsic property when he talks about p-zombies being physically identical to humans in every respect except for qualia. Frank Jackson used the ineffability property when he talked about how Mary the neuroscientist had something missing when she was in the black and white room.

All of this is important to recognize. Because if you still want to say, "But I'm still pointing to something valid and real even if you want to reject this other strawman-entity" then I'm going to treat you like the person who wants to believe in souls even after they've been shown that nothing soul-like exists in this universe.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-22T21:19:40.997Z · score: 1 (1 votes) · LW · GW

If you identify qualia as behavioral parts of our physical models, then are you also willing to discard the properties philosophers have associated with qualia, such as

  • Ineffable, as they can't be explained using just words or mathematical sentences
  • Private, as they are inaccessible to outside third-person observers
  • Intrinsic, as they are fundamental to the way we experience the world

If you are willing to discard these properties, then I suggest we stop using the word "qualia", since you have simply taken all the meaning away once you have identified them with things that actually exist. This is what I mean when I say that I am denying qualia.

It is analogous to someone who denies that souls exist by first conceding that we could identify certain physical configurations as examples of souls, but then explaining that this would be confusing to anyone who talks about souls in the traditional sense. Far better in my view to discard the idea altogether.

Comment by matthew-barnett on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-22T18:38:08.460Z · score: 1 (1 votes) · LW · GW

You're right. I initially put this in the answer category, but I really meant it as clarification. I assumed that the personal question was more important since the humanity question is not very useful (except maybe to governments and large corporations).

Comment by matthew-barnett on Simulation Argument: Why aren't ancestor simulations outnumbered by transhumans? · 2019-08-22T17:40:27.956Z · score: 3 (2 votes) · LW · GW
I guess the question boils down to the choice of reference classes, so what makes the reference class "early 21st century humans" so special?

One very speculative reason why it might be worth modeling 21st century humanity is that this century could be a pivotal period in civilizational development. This might be useful because it provides insight into what sort of value systems end up getting "locked in" after this stage of our development concludes.

Roughly speaking, given that the future civilization could determine the distribution of value systems that are eventually optimized by civilizations at our stage of development, they could use this information to predict what type of stuff is being optimized throughout the multiverse. This is helpful because it allows the future civilization to cooperate with other civilizations in the multiverse, which is probably useful if the civilization cares about more than just astronomical waste.

Comment by matthew-barnett on Open & Welcome Thread August 2019 · 2019-08-22T02:39:29.812Z · score: 1 (1 votes) · LW · GW

Will Lesswrong at some point have curated shortform posts? Furthermore, is such a feature desirable? I will leave these questions here for discussion.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-22T01:41:14.398Z · score: 4 (2 votes) · LW · GW

I generally agree with the heuristic that we should "live on the mainline", meaning that we should mostly plan for events which capture the dominant share of our probability. This heuristic gives me a tendency to do some of the following things:

  • Work on projects that I think have a medium-to-high chance of succeeding and quickly abandon things that seem like they are failing.
  • Plan my career trajectory based on where I think I can plausibly maximize my long term values.
  • Study subjects only if I think that I will need to understand them at some point in order to grasp an important concept. See more details here.
  • Avoid doing work that leverages small probabilities of exceptionally bad outcomes. For example, I don't focus my studying on worst-case AI safety risk (although I do think that analyzing worst-case failure modes is useful from the standpoint of a security mindset).

I see a few problems with this heuristic, however, and I'm not sure quite how to resolve them. More specifically, I tend to float freely between different projects because I am quick to abandon things if I feel like they aren't working out (compare this to the mindset that some game developers have when they realize their latest game idea isn't very good).

One case where this shows up is when I change my beliefs about the most effective ways to spend my time as far as long-term future scenarios are concerned. I will sometimes read an argument about how some line of inquiry is promising and, for an entire day, believe that this would be a good thing to work on, only for the next day to bring another argument.

And things like my AI timeline predictions vary erratically, much more than I expect most people's: I sometimes wake up and think that AI might be just 10 years away and other days I wake up and wonder if most of this stuff is more like a century away.

This general behavior makes me into someone who doesn't stay consistent on what I try to do. My life therefore resembles a battle between two competing heuristics: on one side there's the heuristic of planning for the mainline, and on the other there's the heuristic of committing to things even if they aren't panning out. I am unsure of the best way to resolve this conflict.

Comment by matthew-barnett on Two senses of “optimizer” · 2019-08-21T17:25:57.864Z · score: 2 (2 votes) · LW · GW

The dominant framework that I expect people who disagree with this distinction to have is simply that when optimizers become more powerful, there might be a smooth transition between an optimizer_1 and an optimizer_2. That is, if an optimizer is trained on some simulated environment, then from our point of view it may well look like it is performing a local constrained search for policies within its training environment. However, when the optimizer is taken off the distribution, then it may act more like an optimizer_2.

One particular example would be if we were dumping so much compute into selecting for mesa optimizers that they became powerful enough to understand external reality. On the training distribution they would do well, but off it they would just aim for whatever their mesa objective was. In this case it might look more like it was just an optimizer_2 all along and we were simply mistaken about its search capabilities, but on the other hand, the task we gave it was limited enough that we initially thought it would only run optimizer_1 searches.

That said, I agree that it is difficult to see how such a transition from optimizer_1 to optimizer_2 could occur in the real world.

Comment by matthew-barnett on Walkthrough: The Transformer Architecture [Part 2/2] · 2019-08-21T00:28:31.442Z · score: 3 (2 votes) · LW · GW

Thanks :)

There are actually quite a few errors in this post. Thanks for catching more. At some point I'll probably go back and fix stuff.

Comment by matthew-barnett on What are the reasons to *not* consider reducing AI-Xrisk the highest priority cause? · 2019-08-20T22:05:18.573Z · score: 12 (14 votes) · LW · GW

There are two questions which I think are important to distinguish:

1. Is AI x-risk the top priority for humanity?

2. Is AI x-risk the top priority of some individual?

The first question is perhaps extremely important in a general sense. However, the second question is, I think, more useful since it provides actionable information to specific people. Of course, the difficulty of answering the second question is that it depends heavily on individual factors, such as

  • The ethical system the individual is using to evaluate the question.
  • The specific talents and time constraints of the individual.

I also partially object to placing AI x-risk into one entire bundle. There are many ways that people can influence the development of artificial intelligence:

  • Technical research
  • Social research to predict and intervene on governance for AI
  • AI forecasting to help predict which types of AI will end up existing and what their impact will be

Even within technical research, it is generally considered that there are different approaches:

  • Machine learning research with an emphasis on creating systems that could scale to superhuman capabilities while remaining aligned. This would include, but would not be limited to
    • Paul Christiano-style research, such as expanding iterated distillation and amplification
    • ML transparency
    • ML robustness to distributional shifts
  • Fundamental mathematical research which could help dissolve confusion about AI capabilities and alignment. This includes
    • Uncovering insights into decision theory
    • Discovering the necessary conditions for a system to be value aligned
    • Examining how systems could be stable upon reflection, such as after self-modification

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T20:54:57.879Z · score: 1 (1 votes) · LW · GW

I am not denying that humans take in sensory input and process it using their internal neural networks. I am denying that this process has any of the properties associated with consciousness in the philosophical sense. And I am making the additional claim that if you merely redefine consciousness so that it lacks these philosophical properties, you have not actually explained anything or dissolved any confusion.

The illusionist approach is the best approach because it simultaneously takes consciousness seriously and doesn't contradict physics. By taking this approach we also have an understood paradigm for solving the hard problem of consciousness: namely, the hard problem is reduced to the meta-problem (see Chalmers).

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T20:30:36.654Z · score: 2 (2 votes) · LW · GW
There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.

My basic claim is that the way that people use the word qualia implicitly implies the ontological extensions. By using the term, you are either smuggling these extensions in, or you are using the term in a way that no philosopher uses it. Here are some intuitions:

Qualia are private entities which occur to us and can't be inspected via third person science.

Qualia are ineffable; you can't explain them using a sufficiently complex English or mathematical sentence.

Qualia are intrinsic; you couldn't construct a quale even if you had the right set of particles.

etc.

Now, that's not to say that you can't define qualia in such a way that these ontological extensions are avoided. But why do so? If you are simply re-defining the phenomenon, then you have not explained anything. The intuitions above still remain, and there is something still unexplained: namely, why people think that there are entities with the above properties.

That's why I think that instead, the illusionist approach is the correct one. Let me quote Keith Frankish, who I think does a good job explaining this point of view,

Suppose we encounter something that seems anomalous, in the sense of being radically inexplicable within our established scientific worldview. Psychokinesis is an example. We would have, broadly speaking, three options.
First, we could accept that the phenomenon is real and explore the implications of its existence, proposing major revisions or extensions to our science, perhaps amounting to a paradigm shift. In the case of psychokinesis, we might posit previously unknown psychic forces and embark on a major revision of physics to accommodate them.
Second, we could argue that, although the phenomenon is real, it is not in fact anomalous and can be explained within current science. Thus, we would accept that people really can move things with their unaided minds but argue that this ability depends on known forces, such as electromagnetism.
Third, we could argue that the phenomenon is illusory and set about investigating how the illusion is produced. Thus, we might argue that people who seem to have psychokinetic powers are employing some trick to make it seem as if they are mentally influencing objects.

In the case of lightning, I think that the first approach would be correct, since lightning forms a valid physical category under which we can cast our scientific predictions of the world. In the case of the orbit of Uranus, the second approach is correct, since it was adequately explained by appealing to understood Newtonian physics. However, the third approach is most apt for bizarre phenomena that seem at first glance to be entirely incompatible with our physics. And qualia certainly fit the bill in that respect.


Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T05:45:15.477Z · score: 2 (2 votes) · LW · GW

I mean, I agree that this was mostly covered in the sequences. But I also think that I disagree with the way that most people frame the debate. At least personally I have seen people who I know have read the sequences still make basic errors. So I'm just leaving this here to explain my point of view.

Intuition: On a first approximation, there is something that it is like to be us. In other words, we are beings who have qualia.

Counterintuition: In order for qualia to exist, there would need to exist entities which are private, ineffable, intrinsic, and subjective. This can't be, since physics is public, effable, and objective, and therefore contradicts the existence of qualia.

Intuition: But even if I agree with you that qualia don't exist, there still seems to be something left unexplained.

Counterintuition: We can explain why you think there's something unexplained because we can explain the cause of your belief in qualia, and why you think they have these properties. By explaining why you believe it we have explained all there is to explain.

Intuition: But you have merely said that we could explain it. You have not actually explained it.

Counterintuition: Even without the precise explanation, we now have a paradigm for explaining consciousness, so it is not mysterious anymore.

This is essentially the point where I leave.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T04:09:44.924Z · score: 1 (1 votes) · LW · GW
The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop.

If by perception you simply mean "You are an information processing device that takes signals in and outputs things" then this is entirely explicable within our current physical models, and I could dissolve the confusion fairly easily.

However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense, I think you are falling right into the trap! You would be doing something similar to the person who said, "But I am still praying to God!"

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T03:56:16.601Z · score: 1 (1 votes) · LW · GW

Also just in general, I disagree that skepticism is not progress. If I said, "I don't believe in God because there's nothing in the universe with those properties..." I don't think it's fair to say, "Cool, but like, I'm still praying to something right, and that needs to be explained" because I don't think that speaks fully to what I just denied.

In the case of religion, many people have a very strong intuition that God exists. So, is the atheist position not progress because we have not explained this intuition?

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T03:36:52.785Z · score: 1 (1 votes) · LW · GW
It feels like you're just changing the name of the confusing thing from 'the fact that I seem conscious to myself' to 'the fact that I'm experiencing an illusion of consciousness.' Cool, but, like, there's still a mysterious thing that seems quite important to actually explain.

I don't actually agree. Although I have not fully explained consciousness, I think that I have shown a lot.

In particular, I have shown us what the solution to the hard problem of consciousness would plausibly look like if we had unlimited funding and time. And to me, that's important.

And under my view, it's not going to look anything like, "Hey we discovered this mechanism in the brain that gives rise to consciousness." No, it's going to look more like, "Look at this mechanism in the brain that makes humans talk about things even though the things they are talking about have no real world referent."

You might think that this is a useless achievement. I claim the contrary. As Chalmers points out, pretty much all the leading theories of consciousness fail the basic test of looking like an explanation rather than just sounding confused. Don't believe me? Read Section 3 in this paper.

In short, Chalmers reviews the current state of the art in consciousness explanations. He first goes into Integrated Information Theory (IIT), but then convincingly shows that IIT fails to explain why we would talk about consciousness and believe in consciousness. He does the same for global workspace theories, first order representational theories, higher order theories, consciousness-causes-collapse theories, and panpsychism. Simply put, none of them even approach an adequate baseline of looking like an explanation.

I also believe that if you follow my view carefully you might stop being confused about a lot of things. Like, do animals feel pain? Well it depends on your definition of pain -- consciousness is not real in any objective sense so this is a definition dispute. Same with asking whether person A is happier than person B, or asking whether computers will ever be conscious.

Perhaps this isn't an achievement strictly speaking relative to the standard Lesswrong points of view. But that's only because I think the standard Lesswrong point of view is correct. Yet even so, I still see people around me making fundamentally basic mistakes about consciousness. For instance, I see people treating consciousness as intrinsic, ineffable, private -- or they think there's an objectively right answer to whether animals feel pain and argue over this as if it's not the same as a tree falling in a forest.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T02:32:26.586Z · score: 1 (1 votes) · LW · GW
Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.

To me this is a bit like the claim of someone who claimed psychic powers but still wanted to believe in physics who would say, "I assume you could perfectly well understand what was going on at a behavioral level within my brain, but there is still a datum left unexplained: the datum of me having psychic powers."

There are a number of ways to respond to the claim:

  • We could redefine psychic powers to include mere physical properties. This has the problem that psychics insist that psychic power is entirely separate from physical properties. Simple re-definition doesn't make the intuition go away and doesn't explain anything.
  • We could alternatively posit new physics which incorporates psychic powers. This has the problem that it violates Occam's razor, since the old physics was completely adequate. Hence the debunking argument I presented above.
  • Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it and to talk about it.

In the case of consciousness, the third response amounts to Illusionism, which is the view that I am defending. It has the advantage that it conservatively doesn't promise to contradict known physics, and it also does justice to the intuition that consciousness really exists.

I'd prefer to taboo 'Qualia' in case it has particular connotations I don't share. Just 'that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.'

To most philosophers who write about them, qualia are defined as the experience of what it's like. Roughly speaking, I agree with thinking of them as a particular form of perception that we experience.

However, it's not just any perception, since some perceptions can be unconscious perceptions. Qualia specifically refer to the qualitative aspects of our experience of the world: the taste of wine, the touch of fabric, the feeling of seeing blue, the suffering associated with physical pain etc. These are said to be directly apprehensible to our 'internal movie' that is playing inside our head. It is this type of property which I am applying the framework of illusionism to.

The reason I care about any of this is that I believe that a "perceptions-having-valence" is probably morally relevant.

I agree. That's why I typically take the view that consciousness is a powerful illusion, and that we should take it seriously. Those who simply re-define consciousness as essentially a synonym for "perception" or "observation" or "information" are not doing justice to the fact that it's the thing I care about in this world. I have a strong intuition that consciousness is what is valuable even despite the fact that I hold an illusionist view. To put it another way, I would care much less if you told me a computer was receiving a pain-signal (labeled in the code as some variable with suffering set to maximum), compared to the claim that a computer was actually suffering in the same way a human does.

Are you saying the my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?

Roughly speaking, yes. I am denying that that type of thing actually exists, including the valence claim.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T02:06:39.763Z · score: 1 (1 votes) · LW · GW

As a qualia denier, I sometimes feel like I side more with the Chalmers side of the argument, which at least admits that there's a strong intuition for consciousness. It's not that I think that the realist side is right, but it's that I see the naive physicalists making statements that seem to completely misinterpret the realist's argument.

I don't mean to single you out in particular. However, you state that Mary's room seems uninteresting because Mary is able to predict the "bit pattern" of color qualia. This seems to me to completely miss the point. When you look at the sky and see blue, is it immediately apprehensible as a simple bit pattern? Or does it at least seem to have qualitative properties too?

I'm not sure how to import my argument onto your brain without you at least seeing this intuition, which is something I considered obvious for many years.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-20T01:39:48.601Z · score: 1 (1 votes) · LW · GW

I think you are using the word "observation" to refer to consciousness. If this is true, then I do not deny that humans take in observations and process them.

However, I think the issue is that you have simply re-defined consciousness into something which would be unrecognizable to the philosopher. To that extent, I don't say you are wrong, but I will allege that you have not done enough to respond to the consciousness-realist's intuition that consciousness is different from physical properties. Let me explain:

If qualia are just observations, then it seems obvious that Mary is not missing any information in her room, since she can perfectly well understand and model the process by which people receive color observations.

Likewise, if qualia are merely observations, then the Zombie argument amounts to saying that p-Zombies are beings which can't observe anything. This seems patently absurd to me, and doesn't seem like it's what Chalmers meant at all when he came up with the thought experiment.

Likewise, if we were to ask, "Is a bat conscious?" then the answer would be a vacuous "yes" under your view, since bats use echolocation to take in observations and process information.

In this view even my computer is conscious since it has a camera on it. For this reason, I suggest we are talking about two different things.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T23:54:45.608Z · score: 1 (1 votes) · LW · GW

If belief is construed as some sort of representation which stands for external reality (as in the case of some correspondence theories of truth), then we can take the claim to be a strong prediction of contemporary neuroscience. Ditto for whether we can explain why we talk about qualia.

It's not that I could explain exactly why you in particular talk about qualia. It's that we have an established paradigm for explaining it.

It's similar in the respect that we have an established paradigm for explaining why people report being able to see color. We can model the eye, and the visual cortex, and we have some idea of what neurons do even though we lack the specific information about how the whole thing fits together. And we could imagine that in the limit of perfect neuroscience, we could synthesize this information to trace back the reason why you said a particular thing.

Since we do not have perfect neuroscience, the best analogy would be analyzing the 'beliefs' and predictions of an artificial neural network. If you asked me, "Why does this ANN predict that this image is a 5 with 98% probability?", it would be difficult to say exactly why, even with full access to the neural network parameters.

However, we know that unless our conception of neural networks is completely incorrect, in principle we could trace exactly why the neural network made that judgement, including the exact steps that caused the neural network to have the parameters that it has in the first place. And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.
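
To give a hedged, concrete illustration of what partially "tracing" such a judgement can look like in practice (this sketch is my own addition, not anything from the comment; model, image, and the target class are hypothetical stand-ins for a trained classifier and an input), one simple tool is gradient-based attribution, which asks which input pixels most affect the network's confidence in the predicted class:

    import torch

    def input_attribution(model, image, target_class=5):
        # image: a single input batch of shape (1, ...); model: a trained classifier.
        image = image.detach().clone().requires_grad_(True)
        log_probs = torch.log_softmax(model(image), dim=-1)
        # Gradient of the predicted class's log-probability with respect to the input.
        log_probs[0, target_class].backward()
        # Same shape as the input; larger magnitudes mark pixels with more influence
        # on the "this is a 5" judgement.
        return image.grad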

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T23:43:02.722Z · score: 1 (1 votes) · LW · GW

Here's a thought experiment which helped me lose my 'belief' in qualia: would a robot scientist, who was only designed to study physics and make predictions about the world, ever invent qualia as a hypothesis?

Assuming the actual mouth movements we make when we say things like, "Qualia exist" are explainable via the scientific method, the robot scientist could still predict that we would talk and write about consciousness. But would it posit consciousness as a separate entity altogether? Would it treat consciousness as a deep mystery, even after peering into our brains and finding nothing but electrical impulses?

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T23:04:24.670Z · score: 1 (2 votes) · LW · GW

It seems to me that you are trying to recover the properties of conscious experience in a way that can be reduced to physics. Ultimately, I just feel that this approach is not likely to succeed without radical revisions to what you consider to be conscious experience. :)

Generally speaking, I agree with the dualists who argue that physics is incompatible with the claimed properties of qualia. Unlike the dualists, I see this as a strike against qualia rather than a strike against physics. David Chalmers does a great job in his articles outlining why conscious properties don't fit nicely in our normal physical models.

It's not simply that we are awaiting more data to fill in the details: it's that there seems to be no way even in principle to incorporate conscious experience into physics. Physics is just a different type of beast: it has no mental core, it is entirely made up of mathematical relations, and is completely global. Consciousness as it's described seems entirely inexplicable in that respect, and I don't see how it could possibly supervene on the physical.

One could imagine a hypothetical heaven-believer (someone who claimed to have gone to heaven and back) listing possible ways to incorporate their experience into physics. They could say,

  • Hard-to-eff, as it's not clear how physics interacts with the heavenly realm. We must do more work to find out where the entry points of heaven and earth are.
  • In practice private due to the fact that technology hasn't been developed yet that can allow me to send messages back from heaven while I'm there.
  • Pretty directly apprehensible because how would it even be possible for me to have experienced that without heaven literally being real!

On the other hand, a skeptic could reply that:

Even if mind reading technology isn't good enough yet, our best models say that humans can be described as complicated computers with a particular neural network architecture. And we know that computers can have bugs in them causing them to say things when there is no logical justification.

Also, we know that computers can lack perfect introspection, so even if the computer is utterly convinced that heaven is real, this could just be due to the fact that it is following its programming and is exceptionally stubborn.

Heaven has no clear interpretation in our physical models. Yes, we could see that a supervenience is possible. But why rely on that hope? Isn't it better to say that the belief is caused by some sort of internal illusion? The latter hypothesis is at least explicable within our models and doesn't require us to make new fundamental philosophical advances.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T22:18:07.028Z · score: 3 (2 votes) · LW · GW

Sure. There are a number of properties usually associated with qualia which are the things I deny. If we strip these properties away (something Keith Frankish refers to as zero qualia) then we can still say that they exist. But it's confusing to say that something exists when its properties are so minimal. Daniel Dennett listed a number of properties that philosophers have assigned to qualia and conscious experience more generally:

(1) ineffable (2) intrinsic (3) private (4) directly or immediately apprehensible

Ineffable because there's something Mary the neuroscientist is missing when she is in the black and white room. And someone who tried explaining color to her would not be able to do so fully.

Intrinsic because it cannot be reduced to bare physical entities, like electrons (think: could you construct a quale if you had the right set of particles?).

Private because they are accessible to us and not globally available. In this sense, if you tried to find out the qualia that a mouse was experiencing as it fell victim to a trap, you would come up fundamentally short because it was specific to the mouse mind and not yours. Or as Nagel put it, there's no way that third person science could discover what it's like to be a bat.

Directly apprehensible because they are the elementary things that make up our experience of the world. Look around and qualia are just what you find. They are the building blocks of our perception of the world.

It's not necessarily that none of these properties could be steelmanned. It is just that they are so far from being steelmannable that it is better to deny their existence entirely. It is the same as my analogy with a person who claims to have visited heaven. We could either talk about it as illusory or non-illusory. But for practical purposes, if we chose the non-illusory route we would probably be quite confused. That is, if we tried finding heaven inside the physical world, with the same properties as the claimant had proposed, then we would come up short. Far better then, to treat it as a mistake inside of our cognitive hardware.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T21:40:50.853Z · score: 1 (1 votes) · LW · GW

I won't lie -- I have a very strong intuition that there's this visual field in front of me, and that I can hear sounds that have distinct qualities, and simultaneously I can feel thoughts rush into my head as if there is an internal speaker and listener. And when I reflect on some visual in the distance, it seems as though the colors are very crisp and exist in some way independent of simple information processing in a computer-type device. It all seems very real to me.

I think the main claim of the illusionist is that these intuitions (at least insofar as the intuitions are making claims about the properties of qualia) are just radically incorrect. It's as if our brains have an internal error in them, not allowing us to understand the true nature of these entities. It's not that we can't see or something like that. It's just that the quality of perceiving the world has essentially an identical structure to what one might imagine a computer with a camera would "see."

Analogy: Some people who claim to have experienced heaven aren't just making stuff up. In some sense, their perception is real. It just doesn't have the properties we would expect it to have at face value. And if we actually tried looking for heaven in the physical world we would find it to be little else than an illusion.

Comment by matthew-barnett on Goodhart's Curse and Limitations on AI Alignment · 2019-08-19T20:52:35.804Z · score: 1 (1 votes) · LW · GW
a very slight misalignment would be disastrous. That seems possible, per Eliezer's Rocket Example, but is far from certain.

Just a minor nitpick, I don't think the point of the Rocket Alignment Metaphor was supposed to be that slight misalignment was catastrophic. I think the more apt interpretation is that apparent alignment does not equal actual alignment, and you need to do a lot of work before you get to the point where you can talk meaningfully about aligning an AI at all. Relevant quote from the essay,

It’s not that current rocket ideas are almost right, and we just need to solve one or two more problems to make them work. The conceptual distance that separates anyone from solving the rocket alignment problem is much greater than that.
Right now everyone is confused about rocket trajectories, and we’re trying to become less confused. That’s what we need to do next, not run out and advise rocket engineers to build their rockets the way that our current math papers are talking about. Not until we stop being confused about extremely basic questions like why the Earth doesn’t fall into the Sun.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T18:48:51.145Z · score: 3 (2 votes) · LW · GW

In discussions about consciousness I find myself repeating the same basic argument against the existence of qualia constantly. I don't do this just to be annoying: It is just my experience that

1. People find consciousness really hard to think about, and the topic has been known to cause a lot of disagreements.

2. Personally I think that this particular argument dissolved perhaps 50% of all my confusion about the topic, and was one of the simplest, clearest arguments that I've ever seen.

I am not being original either. The argument is the same one that has been used in various forms across Illusionist/Eliminativist literature that I can find on the internet. Eliezer Yudkowsky used a version of it many years ago. Even David Chalmers, who is quite the formidable consciousness realist, admits in The Meta-Problem of Consciousness that the argument is the best one he can find against his position.

The argument is simply this:

If we are able to explain why you believe in, and talk about, qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.

This is the standard debunking argument. It has a more general form which can be used to deny the existence of a lot of other non-reductive things: distinct personal identities, gods, spirits, libertarian free will, a mind-independent morality etc. In some sense it's just an extended version of Occam's razor, showing us that qualia don't do anything in our physical theories, and thus can be rejected as things that actually exist out there in any sense.

To me this argument is very clear, and yet I find myself arguing it a lot. I am not sure how else to get people to see my side of it other than sending them a bunch of articles which more-or-less make the exact same argument but from different perspectives.

I think the human brain is built to have a blind spot on a lot of things, and consciousness is perhaps one of them. I think quite a bit about how, if humanity is not able to think clearly about this thing which we have spent many research years on, then there might be some other low-hanging philosophical fruit still remaining.

Addendum: I am not saying I have consciousness figured out. However, I think it's analogous to how atheists haven't "got religion figured out" yet they have at the very least taken their first steps by actually rejecting religion. It's not a full theory of religious belief, or even a theory at all. It's just the first thing you do if you want to understand the subject. I roughly agree with Keith Frankish's take on the matter.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-19T18:06:28.176Z · score: 4 (3 votes) · LW · GW

Related to: Realism about rationality

I have talked to some people who say that they value ethical reflection, and would prefer that humanity reflected for a very long time before colonizing the stars. In a sense I agree, but at the same time I can't help but think that "reflection" is a vacuous feel-good word that has no shared common meaning.

Some forms of reflection are clearly good. Epistemic reflection is good if you are a consequentialist, since it can help you get what you want. I also agree that narrow forms of reflection can also be good. One example of a narrow form of reflection is philosophical reflection where we compare the details of two possible outcomes and then decide which one is better.

However, there are much broader forms of reflection which I'm more hesitant to endorse. Namely, the vague types of reflection, such as reflecting on whether we really value happiness, or whether we should really truly be worried about animal suffering.

I can perhaps sympathize with the intuition that we should really try to make sure that what we put into an AI is what we really want, rather than just what we superficially want. But fundamentally, I have skepticism that there is any canonical way of doing this type of reflection that leads to non-arbitrariness.

I have heard something along the lines of "I would want a reflective procedure that extrapolates my values as long as the procedure wasn't deceiving me or had some ulterior motive" but I just don't see how this type of reflection corresponds to any natural class. At some point, we will just have to put some arbitrariness into the value system, and there won't be any "right answer" about how the extrapolation is done.

Comment by matthew-barnett on A Primer on Matrix Calculus, Part 2: Jacobians and other fun · 2019-08-18T05:57:01.805Z · score: 1 (1 votes) · LW · GW
This isn't quite true; the determinant being small is consistent with small changes in input making arbitrarily large changes in output, just so long as small changes in input in a different direction make sufficiently small changes in output.

Hmm, good point. I suppose that's why we're not minimizing the determinant, but rather the Frobenius norm. Hence:

An alternative definition of the frobenius norm better highlights its connection to the motivation of regularizing the Jacobian frobenius

Makes sense.
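
For concreteness, here is a minimal sketch of what penalizing the Jacobian Frobenius norm can look like in code. It is my own illustration rather than anything from the post, and it assumes PyTorch; model, x, and lam are hypothetical names for a differentiable network, a single input, and a regularization weight.

    import torch

    def jacobian_frobenius_penalty(model, x):
        # Full Jacobian of the model's output with respect to a single input x;
        # the result has shape (output dims..., input dims...).
        jac = torch.autograd.functional.jacobian(model, x, create_graph=True)
        # Squared Frobenius norm: the sum of squared entries.
        return (jac ** 2).sum()

    # Hypothetical usage inside a training loop:
    # loss = task_loss + lam * jacobian_frobenius_penalty(model, x)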

Comment by matthew-barnett on A Primer on Matrix Calculus, Part 3: The Chain Rule · 2019-08-17T06:21:34.242Z · score: 1 (1 votes) · LW · GW

Thanks. I agree with using computational graphs. I think backpropagation is much easier to understand using graphs if you are new to the subject. The reason I didn't do it here is mainly that there are already a lot of guides online that do that, but fewer that introduce tensors and how they interact with deep learning. Also, I'm writing these posts primarily so that I can learn, although of course I hope other people find these posts useful.

I also want to add that this guide is far from complete, and so I would want to read yours to see what types of things I might have done better. :)

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-16T23:37:16.820Z · score: 3 (2 votes) · LW · GW
Perhaps you shouldn't frame it as "study early" vs "study late", but "study X" vs "study Y".

My point was that these are separate questions. If you begin to suspect that understanding ML research requires an understanding of type theory, then you can start learning type theory. Alternatively, you can learn type theory before researching machine learning -- i.e. reading machine learning papers -- in the hopes that it builds useful groundwork.

But what you can't do is learn type theory and read machine learning research papers at the same time. You must make tradeoffs. Each minute you spend learning type theory is a minute you could have spent reading more machine learning research.

The model I was trying to draw was not one where I said, "Don't learn math." I explicitly said it was a model where you learn math as needed.

My point was not intended to be about my abilities. This is a valid concern, but I did not think that was my primary argument. Even conditioning on having outstanding abilities to learn every subject, I still think my argument (weakly) holds.

Note: I also want to say I'm kind of confused, because I suspect that there's an implicit assumption that reading machine learning research is inherently easier than learning math. I side with the intuition that math isn't inherently difficult; it just requires memorizing a lot of things and practicing. The same is true for reading ML papers, which makes me confused about why this is being framed as a debate over whether people have certain abilities to learn and do research.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-16T22:17:14.055Z · score: 6 (3 votes) · LW · GW

That's a good point about motivated reasoning. I should distinguish arguments that the lazy approach is better for people and arguments that it's better for me. Whether it's better for people more generally depends on the reference class we're talking about. I will assume people who are interested in the foundations of mathematics as a hobby outside of AI safety should take my advice less seriously.

However, I still think that it's not exactly clear that going the foundational route is actually that useful on a per-unit time basis. The model I proposed wasn't as simple as "learn the formal math" versus "think more intuitively." It was specifically a question of whether we should learn the math on an as-needed basis. For that reason, I'm still skeptical that going out and reading textbooks on subjects that are only vaguely related to current machine learning work is valuable for the vast majority of people who want to go into AI safety as quickly as possible.

Sidenote: I think there's a failure mode of not adequately optimizing time, or being insensitive to time constraints. Learning an entire field of math from scratch takes a lot of time, even for the brightest people alive. I'm worried that, "Well, you never know if subject X might be useful" is sometimes used as a fully general counterargument. The question is not, "Might this be useful?" The question is, "Is this the most useful thing I could learn in the next time interval?"

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-16T20:17:52.934Z · score: 5 (4 votes) · LW · GW

Sometimes people will propose ideas, and then those ideas are met immediately after with harsh criticism. A very common tendency for humans is to defend our ideas and work against these criticisms, which often gets us into a state that people refer to as "defensive."

According to common wisdom, being in a defensive state is a bad thing. The rationale here is that we shouldn't get too attached to our own ideas. If we do get attached, we become liable to become crackpots who can't give an idea up because it would make us look bad if we did. Therefore, the common wisdom advocates treating ideas as being handed to us on a tablet from the clouds rather than as a product of our brain's thinking habits. Taking this advice allows us to detach ourselves from our ideas so that we don't confuse criticism with insults.

However, I think the exact opposite failure mode is not often enough pointed out and guarded against. Specifically, the failure mode is being too willing to abandon beliefs based on surface level counterarguments. To alleviate this I suggest we shouldn't be so ready to give up our ideas in the face of criticism.

This might sound irrational -- why should we get attached to our beliefs? I'm certainly not advocating that we should actually associate criticism with insults to our character or intelligence. Instead, my argument is that the process of defending our ideas against criticism generates a productive adversarial structure.

Consider two people. Person A desperately wants to believe proposition X, and person B desperately wants to believe not X. If B comes up to A and says, "Your belief in X is unfounded. Here are the reasons..." Person A can either admit defeat, or fall into defensive mode. If A admits defeat, they might indeed get closer to the truth. On the other hand, if A gets into defensive mode, they might also get closer to the truth in the process of desperately searching for evidence of X.

My thesis is this: the human brain is very good at selectively searching for evidence. In particular, given some belief that we want to hold onto, we will go to great lengths to justify it, searching for evidence that we otherwise would not have searched for if we were just detached from the debate. It's sort of like the difference between a debate between two people who are assigned their roles by a coin toss, and a debate between people who have spent their entire lives justifying why they are on one side. The first debate is an interesting spectacle, but I expect the second debate to contain much deeper theoretical insight.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-16T18:33:10.834Z · score: 17 (6 votes) · LW · GW

I get the feeling that, for AI safety, some people believe it's crucially important to be an expert in a whole bunch of fields of math in order to make any progress. In the past I took this advice and tried to deeply study computability theory, set theory, and type theory -- with the hope that this would someday give me greater insight into AI safety.

Now, I think I was taking the wrong approach. To be fair, I still think being an expert in a whole bunch of fields of math is probably useful, especially if you want very strong abilities to reason about complicated systems. But my model for how I frame my learning is quite different now.

The main model describing my current perspective is that employing a lazy style of learning is superior for AI safety work. Lazy is meant in the computer science sense: only learning something when it seems like you need to know it in order to understand something important. I will contrast this with the model that one should first learn a set of solid foundations before going any further.
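To gesture at the computer-science analogy, here is a minimal sketch in Python (purely illustrative; the topics and the study() function are made up, not a claim about any real curriculum):

```python
def study(topic):
    # Stand-in for the expensive step of actually learning a topic.
    print(f"studying {topic}")
    return f"knowledge of {topic}"

topics = ["measure theory", "type theory", "computability theory"]

# Foundations-first ("eager"): everything is studied up front, whether or not it's ever used.
eager_knowledge = [study(t) for t in topics]

# Lazy: a generator studies nothing until some downstream task actually demands it.
lazy_knowledge = (study(t) for t in topics)
first_needed = next(lazy_knowledge)  # only "measure theory" gets studied here
```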

Obviously, neither model can be correct in an absolute, extreme sense. I don't, as a silly example, think that people who can't do basic arithmetic should go into AI safety before building a foundation in math. And on the other side of the spectrum, I think it would be absurd to insist that one become a world-renowned mathematician before reading their first AI safety paper. That said, even though both models are wrong, my current preference is for the lazy model rather than the foundations model.

Here are some points in favor of both, informed by my first-person experience.

Points in favor of the foundations model:

  • If you don't have solid foundations in mathematics, you may not even be aware of things that you are missing.
  • Having solid foundations in mathematics will help you to think rigorously about things rather than having a vague non-reductionistic view of AI concepts.
    • Subpoint: MIRI work is motivated by coming up with new mathematics that can describe error-tolerant agents without relying on fuzzy statements like "machine learning relies on heuristics so we need to study heuristics rather than hard math to do alignment."
  • We should try to learn the math that will be useful for AI safety in the future, rather than what is being used for machine learning papers right now. If your view of AI is that it is at least a few decades away, then it's possible that learning the foundations of mathematics will be more robustly useful no matter where the field shifts.

Points in favor of the lazy model:

  • Time is limited and it usually takes several years to become proficient in the foundations of mathematics. This is time that could have been spent reading actual research directly related to AI safety.
  • The lazy model is better for my motivation, since it makes me feel like I am actually learning about what's important, rather than doing homework.
    • Learning foundational math often looks a lot like taking a shotgun and learning everything that seems vaguely relevant to agent foundations. Unless you have a very strong passion for this type of mathematics, it would be strange for this type of learning to be fun.
  • It's not clear that the MIRI approach is correct. I don't have a strong opinion on this, however.
    • Even if the MIRI approach were correct, I don't think doing foundational mathematics is my comparative advantage.
  • The lazy model will naturally force you to learn the things that are actually relevant, as measured by how often you come into contact with them. By contrast, the foundations model forces you to learn things which might not be relevant at all. Obviously, we won't know beforehand what is and isn't relevant, but I currently err on the side of saying that some things won't be relevant if they don't currently feed directly into machine learning work.
  • Even if AI is many decades away, machine learning has been around for a long time, and it seems like the math useful for machine learning hasn't changed much. So, it seems like a safe bet that foundational math won't be relevant for understanding normal machine learning research any time soon.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-16T01:05:11.256Z · score: 3 (4 votes) · LW · GW

For example, let's say I set a goal to write a blog post about a topic I'm learning in 4 hours, and half-way through I realize I don't understand one of the key underlying concepts related to the thing I intended to write about.

Interesting -- this exact same thing just happened to me a few hours ago. I was testing my technique by writing a post on variational autoencoders. Halfway through, I was very confused because I was trying to contrast them with GANs but didn't have enough material or knowledge to know the advantages of either.

During an actual test, the right thing to do would be to do my best given what I know already and finish as many questions as possible. But I'd argue that in the blog post case, I very well may be better off saying, "OK I'm going to go learn about this other thing until I understand it, even if I don't end up finishing the post I wanted to write."

I agree that's probably true. However, this creates a bad incentive where, at least in my case, I will slowly start making myself lazier during the testing phase because I know I can always just "give up" and learn the required concept afterwards.

At least in the case I described above, I just moved on to a different topic, because I was kind of getting sick of variational autoencoders. However, I was able to do this because I didn't have any external constraints, unlike with the method I described in the parent comment.

The pithy way to say this is that tests are basically pure Goodhart, and it's dangerous to turn every real-life task into a game of maximizing legible metrics.

That's true, although perhaps one could devise a sufficiently complex test such that it matches perfectly with what we really want... well, I'm not saying that's a solved problem in any sense.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-15T23:55:51.849Z · score: 3 (2 votes) · LW · GW

I agree that it is probably too hard to "take a final exam all the time." On the other hand, I feel like I can make a much weaker claim: that this is an improvement over a lot of productivity techniques, which often seem to depend, more or less, on just having enough willpower to actually learn.

At least in this case, each action you do can be informed directly by whether you actually succeed or fail at the goal (like getting upvotes on a post). Whether or not learning is a good instrumental proxy for getting upvotes in this setting is an open question.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-15T21:57:25.547Z · score: 3 (2 votes) · LW · GW

Yes, the difference is that you are creating an external environment which rewards you for success and punishes you for failure. This is similar to taking a final exam, which is my inspiration.

The problem with committing to work rather than success is that you can always rationalize something as "Oh, I worked hard" or "I put in my best effort." However, just as with a final exam, the only thing that will matter in the end is whether you actually do what it takes to get the high score. This incentivizes good consequentialist thinking and disincentivizes rationalization.

I agree there are things out of your control, but the same is true with final exams. For instance, the test-maker could have put something on the test that you didn't study much for. This encourages people to put extra effort into their assigned task to ensure robustness to outside forces.

Comment by matthew-barnett on A Primer on Matrix Calculus, Part 2: Jacobians and other fun · 2019-08-15T20:15:05.732Z · score: 2 (3 votes) · LW · GW

I'm not sure if you're referring to the fact that it is small. If so: apologies. At the time of posting there was (still is?) a bug preventing me from resizing images in posts. My understanding is that this is being fixed.

Also, yeah, I think zooming in would be good, because it means the classification is robust to changes (i.e., the image is still classified correctly even if we add noise to the input). Though I think it isn't actually zooming in; it's just that the decision basin around the input is getting larger.
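To make "robust to changes" a bit more concrete, here is a rough sketch of the kind of check I have in mind (the classify function, noise scale, and trial count are hypothetical placeholders, not anything from the post):

```python
import numpy as np

def noise_robustness(classify, x, sigma=0.05, trials=200, seed=0):
    # Estimate how often the predicted label survives small Gaussian perturbations of the input.
    # A fraction near 1.0 loosely corresponds to a larger "decision basin" around x.
    rng = np.random.default_rng(seed)
    base_label = classify(x)
    kept = sum(
        classify(x + sigma * rng.standard_normal(x.shape)) == base_label
        for _ in range(trials)
    )
    return kept / trials
```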

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-15T19:15:55.988Z · score: 16 (6 votes) · LW · GW

I think there is some serious low-hanging fruit for making people productive that I haven't seen anyone write about (not that I've looked very hard). Let me just introduce a proof of concept:

Final exams in university are typically about 3 hours long, and many people are able to take multiple finals in a single day, performing well on all of them. During a final exam, I notice that I am substantially more productive than usual. I make sure that every minute counts: I double-check everything and think deeply about each problem, making sure not to cut corners unless absolutely required by time constraints. Also, if I start daydreaming, I immediately notice that I'm doing so and cut it out. I believe this is also the experience of most other university students who care even a little bit about their grades.

Therefore, it seems like we have an example of an activity that can just automatically produce deep work. I can think of a few reasons why final exams would bring out the best of our productivity:

1. We care about our grade in the course, and the few hours in that room are the most impactful to our grade.

2. We are in an environment where distractions are explicitly prohibited, so we can't make excuses to ourselves about why we need to check Facebook or whatever.

3. There is a clock at the front of the room which makes us feel like time is limited. We can't just sit there doing nothing because then time will just slip away.

4. Every problem we do well on benefits us a little bit, meaning there's a gradient of success rather than a binary pass or fail (though sometimes it is binary). This means that we care a lot about optimizing every second, because we can always do slightly better.

If we wanted to do deep work on some other desired task, all four of these conditions seem replicable. Here is one idea (related to my own studying), although I'm sure I could come up with a better one if I thought about this for longer:

Set up a room where you are given a limited amount of resources (say, a few academic papers, a computer without an internet connection, and a textbook). Set aside a four-hour window during which you're not allowed to leave the room except to go to the bathroom (and some person explicitly checks in on you, say, twice to see whether you are doing what you say you are doing). Make it your goal to write a blog post explaining some technical concept. Afterwards, the blog post gets posted to Lesswrong (conditional on it being of at least minimal quality). You set some goal, like: it must achieve a score of +30 after 3 days. Commit to paying a friend $1 for each point you score below the target. So, if your blog post ends up at +15, you must pay $15 to your friend.
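For concreteness, the payout rule described above amounts to something like the following (a trivial sketch; the numbers just mirror the example):

```python
def penalty(score, target=30, dollars_per_point=1):
    # Amount owed to the friend if the post falls short of the target score;
    # nothing is owed at or above the target.
    return max(0, target - score) * dollars_per_point

print(penalty(15))  # 15 -- matches the "+15 means paying $15" example
```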

I can see a few problems with this design:

1. You are optimizing for upvotes, not clarity or understanding. The two might be correlated, but at the very least there's a Goodhart effect.

2. Your "friend" could downvote the post. It can easily be hacked by other people who are interested, and it encourages vote manipulation etc.

Still, I think that I might be on the right track towards something that boosts productivity by a lot.

Comment by matthew-barnett on Dony's Shortform Feed · 2019-08-15T18:50:23.845Z · score: 2 (2 votes) · LW · GW

If, as Dony was originally asking, it were possible to just get into a mental state where you could work productively (including creatively) indefinitely, people would have found it.

Perhaps not indefinitely, but I do think there are people like this already? There are some people who are much more productive than others, even at similar intelligence levels. The simplest explanation is that these people have simply discovered a way to be productive for many hours in a day.

Personally, I know it's at least possible to be productive for a long time (say, 10 hours with a few breaks). I also think professional gamers are typically productive for about this long on most days.

I think the main issue is that it's difficult to transfer insights and motivation to other people.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-15T17:46:34.089Z · score: 1 (1 votes) · LW · GW

Perhaps it says something about the human brain (or just mine) that I did not immediately think of that as a solution.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-14T17:00:09.856Z · score: 7 (4 votes) · LW · GW

The only flaw I find with this is that if I get stuck on an exercise, I face the following decision: should I look at the answer and move on, or should I keep at it?

If I choose the first option, this makes me feel like I've cheated. I'm not sure what it is about human psychology, but I think that if you've cheated once, you feel less guilty a second time because "I've already done it." So, I start cheating more and more, until soon enough I'm just skipping things and cutting corners again.

If I choose the second option, then I might be stuck for several hours, and this causes me to just abandon the textbook and develop an ugh field around it.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-13T18:26:45.300Z · score: 18 (6 votes) · LW · GW

Occasionally, I will ask someone who is very skilled in a certain subject how they became skilled in that subject so that I can copy their expertise. A common response is that I should read a textbook in the subject.

Eight years ago, Luke Muehlhauser wrote,

For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
I've since discovered that textbooks are usually the quickest and best way to learn new material.

However, I have repeatedly found that this is not good advice for me.

I want to briefly list the reasons why I don't find sitting down and reading a textbook all that helpful for learning. Perhaps, in doing so, someone will appear and say, "I agree completely; I feel exactly the same way," or someone will say, "I used to feel that way, but then I tried this..." Here is what I have discovered:

  • When I sit down to read a long textbook, I find myself constantly, almost subconsciously, checking how many pages I have read. For instance, if I have been sitting for over an hour and find that I have barely made a dent in the first chapter, much less the book, I feel a sense of hopelessness that I'll ever "make it through" the whole thing.
  • When I try to read a textbook cover to cover, I find myself much more concerned with finishing than with understanding. I want the satisfaction of being able to say I read the whole thing, every page. This means that I will sometimes cut corners in my understanding just to make it through a difficult part. This ends in disaster once the next chapter requires a solid understanding of the last.
  • Reading a long book feels less like slowly building insights and more like doing homework. By contrast, when I read blog posts it feels like there's no finish line, and I can quit at any time. When I do read a good blog post, I often end up thinking about its thesis for hours afterwards, solidifying the content in my mind. I cannot replicate this feeling with a textbook.
  • Textbooks seem overly formal at points, and they often do not repeat information, instead putting the burden on the reader to go back and re-read earlier sections. This makes them difficult to read in a linear fashion, which is straining.
  • If I don't understand a concept, I can get "stuck" on the textbook, disincentivizing me from finishing. By contrast, if I just learn the way Muehlhauser described, by "consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes," I feel much less stuck, since I can always just move from one source to the next without feeling like I have an obligation to finish.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-13T18:03:51.745Z · score: 1 (1 votes) · LW · GW

I'm not saying that I'm proud of this fact. It is mostly that I'm ignorant of it. :)

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-12T22:14:51.199Z · score: 3 (2 votes) · LW · GW

Those are all pretty good. :)

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-12T21:07:14.340Z · score: 1 (1 votes) · LW · GW

Then I will assert that I would in fact appreciate seeing the reasons for disagreement, even if, as may be the case, it comes down to axiomatic intuitions.

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-12T20:56:27.662Z · score: 1 (1 votes) · LW · GW

I might add that I also consider the development of ethical anti-realism to be another, perhaps more insightful, achievement. But this development is, from what I understand, usually attributed to Hume.

Depending on what you mean by "pleasure" and "pain," it is possible that you merely have a simple conception of the two words, which makes this identification incompatible with complexity of value. The more robust form of the distinction was provided by John Stuart Mill, who argued that some forms of pleasure can be more valuable than others (which is honestly quite similar to what we might find in the fun theory sequence...).

In its modern formulation, I would say that Bentham's contribution was identifying conscious states as the primary theater in which value can exist. I can hardly disagree, as I struggle to imagine things in this world which could possibly have value outside of conscious experience. Still, I think there are perhaps some, which is why I conceded by using the words "primary source of value" rather than "sole source of value."

To the extent that complexity of value disagrees with what I have written above, I am inclined to disagree with complexity of value :).

Comment by matthew-barnett on Matthew Barnett's Shortform · 2019-08-12T17:18:04.644Z · score: 6 (5 votes) · LW · GW

Forgive the clichéd scientism, but I recently realized that I can't think of any major philosophical developments in the last two centuries that occurred within academic philosophy. If I were to try to list major philosophical achievements since 1819, these would likely appear on my list, yet none of them came from people trained in philosophy:

  • A convincing, simple explanation for the apparent design we find in the living world (Darwin and Wallace).
  • The unification of time and space into one fabric (Einstein)
  • A solid foundation for axiomatic mathematics (Zermelo and Fraenkel).
  • A model of computation, and a plausible framework for explaining mental activity (Turing and Church).

By contrast, if we go back to previous centuries, I don't have much of an issue citing philosophical achievements from philosophers:

  • The identification of the pain-pleasure axis as the primary source of value (Bentham).
  • Advanced notions of causality, reductionism, scientific skepticism (Hume)
  • Extension of moral sympathies to those in the animal kingdom (too many philosophers to name)
  • Highlighting the value of wisdom and learned debate (Socrates, among others)

Of course, this is probably caused by my bias towards Lesswrong-adjacent philosophy. If I had to pick philosophers who have made major contributions, these people would be on my shortlist:

John Stuart Mill, Karl Marx, Thomas Nagel, Derek Parfit, Bertrand Russell, Arthur Schopenhauer.

Comment by matthew-barnett on Why Gradients Vanish and Explode · 2019-08-10T04:25:56.839Z · score: 3 (2 votes) · LW · GW

Interesting. I just re-read it, and you are completely right. Well, I wonder how that interacts with what I said above.