Matthew Barnett's Shortform

post by Matthew Barnett (matthew-barnett) · 2019-08-09T05:17:47.768Z · score: 7 (5 votes) · LW · GW · 141 comments

I intend to use my shortform feed for two purposes:

1. To post thoughts that I think are worth sharing that I can then reference in the future in order to explain some belief or opinion I have.

2. To post half-finished thoughts about the math or computer science thing I'm learning at the moment. These might be slightly boring and for that I apologize.


comment by Matthew Barnett (matthew-barnett) · 2019-09-16T23:06:00.638Z · score: 19 (6 votes) · LW · GW

There's a phenomenon I currently hypothesize to exist where direct attacks on the problem of AI alignment are criticized much more often than indirect attacks.

If this phenomenon exists, it could be advantageous to the field in the sense that it encourages thinking deeply about the problem before proposing solutions. But it could also be bad, because it disincentivizes direct attacks on the problem (if one is criticism-averse and would prefer their work be seen as useful).

I have arrived at this hypothesis from my observations: I have watched people propose solutions only to be met with immediate and forceful criticism from others, while people proposing non-solutions and indirect analyses are given little criticism at all. If this hypothesis is true, I suggest it is partly or mostly because direct attacks on the problem are easier to defeat via argument, since their assumptions are made plain.

If this is so, I consider it a potential hindrance to thought, since direct attacks are often the type of thing that leads to the most deconfusion -- not because the direct attack actually worked, but because in explaining how it failed, we learned what definitely doesn't work.

comment by Raemon · 2019-09-16T23:27:30.982Z · score: 6 (3 votes) · LW · GW

Nod. This is part of a general problem where vague things that can't be proven not to work are met with less criticism than "concrete enough to be wrong" things.

A partial solution is a norm wherein "concrete enough to be wrong" is seen as praise, and something people go out of their way to signal respect for.

comment by G Gordon Worley III (gworley) · 2019-09-17T18:19:05.793Z · score: 2 (1 votes) · LW · GW

Did you have some specific cases in mind when writing this? For example, HCH is interesting and not obviously going to fail in the ways that some other proposals I've seen would, and the proposal there seems to have gotten better as more details have been fleshed out, even if there's still some disagreement on things that can be tested eventually, just not yet. Against this we've seen lots of things, like various oracle AI proposals, that to my mind usually have fatal flaws right from the start, stemming from misunderstandings deep enough that they can't easily be salvaged.

I don't want to disincentivize thinking about solving AI alignment directly when I criticize something, but I also don't want to let pass things that have, to me, obvious problems: problems the authors probably didn't think about, or thought about from different assumptions that may be wrong (or maybe I will converse with them and learn that I was wrong!). It seems like an important part of learning in this space is proposing things and seeing why they don't work, so you can better understand the constraints of the problem space and work within them to find solutions.

comment by Matthew Barnett (matthew-barnett) · 2019-08-13T18:26:45.300Z · score: 18 (6 votes) · LW · GW

Occasionally, I will ask someone who is very skilled in a certain subject how they became skilled in that subject so that I can copy their expertise. A common response is that I should read a textbook in the subject.

Eight years ago, Luke Muehlhauser wrote [LW · GW],

For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
I've since discovered that textbooks are usually the quickest and best way to learn new material.

However, I have repeatedly found that this is not good advice for me [LW · GW].

I want to briefly list the reasons why I don't find sitting down and reading a textbook that helpful for learning. Perhaps, in doing so, someone else might appear and say, "I agree completely. I feel exactly the same way," or someone might appear and say, "I used to feel that way, but then I tried this..." This is what I have discovered:

  • When I sit down to read a long textbook, I find myself subconsciously constantly checking how many pages I have read. For instance, if I have been sitting down for over an hour and I find that I have barely made a dent in the first chapter, much less the book, I have a feeling of hopelessness that I'll ever be able to "make it through" the whole thing.
  • When I try to read a textbook cover to cover, I find myself much more concerned with finishing rather than understanding. I want the satisfaction of being able to say I read the whole thing, every page. This means that I will sometimes cut corners in my understanding just to make it through a difficult part. This ends in disaster once the next chapter requires a solid understanding of the last.
  • Reading a long book feels less like slowly building insights and more like doing homework. By contrast, when I read blog posts it feels like there's no finish line, and I can quit at any time. When I do read a good blog post, I often end up thinking about its thesis for hours afterwards, solidifying the content in my mind. I cannot replicate this feeling with a textbook.
  • Textbooks seem overly formal at points. They often do not repeat information, instead putting the burden on the reader to flip back and re-read earlier sections rather than restating key points. This makes them difficult to read in a linear fashion, which is straining.
  • If I don't understand a concept I can get "stuck" on the textbook, disincentivizing me from finishing. By contrast, if I just learned as Muehlhauser described, by "consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes" I feel much less stuck since I can always just move from one source to the next without feeling like I have an obligation to finish.
comment by ryan_b · 2019-08-13T21:13:44.600Z · score: 15 (5 votes) · LW · GW

I used to feel similarly, but then a few things changed for me and now I am pro-textbook. There are caveats - namely that I don't work through them continuously.

Textbooks seem overly formal at points

This is a big one for me, and probably the biggest change I made is being much more discriminating in what I look for in a textbook. My concerns are invariably practical, so I only demand enough formality to be relevant; otherwise I am concerned with a good reputation for explaining intuitions, graphics, examples, ease of reading. I would go as far as to say that style is probably the most important feature of a textbook.

As I mentioned, I don't work through them front to back, because that actually is homework. Instead I treat them more like a reference-with-a-hook; I look at them when I need to understand the particular thing in more depth, and then get out when I have what I need. But because it is contained in a textbook, this knowledge now has a natural link to steps before and after, so I have obvious places to go for regression and advancement.

I spend a lot of time thinking about what I need to learn, why I need to learn it, and how it relates to what I already know. This does an excellent job of helping things stick, and also of keeping me from getting too stuck because I have a battery of perspectives ready to deploy. This enables the reference approach.

I spend a lot of time doing what I have mentally termed triangulating, which is deliberately using different sources/currents of thought when I learn a subject. This winds up necessitating the reference approach, because I always wind up with questions that are neglected or unsatisfactorily addressed in a given source. Lately I really like founding papers and historical review papers right out of the gate, because these are prone to explaining motivations, subtle intuitions, and circumstances in a way instructional materials are not.

comment by An1lam · 2019-08-16T00:35:14.989Z · score: 8 (5 votes) · LW · GW

I've also been reading textbooks more and have experienced some frustration, but I've found two things that, so far, help me get less stuck and feel less guilt.

After trying to learn math from textbooks on my own for a month or so, I started paying a tutor (DM me for details) with whom I meet once a week. Like you, I struggle with getting stuck on hard exercises and/or concepts I don't understand, but having a tutor makes it easier for me to move on, knowing I can discuss my confusions with them in our next session. Unfortunately, paying a tutor requires actually having $ to spare on an ongoing basis, but I also suspect for some people it just "feels weird". If someone reading this is more deterred by this latter reason, consider that basically everyone who wants to seriously improve at any physical activity gets 1-on-1 instruction, but for some reason doing the same for mental activities as an adult is weirdly uncommon (and perhaps a little low status).

I've also started to follow MIT OCW courses for things I want to learn rather than trying to read entire textbooks. Yes, this means I may not cover as much material, but it has helped me better gauge how much time to spend on different topics and allows me to feel like I'm progressing. The major downside of this strategy is that I have to remind myself that even though I'm learning based on a course's materials, my goal is to learn the material in a way that's useful to me, not to memorize passwords [LW · GW]. Also, because I know how long the courses would take in a university context, I do occasionally feel guilt if I fall behind due to spending more time on a specific topic. Still, on net, using courses as loose guides has been working better for me than just trying to 100 percent entire math textbooks.

comment by cousin_it · 2019-08-14T06:27:29.596Z · score: 5 (3 votes) · LW · GW

When I try to read a textbook cover to cover, I find myself much more concerned with finishing rather than understanding. I want the satisfaction of being able to say I read the whole thing, every page. This means that I will sometimes cut corners in my understanding just to make it through a difficult part. This ends in disaster once the next chapter requires a solid understanding of the last.

When I read a textbook, I try to solve all exercises at the end of each chapter (at least those not marked "super hard") before moving to the next. That stops me from cutting corners.

comment by Matthew Barnett (matthew-barnett) · 2019-08-14T17:00:09.856Z · score: 7 (4 votes) · LW · GW

The only flaw I find with this is that if I get stuck on an exercise, I face the following decision: should I look at the answer and move on, or should I keep at it?

If I choose the first option, this makes me feel like I've cheated. I'm not sure what it is about human psychology, but I think that if you've cheated once, you feel less guilty a second time because "I've already done it." So, I start cheating more and more, until soon enough I'm just skipping things and cutting corners again.

If I choose the second option, then I might be stuck for several hours, and this causes me to just abandon the textbook and develop an ugh field around it.

comment by cousin_it · 2019-08-15T08:50:50.784Z · score: 4 (2 votes) · LW · GW

Maybe commit to spending at least N minutes on any exercise before looking up the answer?

comment by Matthew Barnett (matthew-barnett) · 2019-08-15T17:46:34.089Z · score: 1 (1 votes) · LW · GW

Perhaps it says something about the human brain (or just mine) that I did not immediately think of that as a solution.

comment by eigen · 2019-08-13T23:03:09.827Z · score: 4 (3 votes) · LW · GW

I was of the very same mind that you are now. I was somewhat against textbooks, but now textbooks are my only way of learning, not only for building strong knowledge but also for learning fast.

I think there are several important things in changing to textbooks only. First, I have replaced my habit of completionism: instead of forcing myself to finish a particular book in some field, I swap it for another textbook in the same field if I don't feel like it's helping me or if things seem confusing. lukeprog's post is very handy here [LW · GW].

The idea of changing textbooks has helped me a lot; sometimes I just thought I did not understand something, when apparently I only needed another explanation.

Two other important things: I take quite a lot of notes as I'm reading. I believe that if someone is just reading a textbook, that person is doing it wrong and a disservice to themselves. So I fill as much as I can into my working memory, be it three or four paragraphs of content, and I transcribe those myself in my notes. Coupled with this is making my own questions and answers and then putting them into Anki (a spaced-repetition memory program).

This allows me to learn vast amounts of knowledge in little time, while assuring myself that I will remember everything I've learned. I believe textbooks are a key component of this.

comment by Matthew Barnett (matthew-barnett) · 2019-08-16T18:33:10.834Z · score: 17 (6 votes) · LW · GW

I get the feeling that for AI safety, some people believe that it's crucially important to be an expert in a whole bunch of fields of math in order to make any progress. In the past I took this advice and tried to deeply study computability theory, set theory, type theory -- with the hopes of it someday giving me greater insight into AI safety.

Now, I think I was taking a wrong approach. To be fair, I still think being an expert in a whole bunch of fields of math is probably useful, especially if you want very strong abilities to reason about complicated systems. But, my model for the way I frame my learning is much different now.

The main model describing my current perspective is that employing a lazy style of learning is superior for AI safety work. Lazy is meant in the computer science sense of only learning something when it seems like you need to know it in order to understand something important. I will contrast this with the model that one should build a set of solid foundations first before going any further.
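As a loose illustration of that computer-science sense of "lazy" (my own sketch, not from the original post; the function and topic names are made up), the expensive work only happens at the moment something actually demands it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def learn_topic(topic: str) -> str:
    """Stand-in for the expensive part: spending weeks on a prerequisite."""
    print(f"(spending a long time studying {topic}...)")
    return f"working knowledge of {topic}"

def read_paper(paper: str, prerequisites: list[str]) -> None:
    # Lazy model: a prerequisite is learned only when a paper demands it,
    # and the cache means it is never re-learned from scratch.
    knowledge = [learn_topic(t) for t in prerequisites]
    print(f"read {paper} using {knowledge}")

# The foundations-first model would instead be something like:
#   for t in ALL_FOUNDATIONAL_MATH: learn_topic(t)   # before reading anything at all
read_paper("some ML safety paper", ["linear algebra", "probability"])
read_paper("a follow-up paper", ["probability", "information theory"])
```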

Obviously neither model can be absolutely correct in an extreme sense. I don't, as a silly example, think that people who can't do basic arithmetic should go into AI safety before building a foundation in math. And on the other side of the spectrum, I think it would be absurd to think that one should become a world renowned mathematician before reading their first AI safety paper. That said, even though both models are wrong, I think my current preference is for the lazy model rather than the foundation model.

Here are some points in favor of both, informed by my first-person experience.

Points in favor of the foundations model:

  • If you don't have solid foundations in mathematics, you may not even be aware of things that you are missing.
  • Having solid foundations in mathematics will help you to think rigorously about things rather than having a vague non-reductionistic view of AI concepts.
    • Subpoint: MIRI work is motivated by coming up with new mathematics that can describe error-tolerant agents without relying on fuzzy statements like "machine learning relies on heuristics so we need to study heuristics rather than hard math to do alignment."
  • We should try to learn the math that will be useful for AI safety in the future, rather than what is being used for machine learning papers right now. If your view of AI is that it is at least a few decades away, then it's possible that learning the foundations of mathematics will be more robustly useful no matter where the field shifts.

Points in favor of the lazy model:

  • Time is limited and it usually takes several years to become proficient in the foundations of mathematics. This is time that could have been spent reading actual research directly related to AI safety.
  • The lazy model is better for my motivation, since it makes me feel like I am actually learning about what's important, rather than doing homework.
    • Learning foundational math often looks a lot like taking a shotgun and learning everything that seems vaguely relevant to agent foundations. Unless you have a very strong passion for this type of mathematics, it would be strange for this type of learning to be fun.
  • It's not clear that the MIRI approach is correct. I don't have a strong opinion on this, however.
    • Even if the MIRI approach was correct, I don't think it's my comparative advantage to do foundational mathematics.
  • The lazy model will naturally force you to learn the things that are actually relevant, as measured by how much you come in contact with them. By contrast, the foundational model forces you to learn things which might not be relevant at all. Obviously, we won't know what is and isn't relevant beforehand, but I currently err on the side of saying that some things won't be relevant if they don't have a current direct input to machine learning.
  • Even if AI is many decades away, machine learning has been around for a long time, and it seems like the math useful for machine learning hasn't changed much. So, it seems like a safe bet that foundational math won't be relevant for understanding normal machine learning research any time soon.
comment by G Gordon Worley III (gworley) · 2019-08-19T06:35:41.759Z · score: 3 (2 votes) · LW · GW

I happened to be looking at something else and saw this comment thread [LW · GW] from about a month ago that is relevant to your post.

comment by G Gordon Worley III (gworley) · 2019-08-16T23:43:22.866Z · score: 3 (2 votes) · LW · GW

I'm somewhat sympathetic to this. You probably don't need, prior to working on AI safety, to already be familiar with the wide variety of mathematics used in ML, by MIRI, etc. To be specific, I wouldn't be much concerned if you didn't know category theory, more than basic linear algebra, how to solve differential equations, how to integrate probability distributions, or even multivariate calculus prior to starting on AI safety work, but I would be concerned if you didn't have deep experience with writing mathematical proofs beyond high school geometry (although I hear these days they teach geometry differently than I learned it, by re-deriving everything in Elements), say the kind of experience you would get from studying graduate-level algebra, topology, measure theory, combinatorics, etc.

This might also be a bit of motivated reasoning on my part, to reflect Dagon's comments, since I've not gone back to study category theory since I didn't learn it in school and I haven't had specific need for it, but my experience has been that having solid foundations in mathematical reasoning and proof writing is what's most valuable. The rest can, as you say, be learned lazily, since your needs will become apparent and you'll have enough mathematical fluency to find and pursue those fields of mathematics you may discover you need to know.

comment by Dagon · 2019-08-16T21:57:33.226Z · score: 3 (2 votes) · LW · GW

Beware motivated reasoning. There's a large risk that you have noticed that something is harder for you than it seems for others, and instead of taking that as evidence that you should find another avenue to contribute, you convince yourself that you can take the same path but do the hard part later ( and maybe never ).

But you may be on to something real - it's possible that the math approach is flawed, and some less-formal modeling (or other domain of formality) can make good progress. If your goal is to learn and try stuff for your own amusement, pursuing that seems promising. If your goals include getting respect (and/or payment) from current researchers, you're probably stuck doing things their way, at least until you establish yourself.

comment by Matthew Barnett (matthew-barnett) · 2019-08-16T22:17:14.055Z · score: 6 (3 votes) · LW · GW

That's a good point about motivated reasoning. I should distinguish arguments that the lazy approach is better for people in general from arguments that it's better for me. Whether it's better for people more generally depends on the reference class we're talking about. I will assume people who are interested in the foundations of mathematics as a hobby outside of AI safety should take my advice less seriously.

However, I still think that it's not exactly clear that going the foundational route is actually that useful on a per-unit time basis. The model I proposed wasn't as simple as "learn the formal math" versus "think more intuitively." It was specifically a question of whether we should learn the math on an as-needed basis. For that reason, I'm still skeptical that going out and reading textbooks on subjects that are only vaguely related to current machine learning work is valuable for the vast majority of people who want to go into AI safety as quickly as possible.

Sidenote: I think there's a failure mode of not adequately optimizing time, or being insensitive to time constraints. Learning an entire field of math from scratch takes a lot of time, even for the brightest people alive. I'm worried that, "Well, you never know if subject X might be useful" is sometimes used as a fully general counterargument. The question is not, "Might this be useful?" The question is, "Is this the most useful thing I could learn in the next time interval?"

comment by Dagon · 2019-08-16T23:05:20.622Z · score: 2 (1 votes) · LW · GW

A lot depends on your model of progress, and whether you'll be able to predict/recognize what's important to understand, and how deeply one must understand it for the project at hand.

Perhaps you shouldn't frame it as "study early" vs "study late", but "study X" vs "study Y". If you don't go deep on math foundations behind ML and decision theory, what are you going deep on instead? It seems very unlikely for you to have significant research impact without being near-expert in at least some relevant topic.

I don't want to imply that this is the only route to impact, just the only route to impactful research.
You can have significant non-research impact by being good at almost anything - accounting, management, prototype construction, data handling, etc.

comment by TurnTrout · 2019-08-17T00:02:12.592Z · score: 3 (2 votes) · LW · GW

I don't want to imply that this is the only route to impact, just the only route to impactful research.

“Only” seems a little strong, no? To me, the argument seems to be better expressed as: if you want to build on existing work where there’s unlikely to be low-hanging fruit, you should be an expert. But what if there’s a new problem, or one that’s incorrectly framed? Why should we think there isn’t low-hanging conceptual fruit, or exploitable problems to those with moderate experience?

comment by Dagon · 2019-08-17T15:15:06.639Z · score: 2 (1 votes) · LW · GW

I like your phrasing better than mine. "only" is definitely too strong. "most likely path to"?

comment by Matthew Barnett (matthew-barnett) · 2019-08-16T23:37:16.820Z · score: 3 (2 votes) · LW · GW
Perhaps you shouldn't frame it as "study early" vs "study late", but "study X" vs "study Y".

My point was that these are separate questions. If you begin to suspect that understanding ML research requires an understanding of type theory, then you can start learning type theory. Alternatively, you can learn type theory before researching machine learning -- i.e., reading machine learning papers -- in the hopes that it builds useful groundwork.

But what you can't do is learn type theory and read machine learning research papers at the same time. You must make tradeoffs. Each minute you spend learning type theory is a minute you could have spent reading more machine learning research.

The model I was trying to draw was not one where I said, "Don't learn math." I explicitly said it was a model where you learn math as needed.

My point was not intended to be about my abilities. This is a valid concern, but I did not think that was my primary argument. Even conditioning on having outstanding abilities to learn every subject, I still think my argument (weakly) holds.

Note: I also want to say I'm kind of confused, because I suspect there's an implicit assumption that reading machine learning research is inherently easier than learning math. I side with the intuition that math isn't inherently difficult; it just requires memorizing a lot of things and practicing. The same is true for reading ML papers, which makes me confused about why this is being framed as a debate over whether people have certain abilities to learn and do research.

comment by Chris_Leong · 2019-08-16T23:52:38.184Z · score: 2 (1 votes) · LW · GW

I'm trying to find a balance here. I think that there has to be a direct enough relation to a problem that you're trying to solve to prevent the task from expanding to the point where it takes forever, but you also have to be willing to engage in exploration.

comment by Matthew Barnett (matthew-barnett) · 2019-08-15T19:15:55.988Z · score: 16 (6 votes) · LW · GW

I think there are some serious low hanging fruits for making people productive that I haven't seen anyone write about (not that I've looked very hard). Let me just introduce a proof of concept:

Final exams in university are typically about 3 hours long. And many people are able to do multiple finals in a single day, performing well on all of them. During a final exam, I notice that I am substantially more productive than usual. I make sure that every minute counts: I double check everything and think deeply about each problem, making sure not to cut corners unless absolutely required because of time constraints. Also, if I start daydreaming, then I am able to immediately notice that I'm doing so and cut it out. I also believe that this is the experience of most other students in university who care even a little bit about their grade.

Therefore, it seems like we have an example of an activity that can just automatically produce deep work. I can think of a few reasons why final exams would bring out the best of our productivity:

1. We care about our grade in the course, and the few hours in that room are the most impactful to our grade.

2. We are in an environment where distractions are explicitly prohibited, so we can't make excuses to ourselves about why we need to check Facebook or whatever.

3. There is a clock at the front of the room which makes us feel like time is limited. We can't just sit there doing nothing because then time will just slip away.

4. Every problem you do well on benefits you by a little bit, meaning that there's a gradient of success rather than a binary pass or fail (though sometimes it's binary). This means that we care a lot about optimizing every second because we can always do slightly better.

If we wanted to do deep work for some other desired task, all four of these reasons seem like they could be replicable. Here is one idea (related to my own studying), although I'm sure I can come up with a better one if I thought deeply about this for longer:

Set up a room where you are given a limited amount of resources (say, a few academic papers, a computer without an internet connection, and a textbook). Set aside a four hour window where you're not allowed to leave the room except to go to the bathroom (and some person explicitly checks in on you, like twice, to see whether you are doing what you say you are doing). Make it your goal to write a blog post explaining some technical concept. Afterwards, the blog post gets posted to Lesswrong (conditional on it being of at least minimal quality). You set some goal, like it must achieve a reputation of 30 upvotes after 3 days. Commit to paying $1 to a friend for each upvote you score below the target reputation. So, if your blog post is at +15, you must pay $15 to your friend.

I can see a few problems with this design:

1. You are optimizing for upvotes, not clarity or understanding. The two might be correlated but at the very least there's a Goodhart effect.

2. Your "friend" could downvote the post. It can easily be hacked by other people who are interested, and it encourages vote manipulation etc.

Still, I think that I might be on the right track towards something that boosts productivity by a lot.

comment by Raemon · 2019-08-15T21:56:14.007Z · score: 8 (6 votes) · LW · GW

These seem like reasonable things to try, but I think this is making an assumption that you could take a final exam all the time and have it work out fine. I have some sense that people go through phases of "woah I could just force myself to work hard all the time" and then it totally doesn't work that way.

comment by Matthew Barnett (matthew-barnett) · 2019-08-15T23:55:51.849Z · score: 3 (2 votes) · LW · GW

I agree that it is probably too hard to "take a final exam all the time." On the other hand, I feel like I could make a much weaker claim that this is an improvement over a lot of productivity techniques, which often seem to more-or-less be dependent on just having enough willpower to actually learn.

At least in this case, each action you do can be informed directly by whether you actually succeed or fail at the goal (like getting upvotes on a post). Whether or not learning is a good instrumental proxy for getting upvotes in this setting is an open question.

comment by An1lam · 2019-08-16T00:23:35.637Z · score: 5 (3 votes) · LW · GW

From my own experience going through a similar realization and trying to apply it to my own productivity, I found that certain things I tried actually helped me sustainably work more productively but others did not.

What has worked for me based on my experience with exam-like situations is having clear goals and time boxes for work sessions, e.g. the blog post example you described. What hasn't worked for me is trying to impose aggressively short deadlines on myself all the time to incentivize myself to focus more intensely. Personally, the level of focus I have during exams is driven by an unsustainable level of stress, which, if applied continuously, would probably lead to burnout and/or procrastination binging. That said, occasionally artificially imposing deadlines has helped me engage exam-style focus when I need to do something that might otherwise be boring because it mostly involves executing known strategies rather than doing more open, exploratory thinking. For hard thinking though, I've actually found that giving myself conservatively long time boxes helps me focus better by allowing me to relax and take my time. I saw you mentioned struggling with reading textbooks above, and while I still struggle trying to read them too, I have found that not expecting miraculous progress helps me get less frustrated when I read them.

Related to all this, you used the term "deep work" a few times, so you may already be familiar with Cal Newport's work. But if you're not, I recommend a few of his relevant posts (1, 2) describing how he produces work artifacts that act as a forcing function for learning the right stuff and staying focused.

comment by Viliam · 2019-08-15T21:51:39.546Z · score: 4 (2 votes) · LW · GW

This seems similar to "pomodoro", except instead of using your willpower to keep working during the time period, you set up the environment in a way that doesn't allow you to do anything else.

The only part that feels wrong is the commitment part. You should commit to work, not to achieving success, because the latter adds problems (it's not completely under your control, it may discourage experimenting, a punishment creates aversion to the entire method, etc.).

comment by Matthew Barnett (matthew-barnett) · 2019-08-15T21:57:25.547Z · score: 3 (2 votes) · LW · GW

Yes, the difference is that you are creating an external environment which rewards you for success and punishes you for failure. This is similar to taking a final exam, which is my inspiration.

The problem with committing to work rather than success is that you can always just rationalize something as "Oh I worked hard" or "I put in my best effort." However, just as with a final exam, the only thing that will matter in the end is if you actually do what it takes to get the high score. This incentivizes good consequentialist thinking and disincentivizes rationalization.

I agree there are things out of your control, but the same is true with final exams. For instance, the test-maker could have put something on the test that you didn't study much for. This encourages people to put extra effort into their assigned task to ensure robustness to outside forces.

comment by An1lam · 2019-08-16T00:53:19.152Z · score: 3 (2 votes) · LW · GW

I personally try to balance keeping myself honest by having some goal outside but also trusting myself enough to know when I should deprioritize the original goal in favor of something else.

For example, let's say I set a goal to write a blog post about a topic I'm learning in 4 hours, and half-way through I realize I don't understand one of the key underlying concepts related to the thing I intended to write about. During an actual test, the right thing to do would be to do my best given what I know already and finish as many questions as possible. But I'd argue that in the blog post case, I very well may be better off saying, "OK I'm going to go learn about this other thing until I understand it, even if I don't end up finishing the post I wanted to write."

The pithy way to say this is that tests are basically pure Goodhart, and it's dangerous to turn every real life task into a game of maximizing legible metrics.

comment by Matthew Barnett (matthew-barnett) · 2019-08-16T01:05:11.256Z · score: 3 (4 votes) · LW · GW
For example, let's say I set a goal to write a blog post about a topic I'm learning in 4 hours, and half-way through I realize I don't understand one of the key underlying concepts related to the thing I intended to write about.

Interesting, this exact same thing just happened to me a few hours ago. I was testing my technique by writing a post on variational autoencoders. Halfway through I was very confused because I was trying to contrast them to GANs but didn't have enough material or knowledge to know the advantages of either.

During an actual test, the right thing to do would be to do my best given what I know already and finish as many questions as possible. But I'd argue that in the blog post case, I very well may be better off saying, "OK I'm going to go learn about this other thing until I understand it, even if I don't end up finishing the post I wanted to write."

I agree that's probably true. However, this creates a bad incentive where, at least in my case, I will slowly start making myself lazier during the testing phase because I know I can always just "give up" and learn the required concept afterwards.

At least in the case I described above I just moved onto a different topic, because I was kind of getting sick of variational autoencoders. However, I was able to do this because I didn't have any external constraints, unlike the method I described in the parent comment.

The pithy way to say this is that tests are basically pure Goodhart, and it's dangerous to turn every real life task into a game of maximizing legible metrics.

That's true, although perhaps one could devise a sufficiently complex test such that it matches perfectly with what we really want... well, I'm not saying that's a solved problem in any sense.

comment by Dagon · 2019-08-16T18:40:06.480Z · score: 2 (3 votes) · LW · GW

I think you might be Goodharting a bit (mistaking the measure for the goal) when you claim that final exam performance is productive. The actual product is the studying and prep for the exam, not the exam itself. The time limits and isolated environment are helpful for proctoring (they ensure the output is limited enough to be able to grade, and ensure that no outside sources are being used), not for productivity.

That's not to say that these elements (isolation, concentration, time awareness, expectation of a grading/scoring rubric) aren't important, just that they're not necessarily sufficient nor directly convertible from an exam setting.

comment by Matthew Barnett (matthew-barnett) · 2019-08-09T20:42:33.571Z · score: 15 (6 votes) · LW · GW

Related to: The Lottery of Fascinations, other posts probably

When you are older, you will learn that the first and foremost thing which any ordinary person does is nothing.

I will occasionally come across someone who I consider to be extraordinarily productive, and yet when I ask what they did on a particular day they will respond, "Oh I basically did nothing." This is particularly frustrating. If they did nothing, then what was all that work that I saw!

I think this comes down to what we mean by doing nothing. There's a literal meaning to doing nothing. It could mean sitting in a chair, staring blankly at a wall, without moving a muscle.

More practically, what people mean by doing nothing is that they are doing something unrelated to their stated task, such as checking Facebook, chatting with friends, browsing Reddit etc.

When productive people say that they are "doing nothing" it could just be that they are modest, and don't want to signal how productive they really are. On the other hand, I think that there is a real sense in which these productive people truly believe that they are doing nothing. Even if their "doing nothing" was your "doing work", to them it's still a "doing nothing" because they weren't doing the thing they explicitly set out to do.

I think, therefore, there is something of a "do nothing" differential, which helps explain why some people are more productive than others. For some people who are less productive than me, their "doing nothing" might just be playing video games. For me, my "doing nothing" is watching people debate the headline of a Reddit news article (and I'm not proud of this).

For those more productive than me, perhaps their "doing nothing" is reading blog posts that are tangentially related to what they are working on. For people more productive still, it might be obsessively re-reading articles directly applicable to their work. And for Terence Tao, his "doing nothing" might be reading math papers in fields other than the one he is supposed to be currently working in.

comment by Matthew Barnett (matthew-barnett) · 2019-09-27T03:04:20.583Z · score: 10 (6 votes) · LW · GW

The case for studying mesa optimization

Early elucidations of the alignment problem focused heavily on value specification. That is, they focused on the idea that given a powerful optimizer, we need some way of specifying our values so that the powerful optimizer can create good outcomes.

Since then, researchers have identified a number of additional problems besides value specification. One of the biggest problems is that in a certain sense, we don't even know how to optimize for anything, much less a perfect specification of human values.

Let's assume we could get a utility function containing everything humanity cares about. How would we go about optimizing this utility function?

The default mode of thinking about AI right now is to train a deep learning model that performs well on some training set. But even if we were able to create a training environment for our model that reflected the world very well, and rewarded it each time it did something good, exactly in proportion to how good it really was in our perfect utility function... this still would not be guaranteed to yield a positive artificial intelligence.

This problem is not a superficial one either -- it is intrinsic to the way that machine learning is currently accomplished. To be more specific, the way we constructed our AI was by searching over some broad class of models M, and selecting those models which tended to do well on the training set. Crucially, we know almost nothing about the model which eventually gets selected. The most we can say is that our AI is some model m ∈ M that performs well on the training set, but since M was such a broad class, this provides us very little information about what the model is actually doing.
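To make this concrete, here is a toy sketch (mine, not from the original comment; the model class, data, and numbers are purely illustrative) of selection over a broad model class: we keep whichever candidate scores best on the training data, and that criterion by itself tells us essentially nothing about how the selected model behaves elsewhere.

```python
# Toy illustration: "training" as selection from a broad model class M, judged only
# by training performance. The selection criterion never looks inside the model.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 1))
y_train = np.sin(3 * X_train).ravel()          # what we want fit on the training set

def random_model():
    """Sample a random small neural net: a stand-in for drawing from a broad class M."""
    W1, b1 = rng.normal(size=(1, 16)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(16, 1)), rng.normal(size=1)
    return lambda x: np.tanh(x @ W1 + b1) @ W2 + b2

candidates = [random_model() for _ in range(5000)]
losses = [np.mean((m(X_train).ravel() - y_train) ** 2) for m in candidates]
best = candidates[int(np.argmin(losses))]      # m ∈ M, selected purely by training loss

# Off the training distribution, the selected model is unconstrained -- the selection
# process alone tells us nothing about its behavior there.
X_far = np.array([[5.0], [10.0]])
print("best training loss:", min(losses))
print("predictions far from the training data:", best(X_far).ravel())
```

Gradient descent over a network's weights is a far more efficient search than this random sampling, but the epistemic situation is similar: what we verify is training performance, not the internal computation.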

This is similar to the mistake evolution made when designing us. Unlike evolution, we can at least put in some hand-crafted constraints, like a regularization penalty, in order to guide our AI into safe regions of M. We can also open up our models and see what's inside, and in principle simulate every aspect of their internal operations.

But this still isn't looking very good, because we barely know anything about what types of computation are safe. What would we even look for? To make matters worse, our current methods for ML transparency are abysmally ill-equipped for the task of telling us what is going on inside.

The default outcome of all of this is that eventually, as M grows larger with compute becoming cheaper and budgets getting bigger, gradient descent is bound to hit powerful optimizers that do not share our values.

comment by Matthew Barnett (matthew-barnett) · 2019-09-07T22:23:10.828Z · score: 10 (5 votes) · LW · GW

When I look back at things I wrote a while ago, say months back, or years ago, I tend to cringe at how naive many of my views were. Faced with this inevitable progression, and the virtual certainty that I will continue to cringe at views I now hold, it is tempting to disconnect from social media and the internet and only comment when I am confident that something will look good in the future.

At the same time, I don't really think this is a good attitude for several reasons:

  • Writing things up forces my thoughts to be more explicit, improving my ability to think about things
  • Allowing my ideas to be critiqued allows for a quicker transition towards correct beliefs
  • I tend to learn a lot when writing things
comment by Zack_M_Davis · 2019-09-08T20:16:19.735Z · score: 10 (5 votes) · LW · GW

People who don't understand the concept of "This person may have changed their mind in the intervening years", aren't worth impressing. I can imagine scenarios where your economic and social circumstances are so precarious that the incentives [LW · GW] leave you with no choice but to let your speech and your thought be ruled by unthinking mob social-punishment mechanisms. But you should at least check whether you actually live in that world before surrendering.

comment by Viliam · 2019-09-08T19:41:46.272Z · score: 4 (2 votes) · LW · GW

In the real world, people usually forget what you said 10 years ago. And even if they don't, saying "Matthew said this 10 years ago" doesn't have the same power as you saying the thing now.

But the internet remembers forever, and your words from 10 years ago can be retweeted and become alive as if you said them now.

A possible solution would be to use a nickname... and whenever you notice you have grown so much that you no longer identify with the words of your nickname, pick a new one. Also make new accounts on social networks, and re-friend only those people you still consider worthy. Well, in this case the abrupt change would be the unnatural thing, but perhaps you could still keep using your previous account for some time, mostly passively. As your real-life new self would have different opinions, different hobbies, and different friends than your self from 10 years ago, so would your online self.

Unfortunately, this solution goes against the terms of service of almost all major websites. On the advertisement-driven web, advertisers want to know your history, and they are the real customers... you are only a product.

comment by Matthew Barnett (matthew-barnett) · 2019-09-13T03:23:35.477Z · score: 9 (7 votes) · LW · GW

I agree with Wei Dai that we should use our real names [LW · GW] for online forums, including Lesswrong. I want to briefly list some benefits of using my real name,

  • It means that people can easily recognize me across websites, for example from Facebook and Lesswrong simultaneously.
  • Over time my real name has been stable whereas my usernames have changed quite a bit over the years. For some very old accounts, such as those I created 10 years ago, this means that I can't remember my account name. Using my real name would have averted this situation.
  • It motivates me to put more effort into my posts, since I don't have any disinhibition from being anonymous.
  • It often looks more formal than a silly username, and that might make people take my posts more seriously than they otherwise would have.
  • Similar to what Wei Dai said, it makes it easier for people to recognize me in person, since they don't have to memorize a mapping from usernames to real names in their heads.

That said, there are some significant downsides, and I sympathize with people who don't want to use their real names.

  • It makes it much easier for people to dox you. There are some very bad ways that this can manifest.
  • If you say something stupid, your reputation is now directly on the line. Some people change accounts every few years, as they don't want to be associated with the stupid person they were a few years ago.
  • Sometimes disinhibition from being anonymous is a good way to spur creativity. I know that I was a lot less careful in my previous non-real-name accounts, and my writing style was different -- perhaps in a way that made my writing better.
  • Your real name might sound boring, whereas your online username can sound awesome.
comment by Viliam · 2019-09-13T22:02:16.848Z · score: 5 (4 votes) · LW · GW

These days my reason for not using full name is mostly this: I want to keep my professional and private lives separate. And I have to use my real name at job, therefore I don't use it online.

What I probably should have done many years ago, is make up a new, plausibly-sounding full name (perhaps keep my first name and just make up a new surname?), and use it consistently online. Maybe it's still not too late; I just don't have any surname ideas that feel right.

comment by jp · 2019-09-13T16:14:15.462Z · score: 4 (3 votes) · LW · GW

Sometimes you need someone to give the naive view, but doing so hurts the reputation of the person stating it.

For example suppose X is the naive view and Y is a more sophisticated view of the same subject. For sake of argument suppose X is correct and contradicts Y.

Given 6 people, maybe 1 of them starts off believing Y, 2 people are uncertain, and 3 people think X. In the world where people have their usernames attached, the 3 people who believe X now have a coordination problem. They each face a local disincentive to state the case for X, although they definitely want _someone_ to say it. The equilibrium here is that no one makes the case for X and the two uncertain people get persuaded toward Y.

However if someone is anonymous and doesn't care that much about their reputation, they may just go ahead and state the case for X, providing much better information to the undecided people.

This makes me happy there are some smart people posting under pseudonyms. I claim it is a positive factor for the epistemics of LessWrong.

comment by JustMaier · 2019-09-13T06:05:27.318Z · score: 2 (2 votes) · LW · GW

Another issue I'd add is that real names are potentially too generic. Basically, if everyone used their real name, how many John Smiths would there be? Would it be confusing?

The rigidity around 1 username/alias per person on most platforms forces people to adopt mostly memorable names that should distinguish them from the crowd.


comment by Matthew Barnett (matthew-barnett) · 2019-08-09T05:17:47.907Z · score: 8 (6 votes) · LW · GW

I've often wished that conversation norms shifted towards making things more consensual. The problem is that when two people are talking, it's often the case that one party brings up a new topic without realizing that the other party didn't want to talk about that, or doesn't want to hear it.

Let me provide an example: Person A and person B are having a conversation about the exam that they just took. Person A bombed the exam, so they are pretty bummed. Person B, however, did great and wants to tell everyone. So person B comes up to person A and asks "How did you do?", fully expecting to brag the second person A answers. On its own, this question is benign, and this kind of exchange happens frequently without anyone objecting. On the other hand, if person B had asked, "Do you want to talk about the exam?", person A might have said "No."

This problem can be alleviated by simply asking people whether they want to talk about certain things. For sensitive topics, like politics and religion, this is already the norm in some places. I think it can be taken further. I suggest the following boundaries, and could probably think of more if pressed:

  • Ask someone before sharing something that puts you in a positive light. Make it explicit that you are bragging. For example, ask "Can I brag about something?" before doing so.
  • Ask someone before bringing up a topic where you know there's high variance in difficulty and success. This applies to a lot of things: school, jobs, marathon running times.
comment by mr-hire · 2019-08-09T15:56:34.538Z · score: 5 (3 votes) · LW · GW

Have you read the posts on ask, tell, and guess culture? They feel highly related to this idea.

comment by Raemon · 2019-08-09T18:38:53.123Z · score: 5 (2 votes) · LW · GW

Malcolm Ocean eventually reframed Tell Culture as Reveal Culture, which I found to be an improvement.

comment by Matthew Barnett (matthew-barnett) · 2019-08-09T17:42:35.265Z · score: 1 (3 votes) · LW · GW

Hmm, I saw those a while ago and never read them. I'll check them out.

comment by Dagon · 2019-08-09T20:22:39.442Z · score: 2 (1 votes) · LW · GW

The problem is, if a conversational topic can be hurtful, the meta-topic can be too. "do you want to talk about the test" could be as bad or worse than talking about the test, if it's taken as a reference to a judgement-worthy sensitivity to the topic. And "Can I ask you if you want to talk about whether you want to talk about the test" is just silly.

Mr-hire's comment is spot-on - there are variant cultural expectations that may apply, and you can't really unilaterally decide another norm is better (though you can have opinions and default stances).

The only way through is to be somewhat aware of the conversational signals about what topics are welcome and what should be deferred until another time. You don't need prior agreement if you can take the hint when an unusually-brief non-response is given to your conversational bid. If you're routinely missing hints (or seeing hints that aren't), and the more direct discussions are ALSO uncomfortable for them or you, then you'll probably have to give up on that level of connection with that person.


comment by Matthew Barnett (matthew-barnett) · 2019-08-09T21:15:17.350Z · score: 1 (7 votes) · LW · GW
"do you want to talk about the test" could be as bad or worse than talking about the test, if it's taken as a reference to a judgement-worthy sensitivity to the topic

I agree. Although if you are known for asking those types of questions maybe people will learn to understand you never mean it as a judgement.

And "Can I ask you if you want to talk about whether you want to talk about the test" is just silly.

True, although I'll usually take silly over judgement any day. :)

comment by Matthew Barnett (matthew-barnett) · 2019-09-20T23:56:23.832Z · score: 7 (4 votes) · LW · GW

Signal boosting a Lesswrong-adjacent author from the late 1800s and early 1900s

Via a friend, I recently discovered the zoologist, animal rights advocate, and author J. Howard Moore. His attitudes towards the world reflect contemporary attitudes within effective altruism about science, the place of humanity in nature, animal welfare, and the future. Here are some quotes which readers may enjoy,

Oh, the hope of the centuries and the centuries and centuries to come! It seems sometimes that I can almost see the shining spires of that Celestial Civilisation that man is to build in the ages to come on this earth—that Civilisation that will jewel the land masses of this planet in that sublime time when Science has wrought the miracles of a million years, and Man, no longer the savage he now is, breathes Justice and Brotherhood to every being that feels.

But we are a part of Nature, we human beings, just as truly a part of the universe of things as the insect or the sea. And are we not as much entitled to be considered in the selection of a model as the part 'red in tooth and claw'? At the feet of the tiger is a good place to study the dentition of the cat family, but it is a poor place to learn ethics.

Nature is the universe, including ourselves. And are we not all the time tinkering at the universe, especially the garden patch that is next to us—the earth? Every time we dig a ditch or plant a field, dam a river or build a town, form a government or gut a mountain, slay a forest or form a new resolution, or do anything else almost, do we not change and reform Nature, make it over again and make it more acceptable than it was before? Have we not been working hard for thousands of years, and do our poor hearts not almost faint sometimes when we think how far, far away the millennium still is after all our efforts, and how long our little graves will have been forgotten when that blessed time gets here?

The defect in this argument is that it assumes that the basis of ethics is life, whereas ethics is concerned, not with life, but with consciousness. The question ever asked by ethics is not, Does the thing live? but, Does it feel? It is impossible to do right and wrong to that which is incapable of sentient experience. Ethics arises with consciousness and is coextensive with it. We have no ethical relation to the clod, the molecule, or the scale sloughed off from our skin on the back of our hand, because the clod, the molecule, and the scale have no feeling, no soul, no anything rendering them capable of being affected by us [...] The fact that a thing is an organism, that it has organisation, has in itself no more ethical significance than the fact that it has symmetry, or redness, or weight.

In the ideal universe the life and happiness of no being are contingent on the suffering and death of any other, and the fact that in this world of ours life and happiness have been and are to-day so commonly maintained by the infliction of misery and death by some beings on others is the most painful fact that ever entered an enlightened mind.
comment by Matthew Barnett (matthew-barnett) · 2019-09-06T17:49:16.956Z · score: 7 (4 votes) · LW · GW

I keep wondering why many AI alignment researchers aren't using the alignmentforum. I have met quite a few people who are working on alignment who I've never encountered online. I can think of a few reasons why this might be,

  • People find it easier to iterate on their work without having to write things up
  • People don't want to share their work, potentially because they think a private-by-default policy is better.
  • It is too cumbersome to interact with other researchers through the internet; in-person interactions are easier.
  • They just haven't even considered, from a first-person perspective, whether it would be worth it.
comment by Matthew Barnett (matthew-barnett) · 2019-08-12T17:18:04.644Z · score: 6 (5 votes) · LW · GW

Forgive me for cliche scientism, but I recently realized that I can't think of any major philosophical developments in the last two centuries that occurred within academic philosophy. If I were to try to list major philosophical achievements since 1819, these would likely appear on my list, but none of them were from those trained in philosophy:

  • A convincing, simple explanation for the apparent design we find in the living world (Darwin and Wallace).
  • The unification of time and space into one fabric (Einstein)
  • A solid foundation for axiomatic mathematics (Zermelo and Fraenkel).
  • A model of computation, and a plausible framework for explaining mental activity (Turing and Church).

By contrast, if we go back to previous centuries, I don't have much of an issue citing philosophical achievements from philosophers:

  • The identification of the pain-pleasure axis as the primary source of value (Bentham).
  • Advanced notions of causality, reductionism, scientific skepticism (Hume)
  • Extension of moral sympathies to those in the animal kingdom (too many philosophers to name)
  • An emphasis on the value of wisdom and learned debate (Socrates, and others)

Of course, this is probably caused by my bias towards Lesswrong-adjacent philosophy. If I had to pick philosophers who have made major contributions, these people would be on my shortlist:

John Stuart Mill, Karl Marx, Thomas Nagel, Derek Parfit, Bertrand Russell, Arthur Schopenhauer.

comment by jessicata (jessica.liu.taylor) · 2019-08-12T22:07:59.867Z · score: 10 (6 votes) · LW · GW

I would name the following:

comment by Adam Scholl (adam_scholl) · 2019-08-12T22:53:47.579Z · score: 8 (5 votes) · LW · GW

My impression is that academic philosophy has historically produced a lot of good deconfusion work in metaethics (e.g. this and this), as well as some really neat negative results like the logical empiricists' failed attempt to construct a language in which verbal propositions could be cached out/analyzed in terms of logic or set theory in a way similar to how one can cache out/analyze Python in terms of machine code. In recent times there's been a lot of (in my opinion) great academic philosophy done at FHI.

comment by Matthew Barnett (matthew-barnett) · 2019-08-12T22:14:51.199Z · score: 3 (2 votes) · LW · GW

Those are all pretty good. :)

comment by TAG · 2019-08-13T12:18:39.113Z · score: 8 (6 votes) · LW · GW

Wow! You left out the whole of analytical philosophy!

comment by Matthew Barnett (matthew-barnett) · 2019-08-13T18:03:51.745Z · score: 1 (1 votes) · LW · GW

I'm not saying that I'm proud of this fact. It is mostly that I'm ignorant of it. :)

comment by jacobjacob · 2019-08-13T18:01:57.861Z · score: 3 (2 votes) · LW · GW
  • The development of modern formal logic (predicate logic, modal logic, the equivalence of higher-order logics and set-theory, etc.), which is of course deeply related to Zermelo, Fraenkel, Turing and Church, but which involved philosophers like Quine, Putnam, Russell, Kripke, Lewis and others.
  • The model of scientific progress as proceeding via pre-paradigmatic, paradigmatic, and revolutionary stages (from Kuhn, who wrote as a philosopher, though trained as a physicist)
comment by habryka (habryka4) · 2019-08-12T20:11:25.661Z · score: 2 (1 votes) · LW · GW
The identification of the pain-pleasure axis as the primary source of value (Bentham).

I will mark that I think this is wrong, and if anything I would describe it as a philosophical dead-end. Complexity of value and all of that. So listing it as a philosophical achievement seems backwards to me.

comment by Matthew Barnett (matthew-barnett) · 2019-08-12T20:56:27.662Z · score: 1 (1 votes) · LW · GW

I might add that I also consider the development of ethical anti-realism to be another, perhaps more insightful, achievement. But this development is, from what I understand, usually attributed to Hume.

Depending on what you mean by "pleasure" and "pain", it is possible that you merely have a simple conception of the two words, one which makes this identification incompatible with complexity of value. A more robust form of the idea was provided by John Stuart Mill, who identified that some forms of pleasure can be more valuable than others (which is honestly quite similar to what we might find in the fun theory sequence...).

In its modern formulation, I would say that Bentham's contribution was identifying conscious states as the primary theater in which value can exist. I can hardly disagree, as I struggle to imagine things in this world which could possibly have value outside of conscious experience. Still, I think there are perhaps some, which is why I conceded by using the words "primary source of value" rather than "sole source of value."

To the extent that complexity of value disagrees with what I have written above, I incline to disagree with complexity of value :).

comment by Raemon · 2019-08-12T21:04:25.278Z · score: 3 (1 votes) · LW · GW

(I think you and habryka in fact disagree pretty deeply here)

comment by Matthew Barnett (matthew-barnett) · 2019-08-12T21:07:14.340Z · score: 1 (1 votes) · LW · GW

Then I will assert that I would in fact appreciate seeing the reasons for disagreement, even as the case may be that it comes down to axiomatic intuitions.

comment by Matthew Barnett (matthew-barnett) · 2019-09-12T02:36:16.911Z · score: 5 (3 votes) · LW · GW

I've heard a surprising number of people criticize parenting recently using some pretty harsh labels. I've seen people call it a form of "Stockholm syndrome", a breach of liberty, morally unnecessary, etc. This seems kind of weird to me, because it doesn't really match my experience as a child at all.

I do agree that parents can sometimes violate liberty, and so I'd prefer a world where children could break free from their parents without penalties. But I also think that most children genuinely love their parents and so wouldn't want to do so. I think if you deride this as merely "Stockholm syndrome" then you are unfairly undervaluing the genuine nature of the relationship in most cases, and I disagree with you here.

As an individual, I would totally let an intent aligned AGI manage most of my life, and give me suggestions. Of course, if I disagreed with a course of action it suggested, I would want it to give a non-manipulative argument to persuade me that it knows best, rather than simply forcing me into the alternative. In other words, I'd want some sort of weak paternalism on the part of an AGI.

So, as a person who wants this type of thing, I can really see the merits of having parents who care for children. In some ways they are intent-aligned general intelligences. Now, some parents are much more strict, freedom-restricting, and less transparent than what we would want in a full-blown guardian superintelligence -- but this just seems like an argument that there exist bad parents, not that this type of paternalism is bad.

comment by cousin_it · 2019-09-12T06:50:35.956Z · score: 6 (4 votes) · LW · GW

Yeah, that's one argument for tradition: it's simply not the pit of misery that its detractors claim it to be. But for parenting in particular, I think I can give an even stronger argument. Children aren't little seeds of goodness that just need to be set free. They are more like little seeds of anything. If you won't shape their values, there's no shortage of other forces in the world that would love to shape your children's values, without having their interests at heart.

comment by Matthew Barnett (matthew-barnett) · 2019-09-12T08:07:28.213Z · score: 2 (2 votes) · LW · GW
Children aren't little seeds of goodness that just need to be set free. They are more like little seeds of anything

Toddlers, yes. If we're talking about people over the age of say, 8, then it becomes less true. By the time they are a teen, it becomes pretty false. And yet people still say that legal separation at 18 is good.

If you are merely making the argument that we should limit their exposure to things that could influence them in harmful directions, then I'd argue that this never stops being a powerful force, including for people well into adulthood and in old age.

comment by cousin_it · 2019-09-12T09:39:35.570Z · score: 7 (3 votes) · LW · GW

Huh? Most 8 year olds can't even make themselves study instead of playing Fortnite, and certainly don't understand the issues with unplanned pregnancies. I'd say 16-18 is about the right age where people can start relying on internal structure instead of external. Many take even longer, and need to join the army or something.

comment by Matthew Barnett (matthew-barnett) · 2019-08-16T20:17:52.934Z · score: 5 (4 votes) · LW · GW

Sometimes people will propose ideas, and then those ideas are met immediately after with harsh criticism. A very common tendency for humans is to defend our ideas and work against these criticisms, which often gets us into a state that people refer to as "defensive."

According to common wisdom, being in a defensive state is a bad thing. The rationale here is that we shouldn't get too attached to our own ideas. If we do get attached, we become liable to become crackpots [LW · GW] who can't give an idea up because doing so would make them look bad. Therefore, the common wisdom advocates treating ideas as being handed to us on a tablet from the clouds rather than as a product of our brain's thinking habits. Taking this advice allows us to detach ourselves from our ideas so that we don't confuse criticism with insults.

However, I think the exact opposite failure mode is not pointed out and guarded against often enough: being too willing to abandon beliefs based on surface-level counterarguments. To alleviate this, I suggest we shouldn't be so ready to give up our ideas in the face of criticism.

This might sound irrational -- why should we get attached to our beliefs? I'm certainly not advocating that we should actually associate criticism with insults to our character or intelligence. Instead, my argument is that the process of defending our ideas against criticism, even from a defensive posture, generates a productive adversarial structure.

Consider two people. Person A desperately wants to believe proposition X, and person B desperately wants to believe not-X. If B comes up to A and says, "Your belief in X is unfounded. Here are the reasons..." then person A can either admit defeat or fall into defensive mode. If A admits defeat, they might indeed get closer to the truth. On the other hand, if A gets into defensive mode, they might also get closer to the truth in the process of desperately searching for evidence of X.

My thesis is this: the human brain is very good at selective searching for evidence. In particular, given some belief that we want to hold onto, we will go to great lengths to justify it, searching for evidence that we otherwise would not have searched for if we were just detached from the debate. It's sort of like the difference between a debate between two people who are assigned their roles by a coin toss, and a debate between people who have spent their entire lives justifying why they are on one side. The first debate is an interesting spectacle, but I expect the second debate to contain much deeper theoretical insight.

comment by Wei_Dai · 2019-08-16T21:33:15.552Z · score: 6 (3 votes) · LW · GW

A couple of relevant posts/threads that come to mind:

comment by Viliam · 2019-08-17T16:48:23.620Z · score: 5 (3 votes) · LW · GW

Just as an idea can be wrong, so can criticism. It is bad to give up an idea just because...

  • someone rounded it up to the nearest cliche, and provided the standard cached answer;
  • someone mentioned a scientific article (that failed to replicate) that disproves your idea (or something different, containing the same keywords);
  • someone got angry because it seems to oppose their political beliefs;
  • etc.

My "favorite" version of wrong criticism is when someone experimentally disproves a strawman version of your hypothesis. Suppose your hypothesis is "eating vegetables is good for health", and someone makes an experiment where people are only allowed to eat carrots, nothing more. After a few months they get sick, and the author of the experiment publishes a study saying "science proves that vegetables are actually harmful for your health". (Suppose, optimistically, that the author used sufficiently large N, and did the statistics properly, so there is nothing to attack from the methodological angle.) From now on, whenever you mention that perhaps a diet containing more vegetables could benefit someone, someone will send you a link to the article that "debunks the myth" and will consider the debate closed.

So, when I hear about research proving that parenting / education / exercise / whatever doesn't cause this or that, my first reaction is to wonder how specifically did the researchers operationalize such a general word, and whether the thing they studied even resembles my case.

(And yes, I am aware that the same strategy could be used to refute any inconvenient statement, such as "astrology doesn't work" -- "well, I do astrology a bit differently than the people studied in that experiment, therefore the conclusion doesn't apply to me".)

comment by Matthew Barnett (matthew-barnett) · 2019-08-30T22:00:17.649Z · score: 4 (2 votes) · LW · GW

I think that human-level capabilities in natural language processing (something like GPT-2, but much more powerful) are likely to occur in some software system within 20 years.

Since human-level natural language processing is a very rich real-world task, I would consider a system with those capabilities to be adequately described as a general intelligence, though it would likely not be very dangerous due to its lack of world-optimization capabilities.

This belief of mine is based on a few heuristics. Below I have collected a few claims which I consider to be relatively conservative, and which collectively combine to weakly imply my thesis. Since this is a short-form post I will not provide very specific lines of evidence. Still, I think that each of my claims could be substantially expanded upon and/or steelmanned by adding detail from historical trends and evidence from current ML research.

Claim 1: Current techniques, given enough compute, are sufficient to perform at a par-human level on natural language processing tasks. This is in some sense trivially true, since sufficiently complicated RNNs are Turing complete. In a more practical sense, I think that there is enough evidence that current techniques are sufficient to perform rudimentary:

  • Summarization of text
  • Auto-completion of paragraphs
  • Q&A
  • Natural conversation

Given more compute and more data, I don't see why there would be a fundamental stumbling block preventing current ML models from scaling to human level on the above tasks. Therefore, I think that human-level natural language processing systems could be created today with enough funding.

Claim 2: Given historical data and assumptions about future progress, it is quite likely that the cost of training ML systems will continue to go down by significant amounts over the next decades (more specifically, by an order of magnitude). I don't have much more to add to this other than that I have personally followed hardware trends on websites like videocardbenchmark.net, and my guess is that creating neural-network-specific hardware will continue this trend in ML.

Claim 3: Creating a system with human level capabilities in natural language processing will require a modest amount of funding, relative to the amount of money large corporations and governments have at their disposal. To be more specific, I estimate that it would cost less than five billion dollars in hardware costs in 2019 inflation adjusted dollars, and perhaps even less than one billion dollars. Here's a rough sketch for an argument for this proposition:

  • The cost of replicating GPT-2 was $50k. This is likely to be a large overestimate, given that the post noted that intrinsic costs are much lower.
  • Given claim 2, this cost can be predicted to go down to about $5k within 20 years.
  • While the cost for ML systems does not scale linearly in the number of parameters, the parallelizability of architectures like the Transformer allow for near-linear scaling. This is my impression from reading posts like this one.
  • Given the above three statements, running a Transformer with the same number of parameters as the high estimate for the number of synapses in a human brain would naively cost about one billion dollars. (A back-of-the-envelope version of this arithmetic is sketched below.)
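
The sketch below just multiplies out the figures above. The GPT-2 parameter count and the synapse estimate are assumptions I am supplying for illustration (roughly 1.5 billion parameters for GPT-2, and 300 trillion synapses as the high-end estimate, matching the "300 trillion parameter Transformer" mentioned later in this thread), so treat the output as a rough sanity check rather than a precise forecast.

```python
# Back-of-the-envelope cost estimate; all figures are rough assumptions.
gpt2_replication_cost_2019 = 50_000   # dollars, from the cited replication estimate
cost_reduction_20_years = 10          # Claim 2: roughly an order of magnitude cheaper
future_gpt2_cost = gpt2_replication_cost_2019 / cost_reduction_20_years  # ~$5k

gpt2_params = 1.5e9                   # GPT-2's parameter count (assumed here)
brain_synapses_high = 3e14            # high-end human synapse estimate (assumed here)

scale_factor = brain_synapses_high / gpt2_params   # ~200,000x more parameters
naive_cost = future_gpt2_cost * scale_factor       # assumes near-linear cost scaling
print(f"~${naive_cost:,.0f}")                      # ~$1,000,000,000
```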

Claim 4: There is sufficient economic incentive such that producing a human-level system in the domain of natural language is worth a multi-billion dollar investment. To me this seems quite plausible, given just how many jobs require writing papers, memos, or summarizing text. Compare this to a space-race type scenario where there becomes enough public hype surrounding AI such that governments are throwing around one hundred fifty billion dollars, which is what they did for the ISS. And relative to space, AI at least has very direct real world benefits!


I understand there's a lot required to justify these claims, and I haven't done much work to justify them. But I'm not presently interested in justifying these claims to a bunch of judges intent on finding flaws. My main concern is that they all seem likely to me, and there's also a lot of current work from companies competing to be first on the natural language benchmarks. It just adds up to me.

Am I missing something? If not, then this argument at least pushes back on claims that there is a negligible chance of general intelligence emerging within the next few decades.

comment by Gurkenglas · 2019-08-30T22:19:03.208Z · score: 1 (1 votes) · LW · GW

I expect that human-level language processing is enough to construct human-level programming and mathematical research ability. Aka, complete a research diary the way a human would, by matching with patterns it has previously seen, just as human mathematicians do. That should be capability enough to go as foom as possible.

comment by Matthew Barnett (matthew-barnett) · 2019-08-30T22:23:22.821Z · score: 1 (1 votes) · LW · GW

If AI is limited by hardware rather than insight, I find it unlikely that a 300 trillion parameter Transformer trained to reproduce math/CS papers would be able to "go foom." In other words, while I agree that the system I have described would likely be able to do human-level programming (though it would still make mistakes, just like human programmers!) I doubt that this would necessarily cause it to enter a quick transition to superintelligence of any sort.

I suspect the system that I have described above would be well suited for automating some types of jobs, but would not necessarily alter the structure of the economy by a radical degree.

comment by Gurkenglas · 2019-08-30T23:15:15.152Z · score: 1 (1 votes) · LW · GW

It wouldn't necessarily cause such a quick transition, but it could easily be made to. A human with access to this tool could iterate designs very quickly, and he could take himself out of the loop by letting the tool predict and execute his actions as well, or by piping its code ideas directly into a compiler, or some other way the tool thinks up.

comment by Matthew Barnett (matthew-barnett) · 2019-08-30T23:17:47.977Z · score: 1 (1 votes) · LW · GW

My skepticism is mainly about whether this would be quicker than normal human iteration, or whether it would substantially improve upon the strategy of simply buying more hardware. However, as we see in the recent case of e.g. RoBERTa, there are a few insights which substantially improve upon a single AI system. I just remain skeptical that a single human-level AI system would produce these insights faster than a regular human team of experts.

In other words, my opinion of recursive self improvement in this narrow case is that it isn't a fundamentally different strategy from human oversight and iteration. It can be used to automate some parts of the process, but I don't think that foom is necessarily implied in any strong sense.

comment by Gurkenglas · 2019-08-31T02:20:50.267Z · score: 1 (1 votes) · LW · GW

The default argument that such a development would lead to a foom is that an insight-based regular doubling of speed mathematically reaches a singularity in finite time when the speed increases pay insight dividends. You can't reach that singularity with a fleshbag in the loop (though it may be unlikely to matter if, with him in the loop, you merely double every day).
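
To spell out the finite-time claim (my own minimal formalization, not part of the original comment): suppose each doubling of speed requires a fixed amount of insight work c, and insight work is produced at a rate proportional to the current speed, k times s. Starting from speed s_0, the n-th doubling takes time c / (k s_0 2^n), so the total time to diverge is finite:

$$t_\infty = \sum_{n=0}^{\infty} \frac{c}{k \, s_0 \, 2^{n}} = \frac{2c}{k \, s_0} < \infty.$$

With a human in the loop, each doubling also takes at least some fixed wall-clock time, so the series diverges and you get at most steady exponential growth rather than a finite-time singularity.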

For certain shapes of how speed increases depend on insight and oversight, there may be a perverse incentive to cut yourself out of your loop before the other guy cuts himself out.

comment by Matthew Barnett (matthew-barnett) · 2019-08-22T01:41:14.398Z · score: 4 (2 votes) · LW · GW

I generally agree with the heuristic that we should "live on the mainline", meaning that we should mostly plan for events which capture the dominant share of our probability. This heuristic causes me to have a tendency to do some of the following things

  • Work on projects that I think have a medium-to-high chance of succeeding and quickly abandon things that seem like they are failing.
  • Plan my career trajectory based on where I think I can plausibly maximize my long term values.
  • Study subjects only if I think that I will need to understand them at some point in order to grasp an important concept. See more details here [LW · GW].
  • Avoid doing work that leverages small probabilities of exceptionally bad outcomes. For example, I don't focus my studying on worst-case AI safety risk (although I do think that analyzing worst-case failure modes is useful from the standpoint of a security mindset).

I see a few problems with this heuristic, however, and I'm not sure quite how to resolve them. More specifically, I tend to float freely between different projects because I am quick to abandon things if I feel like they aren't working out (compare this to the mindset that some game developers have when they realize their latest game idea isn't very good).

One case where this shows up is when I change my beliefs about where the most effective ways to spend my time as far as long-term future scenarios are concerned. I will sometimes read an argument about how some line of inquiry is promising and for an entire day believe that this would be a good thing to work on, only for the next day to bring another argument.

And things like my AI timeline predictions vary erratically, much more than I expect most people's: I sometimes wake up and think that AI might be just 10 years away and other days I wake up and wonder if most of this stuff is more like a century away.

This general behavior makes me into someone who doesn't stay consistent on what I try to do. My life therefore resembles a battle between two competing heuristics: on one side there's the heuristic of planning for the mainline, and on the other there's the heuristic of committing to things even if they aren't panning out. I am unsure of the best way to resolve this conflict.

comment by Raemon · 2019-08-22T01:56:06.893Z · score: 7 (4 votes) · LW · GW

Some random thoughts:

  • Startups and pivots. Startups require lots of commitment even when things feel like they're collapsing – only by persevering through those times can you possibly make it. Still, startups are willing to pivot – take their existing infrastructure but change key strategic approaches.
  • Escalating commitment. Early on (in most domains), you should pick shorter term projects, because the focus is on learning. Code a website in a week. Code another website in 2 months. Don't stress too much on multi-year plans until you're reasonably confident you sorta know what you're doing. (Relatedly, relationships: early on it makes sense to date a lot to get some sense of who/what you're looking for in a romantic partner. But eventually, a lot of the good stuff comes when you actually commit to longterm relationships that are capable of weathering periods of strife and doubt)
    • Alternately: Givewell (or maybe OpenPhil?) did mixtures of shallow dives, deep dives and medium dives into cause areas because they learned different sorts of things from each kind of research.
  • Commitment mindset. Sort of like how Nate Soares recommends separating the feeling of conviction from the epistemic belief of high-success... you can separate "I'm going to stick with this project for a year or two because it's likely to work" from "I'm going to stick to this project for a year or two because sticking to projects for a year or two is how you learn how projects work on the 1-2 year timescale, including the part where you shift gears and learn from mistakes and become more robust about them."
comment by Gurkenglas · 2019-08-31T02:32:44.511Z · score: 5 (6 votes) · LW · GW

Mathematically, it seems like you should just give your heuristic the better data you already consciously have: If your untrustworthy senses say you aren't on the mainline, the correct move isn't necessarily to believe them, but rather to decide to put effort into figuring it out, because it's important.

It's clear how your heuristic would evolve. To embrace it correctly, you should make sure that your entire life lives in the mainline. If there's a game with negative expected value, where the worst outcome has chance 10%, and you play it 20 times, that's stupid. Budget the probability you are willing to throw away for the rest of your life now.
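
As a quick check on that example (my own arithmetic, not part of the original comment): if the worst outcome has a 10% chance on each independent play, then over 20 plays you hit it at least once with probability of roughly 88%.

```python
# Chance of hitting a 10%-probability worst outcome at least once in 20 plays,
# assuming the plays are independent (an assumption made for illustration).
p_worst = 0.10
n_plays = 20
p_at_least_once = 1 - (1 - p_worst) ** n_plays
print(f"{p_at_least_once:.2f}")  # ~0.88
```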

If you don't think you can stick to your budget, if you know that, by the same reasoning as today, you will always play another round of that game tomorrow, then realize that today's reasoning decides both today and tomorrow. Realize that the mainline of giving in to the heuristic is losing eventually, and let the heuristic destroy itself immediately.

comment by mr-hire · 2019-08-22T15:07:00.661Z · score: 2 (1 votes) · LW · GW
I see a few problems with this heuristic, however, and I'm not sure quite how to resolve them. More specifically, I tend to float freely between different projects because I am quick to abandon things if I feel like they aren't working out (compare this to the mindset that some game developers have when they realize their latest game idea isn't very good).

There are two big issues with the "living in the mainline" strategy:

1. Most of the highest EV activities are those that have low chance of success but big rewards. I suspect much of your volatile behavior is bouncing between chasing opportunities you see as high value, and then realizing you're not on the mainline and correcting, then realizing there are higher EV opportunities and correcting again.

2. Strategies that work well on the mainline often fail spectacularly in the face of black swans. So they have a high probability of working but also very negative EV in unlikely situations (which you ignore if you're only thinking about the mainline).

Two alternatives to the "living on the mainline" heuristic:

1. The Anti-fragility heuristic:

  • Use the barbell strategy, to split your activities between surefire wins with low upsides and certainty, and risky moonshots with low downsides but lots of uncertainty around upsides.
  • Notice the reasons that things fail, and make them robust to that class of failure in the future.
  • Try lots of things, and stick with the ones that work over time.

2. The Effectuation Heuristic:

  • Go into areas where you have unfair advantages.
  • Spread your downside risk to people or organizations who can handle it.
  • In general, work to CREATE the mainline where you have an unfair advantage and high upside.

You might get some mileage out of reading the effectuation and anti-fragility sections of this post. [LW · GW]

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T18:06:28.176Z · score: 4 (3 votes) · LW · GW

Related to: Realism about rationality [LW · GW]

I have talked to some people who say that they value ethical reflection, and would prefer that humanity reflected for a very long time before colonizing the stars. In a sense I agree, but at the same time I can't help but think that "reflection" is a vacuous feel-good word that has no shared common meaning.

Some forms of reflection are clearly good. Epistemic reflection is good if you are a consequentialist, since it can help you get what you want. I also agree that narrow forms of reflection can also be good. One example of a narrow form of reflection is philosophical reflection where we compare the details of two possible outcomes and then decide which one is better.

However, there are much broader forms of reflection which I'm more hesitant to endorse. Namely, the vague types of reflection, such as reflecting on whether we really value happiness, or whether we should really truly be worried about animal suffering.

I can perhaps sympathize with the intuition that we should really try to make sure that what we put into an AI is what we really want, rather than just what we superficially want. But fundamentally, I am skeptical that there is any canonical way of doing this type of reflection that leads to non-arbitrariness.

I have heard something along the lines of "I would want a reflective procedure that extrapolates my values as long as the procedure wasn't deceiving me or had some ulterior motive" but I just don't see how this type of reflection corresponds to any natural class. At some point, we will just have to put some arbitrariness into the value system, and there won't be any "right answer" about how the extrapolation is done.

comment by limerott · 2019-08-19T23:22:17.075Z · score: 2 (4 votes) · LW · GW

The vague reflections you are referring to are analogous to somebody saying "I should really exercise more" without ever doing it. I agree that the mere promise of reflection is useless.

But I do think that reflection on these vague topics is important and possible. Actively working through one's experiences, reading relevant books, and discussing questions with intelligent people can lead to epiphanies (and eventually life choices) that wouldn't have occurred otherwise.

However, this is not done with a push of a button and these things don't happen randomly -- they will only emerge if you are prepared to invest a lot of time and energy.

All of this happens on a personal level. To use your example, somebody may conclude from his own life experience that living a life of purpose is more important to him than living a life of happiness. How to formalize this process so that an AI could use a canonical way to achieve it (and infer somebody's real values simply by observing) is beyond me. It would have to know a lot more about us than is comfortable for most of us.

comment by Matthew Barnett (matthew-barnett) · 2019-09-07T00:06:08.885Z · score: 3 (2 votes) · LW · GW

After writing the post on using transparency regularization [LW · GW] to help make neural networks more interpretable, I have become even more optimistic that this is a potentially promising line of research for alignment. This is because I have noticed that there are a few properties about transparency regularization which may allow it to avoid some pitfalls of bad alignment proposals.

To be more specific, in order for a line of research to be useful for alignment, it helps if

  • The line of research doesn't require unnecessarily large amounts of computation to perform. This would allow the technique to stay competitive, reducing the incentive to skip safety protocols.
  • It doesn't require human models to work. This is useful because
    • Human models are blackboxes and are themselves mesa-optimizers
    • We would be limited primarily to theoretical work in the present, since human cognition is expensive to obtain.
  • Each part of the line of research is recursively legible. That is, if we use the technique on our ML model, we should expect that the technique itself can be explained without appealing to some other black box.

Transparency regularization meets these three criteria respectively (see the sketch after this list), because

  • It doesn't need to be astronomically more expensive than more typical forms of regularization
  • It doesn't necessarily require human-level cognitive parts to get working.
  • It is potentially quite simple mathematically, and so definitely meets the recursively legible criterion.
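
As a sanity check on the first and third points, here is a minimal sketch of what such a regularizer could look like in code. The specific penalty used below (an L1 penalty on hidden activations) is a stand-in chosen for illustration, not the regularizer from the linked post; the point is only that a transparency term can be one extra differentiable summand in the loss, at roughly the cost of ordinary regularization.

```python
# Minimal schematic: task loss plus a "transparency" penalty term.
# The L1-on-activations penalty is an illustrative stand-in, not a proposal.
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    def __init__(self, d_in=10, d_hidden=32, d_out=2):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        return self.fc2(h), h  # also return hidden activations for the penalty

model = TwoLayerNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
task_loss_fn = nn.CrossEntropyLoss()
lam = 1e-3  # strength of the transparency penalty (hypothetical value)

x = torch.randn(64, 10)         # dummy inputs
y = torch.randint(0, 2, (64,))  # dummy labels

logits, hidden = model(x)
task_loss = task_loss_fn(logits, y)
transparency_penalty = hidden.abs().mean()      # stand-in "be interpretable" term
loss = task_loss + lam * transparency_penalty   # one extra summand in the loss

opt.zero_grad()
loss.backward()
opt.step()
```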
comment by Matthew Barnett (matthew-barnett) · 2019-08-19T18:48:51.145Z · score: 3 (2 votes) · LW · GW

In discussions about consciousness I find myself repeating the same basic argument against the existence of qualia constantly. I don't do this just to be annoying: It is just my experience that

1. People find consciousness really hard to think about, and the topic has been known to cause a lot of disagreements.

2. Personally I think that this particular argument dissolved perhaps 50% of all my confusion about the topic, and was one of the simplest, clearest arguments that I've ever seen.

I am not being original either. The argument is the same one that has been used in various forms across Illusionist/Eliminativist literature that I can find on the internet. Eliezer Yudkowsky used a version of it [LW · GW] many years ago. Even David Chalmers, who is quite the formidable consciousness realist, admits in The Meta-Problem of Consciousness that the argument is the best one he can find against his position.

The argument is simply this:

If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.

This is the standard debunking argument. It has a more general form which can be used to deny the existence of a lot of other non-reductive things: distinct personal identities, gods, spirits, libertarian free will, a mind-independent morality etc. In some sense it's just an extended version of Occam's razor, showing us that qualia don't do anything in our physical theories, and thus can be rejected as things that actually exist out there in any sense.

To me this argument is very clear, and yet I find myself arguing it a lot. I am not sure how else to get people to see my side of it other than sending them a bunch of articles which more-or-less make the exact same argument but from different perspectives.

I think the human brain is built to have a blind spot on a lot of things, and consciousness is perhaps one of them. I think quite a bit about how, if humanity is not able to think clearly about this thing which we have spent many research years on, then there might be some other low-hanging philosophical fruit still remaining.

Addendum: I am not saying I have consciousness figured out. However, I think it's analogous to how atheists haven't "got religion figured out" yet they have at the very least taken their first steps by actually rejecting religion. It's not a full theory of religious belief, or even a theory at all. It's just the first thing you do if you want to understand the subject. I roughly agree with Keith Frankish's take on the matter.

comment by Raemon · 2019-08-19T23:46:34.878Z · score: 5 (3 votes) · LW · GW

If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis.

And I assume your claim is that we can explain why I believe in Qualia without referring to qualia?

I haven’t thought that hard about this and am open to that argument. But afaict your comments here so far haven’t actually addressed this question yet.

Edit: to be clear, I don't really care much why other people talk about qualia. I care why I perceive myself to experience things. If it's an illusion, cool, but then why do I experience the illusion?

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T23:54:45.608Z · score: 1 (1 votes) · LW · GW

If belief is construed as some sort of representation which stands for external reality (as in the case of some correspondence theories of truth), then we can take the claim to be a strong prediction of contemporary neuroscience. Ditto for whether we can explain why we talk about qualia.

It's not that I could explain exactly why you in particular talk about qualia. It's that we have an established paradigm for explaining it.

It's similar in the respect that we have an established paradigm for explaining why people report being able to see color. We can model the eye, and the visual cortex, and we have some idea of what neurons do even though we lack the specific information about how the whole thing fits together. And we could imagine that in the limit of perfect neuroscience, we could synthesize this information to trace back the reason why you said a particular thing.

Since we do not have perfect neuroscience, the best analogy would be analyzing the 'beliefs' and predictions of an artificial neural network. If you asked me, "Why does this ANN predict that this image is a 5 with 98% probability?", it would be difficult to say exactly why, even with full access to the neural network parameters.

However, we know that unless our conception of neural networks is completely incorrect, in principle we could trace exactly why the neural network made that judgement, including the exact steps that caused the neural network to have the parameters that it has in the first place. And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.

comment by Raemon · 2019-08-20T02:13:31.067Z · score: 3 (1 votes) · LW · GW

I can't tell whether we're arguing about the same thing.

Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.

(I'd prefer to taboo 'Qualia' in case it has particular connotations I don't share. Just 'that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.' If that's what Qualia means, cool, and if it means some other thing I'm not sure I care)

My current working model of "how this aspect of my perception works" is described in this comment [LW · GW], I guess easy enough to quote in full:

"Human brains contain two forms of knowledge: - explicit knowledge and weights that are used in implicit knowledge (admittedly the former is hacked on top of the later, but that isn't relevant here). Mary doesn't gain any extra explicit knowledge from seeing blue, but her brain changes some of her implicit weights so that when a blue object activates in her vision a sub-neural network can connect this to the label "blue"."

The reason I care about any of this is that I believe that a "perceptions-having-valence" is probably morally relevant. (or, put in usual terms: suffering and pleasure seem morally relevant).

(I think it's quite possibe that future-me will decide I was confused about this part, but it's the part I care about anyhow)

Are you saying that my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?

(To be clear, I AM open to 'actually Ray yes, the counterintuitive answer is that no, you're not actually perceiving-that-you-perceive-things-and-some-of-the-perceptions-have-valence.' The topic is clearly confusing and behind the veil of epistemic-ignorance it seems quite plausible I'm the confused one here. Just noting that, so far, from the way you're phrasing things I can't tell whether your claims map onto the things I care about.)

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T02:32:26.586Z · score: 1 (1 votes) · LW · GW
Like, I assume that I am a neural net predicting things and deciding things and if you had full access to my brain you could (in principle, given sufficient time) understand everything that was going on in there. But, like, one way or another I experience the perception of perceiving things.

To me this is a bit like the claim of someone who claimed psychic powers but still wanted to believe in physics who would say, "I assume you could perfectly well understand what was going on at a behavioral level within my brain, but there is still a datum left unexplained: the datum of me having psychic powers."

There are a number of ways to respond to the claim:

  • We could redefine psychic powers to include mere physical properties. This has the problem that psychics insist that psychic power is entirely separate from physical properties. Simple re-definition doesn't make the intuition go away and doesn't explain anything.
  • We could alternatively posit new physics which incorporates psychic powers. This has the occasional problem that it violates Occam's razor, since the old physics was completely adequate. Hence the debunking argument I presented above.
  • Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it, and talk about it.

In the case of consciousness, the third response amounts to Illusionism, which is the view that I am defending. It has the advantage that it conservatively doesn't promise to contradict known physics, and it also does justice to the intuition that consciousness really exists.

I'd prefer to taboo 'Qualia' in case it has particular connotations I don't share. Just 'that thing where Ray perceives himself perceiving things, and perhaps the part where sometimes Ray has preferences about those perceptions of perceiving because the perceptions have valence.'

To most philosophers who write about it, qualia is defined as the experience of what it's like. Roughly speaking, I agree with thinking of it as a particular form of perception that we experience.

However, it's not just any perception, since some perceptions can be unconscious perceptions. Qualia specifically refer to the qualitative aspects of our experience of the world: the taste of wine, the touch of fabric, the feeling of seeing blue, the suffering associated with physical pain etc. These are said to be directly apprehensible to our 'internal movie' that is playing inside our head. It is this type of property which I am applying the framework of illusionism to.

The reason I care about any of this is that I believe that a "perceptions-having-valence" is probably morally relevant.

I agree. That's why I typically take the view that consciousness is a powerful illusion, and that we should take it seriously. Those who simply re-define consciousness as essentially a synonym for "perception" or "observation" or "information" are not doing justice to the fact that it's the thing I care about in this world. I have a strong intuition that consciousness is what is valuable even despite the fact that I hold an illusionist view. To put it another way, I would care much less if you told me a computer was receiving a pain-signal (labeled in the code as some variable with suffering set to maximum), compared to the claim that a computer was actually suffering in the same way a human does.

Are you saying that my perceiving-that-I-perceive-things-with-valence is an illusion, and that I am in fact not doing that? Or some other thing?

Roughly speaking, yes. I am denying that that type of thing actually exists, including the valence claim.

comment by Raemon · 2019-08-20T03:13:24.012Z · score: 5 (2 votes) · LW · GW
Or, we could incorporate the phenomenon within a physical model by first denying that it exists and then explaining the mechanism which caused you to believe in it, and talk about it.

It still feels very important that you haven't actually explained this.

In the case of psychic powers, I (think?) we actually have pretty good explanations for where perceptions of psychic powers comes from, which makes the perception of psychic powers non-mysterious. (i.e. we know how cold reading works, and how various kinds of confirmation bias play into divination). But, that was something that actually had to be explained.

It feels like you're just changing the name of the confusing thing from 'the fact that I seem conscious to myself' to 'the fact that I'm experiencing an illusion of consciousness.' Cool, but, like, there's still a mysterious thing that seems quite important to actually explain.

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T03:56:16.601Z · score: 1 (1 votes) · LW · GW

Also just in general, I disagree that skepticism is not progress. If I said, "I don't believe in God because there's nothing in the universe with those properties..." I don't think it's fair to say, "Cool, but like, I'm still praying to something right, and that needs to be explained" because I don't think that speaks fully to what I just denied.

In the case of religion, many people have a very strong intuition that God exists. So, is the atheist position not progress because we have not explained this intuition?

comment by Raemon · 2019-08-20T04:05:54.148Z · score: 3 (1 votes) · LW · GW

I agree that skepticism generally can be important progress (I recently stumbled upon this old comment making a similar argument [LW · GW] about how saying "not X" can be useful)

The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop. Unlike God or psychic powers, there is no separate thing from my perception of it that I'm interested in.

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T04:09:44.924Z · score: 1 (1 votes) · LW · GW
The difference between God and consciousness is that the interesting bit about consciousness *is* my perception of it, full stop.

If by perception you simply mean "You are an information processing device that takes signals in and outputs things" then this is entirely explicable on our current physical models, and I could dissolve the confusion fairly easily.

However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense, I think you are falling right into the trap! You would be doing something similar to the person who said, "But I am still praying to God!"

comment by Raemon · 2019-08-20T05:15:31.967Z · score: 3 (1 votes) · LW · GW
However, I think you have something else in mind which is that there is somehow something left out when I explain it by simply appealing to signal processing. In that sense,

I don't have anything else in mind that I know of. "Explained via signal processing" seems basically sufficent. The interesting part is "how can you look at a given signal-processing-system, and predict in advance whether that system is the sort of thing that would talk* about Qualia, if it could talk?"

(I feel like this was all covered in the sequences, basically?)

*where "talk about qualia" is shorthand 'would consider the concept of qualia important enough to have a concept for.'"

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T05:45:15.477Z · score: 2 (2 votes) · LW · GW

I mean, I agree that this was mostly covered in the sequences. But I also think that I disagree with the way that most people frame the debate. At least personally I have seen people who I know have read the sequences still make basic errors. So I'm just leaving this here to explain my point of view.

Intuition: On a first approximation, there is something that it is like to be us. In other words, we are beings who have qualia.

Counterintuition: In order for qualia to exist, there would need to exist entities which are private, ineffable, intrinsic, and subjective, and this can't be, since physics is public, effable, and objective, and therefore contradicts the existence of qualia.

Intuition: But even if I agree with you that qualia don't exist, there still seems to be something left unexplained.

Counterintuition: We can explain why you think there's something unexplained because we can explain the cause of your belief in qualia, and why you think they have these properties. By explaining why you believe it we have explained all there is to explain.

Intuition: But you have merely said that we could explain it. You have not actually explained it.

Counterintuition: Even without the precise explanation, we now have a paradigm for explaining consciousness, so it is not mysterious anymore.

This is essentially the point where I leave.

comment by TAG · 2019-08-30T13:03:26.771Z · score: 6 (4 votes) · LW · GW

physics is public, effable, and objective and therefore contradicts the existence of qualia.

Physics as map is. Note that we can't compare the map directly to the territory.

comment by Slider · 2019-08-20T20:42:35.431Z · score: 3 (2 votes) · LW · GW

We do not telepathically receive experiment results when they are performed. In reality you need to take in the measurement results from your first-person point of view (use your eyes to read an LED screen, or use your ears to hear stories about experiments performed). It seems to be that experiments are intersubjective, in that other observers will report having experiences that resemble my first-hand experiences. For most purposes shorthanding this to "public" is adequate enough. But your point of view is "unpublishable" in that even if you really tried, there is no way to provide your private experience to the public knowledge pool ("directly"). "I know how you feel" is a fiction; it doesn't actually happen.

Skepticism about the experiences of others is easier, but being skeptical about your own experiences would seem to be ludicrous.

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T20:54:57.879Z · score: 1 (1 votes) · LW · GW

I am not denying that humans take in sensory input and process it using their internal neural networks. I am denying that process has any of the properties associated with consciousness in the philosophical sense. And I am making an additional claim which is that if you merely redefine consciousness so that it lacks these philosophical properties, you have not actually explained anything or dissolved any confusion.

The illusionist approach is the best approach because it simultaneously takes consciousness seriously and doesn't contradict physics. By taking this approach we also have an understood paradigm for solving the hard problem of consciousness: namely, the hard problem is reduced to the meta-problem (see Chalmers).

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T03:36:52.785Z · score: 1 (1 votes) · LW · GW
It feels like you're just changing the name of the confusing thing from 'the fact that I seem conscious to myself' to 'the fact that I'm experiencing an illusion of consciousness.' Cool, but, like, there's still a mysterious thing that seems quite important to actually explain.

I don't actually agree. Although I have not fully explained consciousness, I think that I have shown a lot.

In particular, I have shown us what the solution to the hard problem of consciousness would plausibly look like if we had unlimited funding and time. And to me, that's important.

And under my view, it's not going to look anything like, "Hey we discovered this mechanism in the brain that gives rise to consciousness." No, it's going to look more like, "Look at this mechanism in the brain that makes humans talk about things even though the things they are talking about have no real world referent."

You might think that this is a useless achievement. I claim the contrary. As Chalmers points out, pretty much all the leading theories of consciousness fail the basic test of looking like an explanation rather than just sounding confused. Don't believe me? Read Section 3 in this paper.

In short, Chalmers reviews the current state of the art in consciousness explanations. He first goes into Integrated Information Theory (IIT), but then convincingly shows that IIT fails to explain why we would talk about consciousness and believe in consciousness. He does the same for global workspace theories, first order representational theories, higher order theories, consciousness-causes-collapse theories, and panpsychism. Simply put, none of them even approach an adequate baseline of looking like an explanation.

I also believe that if you follow my view carefully you might stop being confused about a lot of things. Like, do animals feel pain? Well it depends on your definition of pain -- consciousness is not real in any objective sense so this is a definition dispute. Same with asking whether person A is happier than person B, or asking whether computers will ever be conscious.

Perhaps this isn't an achievement strictly speaking relative to the standard Lesswrong points of view. But that's only because I think the standard Lesswrong point of view is correct. Yet even so, I still see people around me making fundamentally basic mistakes about consciousness. For instance, I see people treating consciousness as intrinsic, ineffable, private -- or they think there's an objectively right answer to whether animals feel pain and argue over this as if it's not the same as a tree falling in a forest.

comment by TAG · 2019-08-22T08:27:18.533Z · score: 1 (1 votes) · LW · GW

And we know that such an explanation requires only the components which make up the ANN, and not any conscious or phenomenal properties.

That's an argument against dualism not an argument against qualia. If mind brain identity is true, neural activity is causing reports, and qualia, along with the rest of consciousness are identical to neural activity, so qualia are also causing reports.

comment by Matthew Barnett (matthew-barnett) · 2019-08-22T21:19:40.997Z · score: 1 (1 votes) · LW · GW

If you identify qualia as behavioral parts of our physical models, then are you also willing to discard the properties philosophers have associated with qualia, such as

  • Ineffable, as they can't be explained using just words or mathematical sentences
  • Private, as they are inaccessible to outside third-person observers
  • Intrinsic, as they are fundamental to the way we experience the world

If you are willing to discard these properties, then I suggest we stop using the word "qualia", since you have simply taken all the meaning away once you have identified them with things that actually exist. This is what I mean when I say that I am denying qualia.

It is analogous to someone who denies that souls exist by first conceding that we could identify certain physical configurations as examples of souls, but then explaining that this would be confusing to anyone who talks about souls in the traditional sense. Far better in my view to discard the idea altogether.

comment by Raemon · 2019-08-22T22:06:22.268Z · score: 8 (4 votes) · LW · GW

My orientation to this conversation seems more like "hmm, I'm learning that it is possible the word qualia has a bunch of connotations that I didn't know it had", as opposed to "hmm, I was wrong to believe in the-thing-I-was-calling-qualia."

But I'm not yet sure that these connotations are actually universal – the wikipedia article opens with:

In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience. The term qualia derives from the Latin neuter plural form (qualia) of the Latin adjective quālis (Latin pronunciation: [ˈkʷaːlɪs]) meaning "of what sort" or "of what kind" in a specific instance, like "what it is like to taste a specific apple, this particular apple now".
Examples of qualia include the perceived sensation of pain of a headache, the taste of wine, as well as the redness of an evening sky. As qualitative characters of sensation, qualia stand in contrast to "propositional attitudes",[1] where the focus is on beliefs about experience rather than what it is directly like to be experiencing.
Philosopher and cognitive scientist Daniel Dennett once suggested that qualia was "an unfamiliar term for something that could not be more familiar to each of us: the ways things seem to us".[2]
Much of the debate over their importance hinges on the definition of the term, and various philosophers emphasize or deny the existence of certain features of qualia. Consequently, the nature and existence of various definitions of qualia remain controversial because they are not verifiable.

Later on, it notes the three characteristics (ineffable/private/intrinsic) that Dennett listed.

But this looks more like an accident of history than something intrinsic to the term. The opening paragraphs defined qualia the way I naively expected it to be defined.

My impression looking at the various defintions and discussion is not that qualia was defined in this specific fashion, so much as various people trying to grapple with a confusing problem generated various possible definitions and rules for it, and some of those turned out to be false once we came up with better understanding.

I can see where you're coming from with the soul analogy, but I'm not sure if it's more like the soul analogy, or more like "One early philosopher defined 'a human' as a featherless biped, and then a later one said 'dude, look at this featherless chicken I just made' and they realized the definition was silly."

I guess my question here is – do you have a suggestion for a replacement word for "the particular kind of observation that gets made by an entity that actually gets to experience the perception"? This still seems importantly different from "just a perception", since very simple robots and thermostats or whatever can be said to have those. I don't really care whether they are inherently private, ineffable or intrinsic, and whether Daniel Dennett was able to eff them seems more like a historical curiosity to me.

The wikipedia article specifically says that people argue a lot over the definitions:

There are many definitions of qualia, which have changed over time. One of the simpler, broader definitions is: "The 'what it is like' character of mental states. The way it feels to have mental states such as pain, seeing red, smelling a rose, etc."

That definition there is the one I'm generally using, and the one which seems important to have a word for. This seems more like a political/coordination question of "is it easier to invent a new word and gain traction for it, or to get everyone on the same page about 'actually, they're totally in principle effable, you just might need to be a kind of mind different than a current-generation-human to properly eff them.'"

comment by Raemon · 2019-08-22T22:17:31.426Z · score: 3 (1 votes) · LW · GW

It does seem to me something like "I expect the sort of mind that is capable of viewing qualia of other people would be sufficiently different from a human mind that it may still be fair to call them 'private/ineffable among humans.'"

comment by Matthew Barnett (matthew-barnett) · 2019-08-22T22:38:45.252Z · score: 1 (1 votes) · LW · GW

Thanks for engaging with me on this thing. :)

I know I'm not being as clear as I could possibly be, and at some points I sort of feel like just throwing "Quining Qualia" or Keith Frankish's articles or a whole bunch of other blog posts at people and saying, "Please just read this and re-read it until you have a very distinct intuition about what I am saying." But I know that that type of debate is not helpful.

I think I have a OK-to-good understanding of what you are saying. My model of your reply is something like this,

"Your claim is that qualia don't exist because nothing with these three properties exists (ineffability/private/intrinsic), but it's not clear to me that these three properties are universally identified with qualia. When I go to Wikipedia or other sources, they usually identify qualia with 'what it's like' rather than these three very specific things that Daniel Dennett happened to list once. So, I still think that I am pointing to something real when I talk about 'what it's like' and you are only disputing a perhaps-strawman version of qualia."

Please correct me if this model of you is inaccurate.

I recognize what you are saying, and I agree with the place you are coming from. I really do. And furthermore, I really really agree with the idea that we should go further than skepticism and we should always ask more questions even after we have concluded that something doesn't exist.

However, the place I get off the boat is where you keep talking about how this 'what it's like' thing is actually referring to something coherent in the real world that has a crisp, natural boundary around it. That's the disagreement.

I don't think it's an accident of history either that those properties are identified with qualia. The whole reason Daniel Dennett identified them was because he showed that they were the necessary conclusion of the sort of thought experiments people use for qualia. He spends the whole first several paragraphs justifying them using various intuition pumps in his essay on the matter.

Point being, when you are asked to clarify what 'what it's like' means, you'll probably start pointing to examples. Like, you might say, "Well, I know what it's like to see the color green, so that's an example of a quale." And Daniel Dennett would then press the person further and go, "OK, could you clarify what you mean when you say you 'know what it's like to see green'?" and the person would say, "No, I can't describe it using words. And it's not clear to me it's even in the category of things that can be described, since I can't possibly conceive of an English sentence that would describe the color green to a blind person." And then Daniel Dennett would shout, "Aha! So you do believe in ineffability!"

The point of those three properties (actually he lists 4, I think), is not that they are inherently tied to the definition. It's that the definition is vague, and every time people are pressed to be more clear on what they mean, they start spouting nonsense. Dennett did valid and good deconfusion work where he showed that people go wrong in these four places, and then showed how there's no physical thing that could possibly allow those four things.

These properties also show up all over the various thought experiments that people use when talking about qualia. For example, Nagel uses the private property in his essay "What Is it Like to Be a Bat?" Chalmers uses the intrinsic property when he talks about p-zombies being physically identical to humans in every respect except for qualia. Frank Jackson used the ineffability property when he talked about how Mary the neuroscientist had something missing when she was in the black and white room.

All of this is important to recognize. Because if you still want to say, "But I'm still pointing to something valid and real even if you want to reject this other strawman-entity" then I'm going to treat you like the person who wants to believe in souls even after they've been shown that nothing soul-like exists in this universe.

comment by Slider · 2019-08-30T14:45:32.680Z · score: 1 (1 votes) · LW · GW

Spouting nonsense is different from being wrong. If I say that there are no rectangles with 5 angles, that can be processed pretty straightforwardly, because the concept of a rectangle is unproblematic. But if you ask why that statement was made and the person points to a pentagon, you will find 5 angles. Now there are polygons with 5 angles. If you coin a short word for "5-angled rectangle", it's correct to say those don't exist. But if you give an ostensive definition of the shape, then it does exist, and it's more to the point to say that it's not a rectangle rather than that it doesn't exist.

In the details, when a person says "what it is like to see green", one could fail to get what they mean or are pointing to. If someone says "look, a unicorn" and one has proof that unicorns don't exist, that doesn't mean that the unicorn reference is not referencing something, or that the reference target does not exist. If you end up in a situation where you point at a horse and say "those things do not exist. Look, no horn, doesn't exist", you are not being helpful. If somebody is pointing to a horse and says "look, a unicorn!" and you go "where? I see only horses", you are also not being helpful. Being "motivatedly uncooperative in receiving ostension" is not cool. Say that you made a deal to sell a gold bar in exchange for a unicorn. Then refusing to accept any object as a unicorn would let you keep your gold bar, and you might be tempted to play dumb.

When people are saying "what it feels like to see green" they are trying to communicate something, and defeating their assertion by sabotaging their communication doesn't prove anything. Communication is hard, yes, but doing too much semantic substitution means you start talking past each other.

comment by TAG · 2019-08-25T23:31:30.090Z · score: 1 (1 votes) · LW · GW

I am not suggesting that qualia should be identified with neural activity in a way that loses any aspects of the philosophical definition... bearing in mind that the philosophical definition does not assert that qualia are non-physical.

comment by jessicata (jessica.liu.taylor) · 2019-08-19T21:30:16.845Z · score: 3 (2 votes) · LW · GW

What are you experiencing right now? (E.g. what do you see in front of you? In what sense does it seem to be there?)

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T21:40:50.853Z · score: 1 (1 votes) · LW · GW

I won't lie -- I have a very strong intuition that there's this visual field in front of me, and that I can hear sounds that have distinct qualities, and simultaneously I can feel thoughts rush into my head as if there is an internal speaker and listener. And when I reflect on some visual in the distance, it seems as though the colors are very crisp and exist in some way independent of simple information processing in a computer-type device. It all seems very real to me.

I think the main claim of the illusionist is that these intuitions (at least insofar as the intuitions are making claims about the properties of qualia) are just radically incorrect. It's as if our brains have an internal error in them, not allowing us to understand the true nature of these entities. It's not that we can't see or something like that. It's just that the quality of perceiving the world has essentially an identical structure to what one might imagine a computer with a camera would "see."

Analogy: Some people who claim to have experienced heaven aren't just making stuff up. In some sense, their perception is real. It just doesn't have the properties we would expect it to have at face value. And if we actually tried looking for heaven in the physical world we would find it to be little else than an illusion.

comment by jessicata (jessica.liu.taylor) · 2019-08-19T22:04:23.800Z · score: 4 (2 votes) · LW · GW

What's the difference between making claims about nearby objects and making claims about qualia (if there is one)? If I say there's a book to my left, is that saying something about qualia? If I say I dreamt about a rabbit last night, is that saying something about qualia?

(Are claims of the form "there is a book to my left" radically incorrect?)

That is, is there a way to distinguish claims about qualia from claims about local stuff/phenomena/etc?

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T22:18:07.028Z · score: 3 (2 votes) · LW · GW

Sure. There are a number of properties usually associated with qualia which are the things I deny. If we strip these properties away (something Keith Frankish refers to as zero qualia) then we can still say that they exist. But it's confusing to say that something exists when its properties are so minimal. Daniel Dennett listed a number of properties that philosophers have assigned to qualia and conscious experience more generally:

(1) ineffable (2) intrinsic (3) private (4) directly or immediately apprehensible

Ineffable because there's something Mary the neuroscientist is missing when she is in the black and white room. And someone who tried explaining color to her would not be able to do so fully.

Intrinsic because it cannot be reduced to bare physical entities, like electrons (think: could you construct a quale if you had the right set of particles?).

Private because they are accessible to us and not globally available. In this sense, if you tried to find out the qualia that a mouse was experiencing as it fell victim to a trap, you would come up fundamentally short because it was specific to the mouse mind and not yours. Or as Nagel put it, there's no way that third person science could discover what it's like to be a bat.

Directly apprehensible because they are the elementary things that make up our experience of the world. Look around and qualia are just what you find. They are the building blocks of our perception of the world.

It's not necessarily that none of these properties could be steelmanned. It is just that they are so far from being steelmannable that it is better to deny their existence entirely. It is the same as my analogy with a person who claims to have visited heaven. We could either talk about it as illusory or non-illusory. But for practical purposes, if we chose the non-illusory route we would probably be quite confused. That is, if we tried finding heaven inside the physical world, with the same properties as the claimant had proposed, then we would come up short. Far better then, to treat it as a mistake inside of our cognitive hardware.

comment by jessicata (jessica.liu.taylor) · 2019-08-19T22:43:32.790Z · score: 5 (3 votes) · LW · GW

Thanks for the elaboration. It seems to me that experiences are:

  1. Hard-to-eff, as a good-enough theory of what physical structures have which experiences has not yet been discovered, and would take philosophical work to discover.

  2. Hard to reduce to physics, for the same reason.

  3. In practice private due to mind-reading technology not having been developed, and due to bandwidth and memory limitations in human communication. (It's also hard to imagine what sort of technology would allow replicating the experience of being a mouse)

  4. Pretty directly apprehensible (what else would be? If nothing is, what do we build theories out of?)

It seems natural to conclude from this that:

  1. Physical things exist.
  2. Experiences exist.
  3. Experiences probably supervene on physical things, but the supervenience relation is not yet determined, and determining it requires philosophical work.
  4. Given that we don't know the supervenience relation yet, we need to at least provisionally have experiences in our ontology distinct from physical entities. (It is, after all, impossible to do physics without making observations and reporting them to others)

Is there something I'm missing here?

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T23:43:02.722Z · score: 1 (1 votes) · LW · GW

Here's a thought experiment which helped me lose my 'belief' in qualia: would a robot scientist, who was only designed to study physics and make predictions about the world, ever invent qualia as a hypothesis?

Assuming the actual mouth movements we make when we say things like, "Qualia exist" are explainable via the scientific method, the robot scientist could still predict that we would talk and write about consciousness. But would it posit consciousness as a separate entity altogether? Would it treat consciousness as a deep mystery, even after peering into our brains and finding nothing but electrical impulses?

comment by jessicata (jessica.liu.taylor) · 2019-08-20T01:22:04.335Z · score: 2 (1 votes) · LW · GW

Robots take in observations. They make theories that explain their observations. Different robots will make different observations and communicate them to each other. Thus, they will talk about observations.

After making enough observations they make theories of physics. (They had to talk about observations before they made low-level physics theories, though; after all, they came to theorize about physics through their observations). They also make bridge laws explaining how their observations are related to physics. But, they have uncertainty about these bridge laws for a significant time period.

The robots theorize that humans are similar to them, based on the fact that they have functionally similar cognitive architecture; thus, they theorize that humans have observations as well. (The bridge laws they posit are symmetric that way, rather than being silicon-chauvinist)

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T01:39:48.601Z · score: 1 (1 votes) · LW · GW

I think you are using the word "observation" to refer to consciousness. If this is true, then I do not deny that humans take in observations and process them.

However, I think the issue is that you have simply re-defined consciousness into something which would be unrecognizable to the philosopher. To that extent, I don't say you are wrong, but I will allege that you have not done enough to respond to the consciousness-realist's intuition that consciousness is different from physical properties. Let me explain:

If qualia are just observations, then it seems obvious that Mary is not missing any information in her room, since she can perfectly well understand and model the process by which people receive color observations.

Likewise, if qualia are merely observations, then the Zombie argument amounts to saying that p-Zombies are beings which can't observe anything. This seems patently absurd to me, and doesn't seem like it's what Chalmers meant at all when he came up with the thought experiment.

Likewise, if we were to ask, "Is a bat conscious?" then the answer would be a vacuous "yes" under your view, since they have echolocators which take in observations and process information.

In this view even my computer is conscious since it has a camera on it. For this reason, I suggest we are talking about two different things.

comment by jessicata (jessica.liu.taylor) · 2019-08-20T01:56:55.089Z · score: 2 (1 votes) · LW · GW

Mary's room seems uninteresting, in that robot-Mary can predict pretty well what bit-pattern she's going to get upon seeing color. (To the extent that the human case is different, it's because of cognitive architecture constraints)

Regarding the zombie argument: The robots have uncertainty over the bridge laws. Under this uncertainty, they may believe it is possible that humans don't have experiences, due to the bridge laws only identifying silicon brains as conscious. Then humans would be zombies. (They may have other theories saying this is pretty unlikely / logically incoherent / etc)

Basically, the robots have a primitive entity "my observations" that they explain using their theories. They have to reconcile this with the eventual conclusion they reach that their observations are those of a physically instantiated mind like other minds, and they have degrees of freedom in which things they consider "observations" of the same type as "my observations" (things that could have been observed).

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T02:06:39.763Z · score: 1 (1 votes) · LW · GW

As a qualia denier, I sometimes feel like I side more with the Chalmers side of the argument, which at least admits that there's a strong intuition for consciousness. It's not that I think that the realist side is right, but it's that I see the naive physicalists making statements that seem to completely misinterpret the realist's argument.

I don't mean to single you out in particular. However, you state that Mary's room seems uninteresting because Mary is able to predict the "bit pattern" of color qualia. This seems to me to completely miss the point. When you look at the sky and see blue, is it immediately apprehensible as a simple bit pattern? Or does it at least seem to have qualitative properties too?

I'm not sure how to import my argument onto your brain without you at least seeing this intuition, which is something I considered obvious for many years.

comment by jessicata (jessica.liu.taylor) · 2019-08-20T03:00:20.619Z · score: 8 (3 votes) · LW · GW

There is a qualitative redness to red. I get that intuition.

I think "Mary's room is uninteresting" is wrong; it's uninteresting in the case of robot scientists, but interesting in the case of humans, in part because of what it reveals about human cognitive architecture.

I think in the human case, I would see Mary seeing a red apple as gaining in expressive vocabulary rather than information. She can then describe future things as "like what I saw when I saw that first red apple". But, in the case of first seeing the apple, the redness quale is essentially an arbitrary gensym.

I suppose I might end up agreeing with the illusionist view on some aspects of color perception, then, in that I predict color quales might feel like new information when they actually aren't. Thanks for explaining.

comment by Raemon · 2019-08-20T03:10:02.731Z · score: 6 (3 votes) · LW · GW
I predict color quales might feel like new information when they actually aren't.

I am curious if you disagree with the claim [LW · GW] that (human) Mary is gaining implicit information, in that (despite already knowing many facts about red-ness), her (human) optic system wouldn't have successfully been able to predict the incoming visual data from the apple before seeing it, but afterwards can?

comment by jessicata (jessica.liu.taylor) · 2019-08-20T03:20:25.600Z · score: 2 (1 votes) · LW · GW

That does seem right, actually.

Now that I think about it, due to this cognitive architecture issue, she actually does gain new information. If she sees a red apple in the future, she can know that it's red (because it produces the same qualia as the first red apple), whereas she might be confused about the color if she hadn't seen the first apple.

I think I got confused because, while she does learn something upon seeing the first red apple, it isn't the naive "red wavelengths are red-quale", it's more like "the neurons that detect red wavelengths got wired and associated with the abstract concept of red wavelengths." Which is still, effectively, new information to Mary-the-cognitive-system, given limitations in human mental architecture.

comment by Slider · 2019-08-20T20:31:01.203Z · score: 1 (1 votes) · LW · GW

A physicist might discover that you can make computers out of matter. You can make such computers produce sounds. In processing sounds, "homonym" is a perfectly legitimate and useful concept. Even if two words are stored in far-away hardware locations, knowing that they will clash in sound detection is important information. Even if you slice it a little differently and use different kinds of computer architectures, it would still be a real phenomenon.

In technical terms, there might be the issue of whether it's meaningful to differentiate between founded concepts and hypotheses. If hypotheses are required, then you could have a physicist that didn't ever talk about temperature.

comment by Matthew Barnett (matthew-barnett) · 2019-08-19T23:04:24.670Z · score: 1 (2 votes) · LW · GW

It seems to me that you are trying to recover the properties of conscious experience in a way that can be reduced to physics. Ultimately, I just feel that this approach is not likely to succeed without radical revisions to what you consider to be conscious experience. :)

Generally speaking, I agree with the dualists who argue that physics is incompatible with the claimed properties of qualia. Unlike the dualists, I see this as a strike against qualia rather than a strike against physics. David Chalmers does a great job in his articles outlining why conscious properties don't fit nicely in our normal physical models.

It's not simply that we are awaiting more data to fill in the details: it's that there seems to be no way even in principle to incorporate conscious experience into physics. Physics is just a different type of beast: it has no mental core, it is entirely made up of mathematical relations, and is completely global. Consciousness as it's described seems entirely inexplicable in that respect, and I don't see how it could possibly supervene on the physical.

One could imagine a hypothetical heaven-believer (someone who claimed to have gone to heaven and back) listing possible ways to incorporate their experience into physics. They could say,

  • Hard-to-eff, as it's not clear how physics interacts with the heavenly realm. We must do more work to find out where the entry points between heaven and earth are.
  • In practice private, due to the fact that technology hasn't been developed yet that can allow me to send messages back from heaven while I'm there.
  • Pretty directly apprehensible, because how would it even be possible for me to have experienced that without heaven literally being real!

On the other hand, a skeptic could reply that:

Even if mind reading technology isn't good enough yet, our best models say that humans can be described as complicated computers with a particular neural network architecture. And we know that computers can have bugs in them causing them to say things when there is no logical justification.

Also, we know that computers can lack perfect introspection so we know that even if it is utterly convinced that heaven is real, this could just be due to the fact that the computer is following its programming and is exceptionally stubborn.

Heaven has no clear interpretation in our physical models. Yes, we could see that a supervenience is possible. But why rely on that hope? Isn't it better to say that the belief is caused by some sort of internal illusion? The latter hypothesis is at least explicable within our models and doesn't require us to make new fundamental philosophical advances.

comment by jessicata (jessica.liu.taylor) · 2019-08-20T01:30:09.197Z · score: 4 (2 votes) · LW · GW

It seems that doubting that we have observations would cause us to doubt physics, wouldn't it? Since physics-the-discipline is about making, recording, communicating, and explaining observations.

Why think we're in a physical world if our observations that seem to suggest we are are illusory?

This is kind of like if the people saying we live in a material world arrived at these theories through their heaven-revelations, and can only explain the epistemic justification for belief in a material world by positing heaven. Seems odd to think heaven doesn't exist in this circumstance.

(Note, personally I lean towards supervenient neutral monism: direct observation and physical theorizing are different modalities for interacting with the same substance, and mental properties supervene on physical ones in a currently-unknown way. Physics doesn't rule out observation, in fact it depends on it, while itself being a limited modality, such that it is unsurprising if you couldn't get all modalities through the physical-theorizing modality. This view seems non-contradictory, though incomplete.)

comment by Slider · 2019-08-20T20:18:22.819Z · score: 2 (2 votes) · LW · GW

Your beliefs seem to have a characteristic similar to something I have encountered on LessWrong before.

https://www.lesswrong.com/posts/TniCuWCDxQeqFSxut/arguments-for-the-existence-of-qualia-1?commentId=Zwyh8Xt5uaZ4ZBYbP [LW · GW]

There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.

It would be like explaining lightning with lightning. Sure when we dig down there are non-lightning parts. But lightning still zaps people.

Or it would be a category error, like saying that if you can explain physics without coordinates, by only positing that energy exists, you should drop coordinates from your concepts. But coordinates are not a thing to believe in; they are a conceptual tool to specify claims, not a hypothesis in itself. When physicists believe in a particular field theory, they are not agreeing with the Greek philosophers who think that the world is made of a type of number.

comment by Matthew Barnett (matthew-barnett) · 2019-08-20T20:30:36.654Z · score: 2 (2 votes) · LW · GW
There is the phenomenon of qualia and then there is the ontological extension. The word does not refer to the ontological extension.

My basic claim is that the way that people use the word qualia implicitly implies the ontological extensions. By using the term, you are either smuggling these extensions in, or you are using the term in a way that no philosopher uses it. Here are some intuitions:

Qualia are private entities which occur to us and can't be inspected via third person science.

Qualia are ineffable; you can't explain them using a sufficiently complex English or mathematical sentence.

Qualia are intrinsic; you can't construct a quale if you had the right set of particles.

etc.

Now, that's not to say that you can't define qualia in such a way that these ontological extensions are avoided. But why do so? If you are simply re-defining the phenomenon, then you have not explained anything. The intuitions above still remain, and there is something still unexplained: namely, why people think that there are entities with the above properties.

That's why I think that instead, the illusionist approach is the correct one. Let me quote Keith Frankish, who I think does a good job explaining this point of view,

Suppose we encounter something that seems anomalous, in the sense of being radically inexplicable within our established scientific worldview. Psychokinesis is an example. We would have, broadly speaking, three options.
First, we could accept that the phenomenon is real and explore the implications of its existence, proposing major revisions or extensions to our science, perhaps amounting to a paradigm shift. In the case of psychokinesis, we might posit previously unknown psychic forces and embark on a major revision of physics to accommodate them.
Second, we could argue that, although the phenomenon is real, it is not in fact anomalous and can be explained within current science. Thus, we would accept that people really can move things with their unaided minds but argue that this ability depends on known forces, such as electromagnetism.
Third, we could argue that the phenomenon is illusory and set about investigating how the illusion is produced. Thus, we might argue that people who seem to have psychokinetic powers are employing some trick to make it seem as if they are mentally influencing objects.

In the case of lightning, I think that the first approach would be correct, since lightning forms a valid physical category under which we can cast our scientific predictions of the world. In the case of the orbit of Uranus, the second approach is correct, since it was adequately explained by appealing to understood Newtonian physics. However, the third approach is most apt for bizarre phenomena that seem at first glance to be entirely incompatible with our physics. And qualia certainly fit the bill in that respect.


comment by gilch · 2019-08-25T00:50:44.021Z · score: 4 (3 votes) · LW · GW

When I say "qualia" I mean individual instances of subjective, conscious experience full stop. These three extensions are not what I mean when I say "qualia".


Qualia are private entities which occur to us and can't be inspected via third person science.

Not convinced of this. There are known neural correlates of consciousness. That our current brain scanners lack the required resolution to make them inspectable does not prove that they are not inspectable in principle.

Qualia are ineffable; you can't explain them using a sufficiently complex English or mathematical sentence.

This seems to be a limitation of human language bandwidth/imagination, but not fundamental to what qualia are. Consider the case of the conjoined twins Krista and Tatiana, who share some brain structure and seem to be able to "hear" each other's thoughts and see through each other's eyes.

Suppose we set up a thought experiment. Suppose that they grow up in a room without color, like Mary's room. Now knock out Krista and show Tatiana something red. Remove the red thing before Krista wakes up. Wouldn't Tatiana be able to communicate the experience of red to her sister? That's an effable quale!

And if they can do it, then in principle, so could you, with a future brain-computer interface.

Really, communicating at all is a transfer of experience. We're limited by common ground, sure. We both have to be speaking the same language, and have to have enough experience to be able to imagine the other's mental state.

Qualia are intrinsic; you can't construct a quale if you had the right set of particles.

Again, not convinced. Isn't your brain made of particles? I construct qualia all the time just by thinking about it. (It's called "imagination".) I don't see any reason in principle why this could not be done externally to the brain either.

comment by Slider · 2019-08-25T01:12:26.460Z · score: 1 (1 votes) · LW · GW

The Tatiana and Krista experiment is quite interesting, but it stretches the concept of communication to its limits. I am inclined to say that having a shared part of your consciousness is not communication, in the same way that sharing a house is not traffic. It does strike me that communication involves directed construction of thoughts, and it's easy to imagine that the scope of what this construction is capable of would be vastly smaller than what goes on in the brain in other processes. Extending the construction to new types of thoughts might be a soft border rather than a hard one. With enough verbal sentences it should in principle be possible to reconstruct an actual graphical image, but even with overtly descriptive prose this level is not really reached (I presume); it remains within the realm of sentence-like data structures.

In the example, Tatiana directs the visual cortex and Krista can just recall the representation later. But in a single-consciousness brain nothing can be made "ready"; it must be assembled by the brain itself from sensory inputs. That is, cognitive space probably has small funnels, and significant objects can't travel through them as themselves but must be chopped into pieces and reassembled after passing the tube.

comment by gilch · 2019-08-25T04:44:50.223Z · score: 2 (2 votes) · LW · GW

Let's extend the thought experiment a bit. Suppose technology is developed to separate the twins. They rely on their shared brain parts for vital functions, so where we cut nerve connections we replace them with a radio transceiver and electrode array in each twin.

Now they are communicating thoughts via a prosthesis. Is that not communication?


Maybe you already know what it is like to be a hive mind with a shared consciousness, because you are one: cutting the corpus callosum creates a split-brained patient that seems to have two different personalities that don't always agree with each other. Maybe there are some connections left, but the bandwidth has been drastically reduced. And even within hemispheres, the brain seems to be composed of yet smaller modules. Your mind is made of parts that communicate with each other and share experience, and some of it is conscious.

I think the line dividing individual persons is a soft one. A sufficiently high-bandwidth communication interface can blur that boundary, even to the point of fusing consciousness like brain hemispheres. Shared consciousness means shared qualia, even if that connection is later severed, you might still remember what it was like to be the other person. And in that way, qualia could hypothetically be communicated between individuals, or even species.

comment by Slider · 2019-08-25T13:55:21.245Z · score: 1 (1 votes) · LW · GW

If you copied my brain but made it twice as large, that copy would be as "lonely" as I would be, and this would remain so after arbitrary doublings. Single individuals can be extended in space without communicating with other individuals.

The "extended wire" thought experiement doesn't specify enough how that physical communication line is used. It's plausible that there is no "verbalization" process like there is an step to write an email if one replaces sonic communication with ip-packet communication. With huge relative distance would come speed of light delays, if one twin was on earth and another on the moon there would be a round trip latency of seconds which probably would distort how the combined brain works. (And I guess with doublign in size would need to come with proportionate slowing to have same function).

I think there is a difference between an information system being spatially extended and having two information systems interface with each other. Say that you have 2 routers or 10 routers on the same length of line. It makes sense to make a distinction that each router functions "independently", even if they have to be able to talk to each other enough that packets flow through. To the first router, the world "downline" seems very similar whether or not intermediate routers exist. I don't count an information system's internal processing as communicating, thus I don't count "thinking" as communicating. Thus the 10-router version does more communicating than the 2-router version.

I think the "verbalization" step does mean that even highbandwidth connection doesn't automatically mean qualia sharing. I am thinking of plugings that allow programming languages to share code. Even if there is a perfect 1-to-1 compatibility between the abstractions of the languages I think still each language only ever manipulates their version of that representation. Cross-using without translation would make it illdefined what would be correct function but if you do translation then it loses the qualities of the originating programming language. A C sharp integer variable will never contain a haskel integer even if a C sharp integer is constructed to represent the haskel integer. (I guess it would be possible to make a super-language that has integer variables that can contain haskel-integers and C-sharp integers but that language would not be C sharp or haskel). By being a spesific kind of cognitive architechture you are locked into certain representation types which are unescaable outside of turning into another kind ot architechture.

comment by gilch · 2019-08-25T16:49:20.251Z · score: 2 (2 votes) · LW · GW

I am assuming that the twins communicating thoughts requires an act of will like speaking does. I do have reasons for this. Watching their faces when they communicate thoughts makes it seem voluntary.

But most of what you are doing when speaking is already subconscious: One can "understand" the rules of grammar well enough to form correct sentences on nearly all attempts, and yet be unable to explain the rules to a computer program (or to a child or ESL student). There is an element of will, but it's only an element.

It may be the case that even with a high-bandwidth direct-brain interface it would take a lot of time and practice to understand another's thoughts. Humans have a common cognitive architecture by virtue of shared genes, but most of our individual connectomes are randomized and shaped by individual experience. Our internal representations may thus be highly idiosyncratic, meaning a direct interface would be ad-hoc and only work on one person. How true this is, I can only speculate without more data.

In your programming language analogy, these data types are only abstractions built on top of a more fundamental CPU architecture where the only data types are bytes. Maybe an implementation of C# could be made that uses exactly the same bit pattern for an int as Haskell does. Human neurons work pretty much the same way across individuals, and even cortical columns seem to use the same architecture.
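
As a rough, minimal sketch of the "same bits underneath" point (in Python standing in for the C#/Haskell comparison, assuming a conventional 64-bit two's-complement integer; the helper name is just for illustration):

```python
import struct

# A 64-bit signed integer is, at the machine level, just eight bytes of
# two's complement, whatever high-level "int" abstraction wraps it.
# "<q" forces a little-endian int64 layout here; the actual in-memory byte
# order on a real machine depends on the CPU.
def int64_bytes(n: int) -> bytes:
    return struct.pack("<q", n)

print(int64_bytes(42).hex())   # 2a00000000000000
print(int64_bytes(-1).hex())   # ffffffffffffffff
```

Whether two brains' internal representations line up anywhere near this neatly is exactly what's uncertain; the sketch only illustrates that shared low-level architecture is what makes translation-free sharing conceivable at all.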


I don't think the inability to communicate qualia is primarily due to the limitation of language, but due to the limitation of imagination. I can explain what a tesseract is, but that doesn't mean you can visualize it. I could give you analogies with lower dimensions. Maybe you could understand well enough to make a mental model that gives you good predictions, but you still can't visualize it. Similarly, I could explain what it's like to be a tetrachromat, how septarine and octarine are colors distinct from the others, and maybe you can develop a model good enough to make good predictions about how it would work, but again you can't visualize these colors. This failing is not on English.

comment by Slider · 2019-08-25T18:27:27.966Z · score: 2 (2 votes) · LW · GW

Sure, the difference between hearing about a tesseract and being able to visualise it is significant, but I think the difference might not be an impossibility barrier, just a matter of imaginative skill.

Having learned some echolocation, my qualia involved in hearing have changed, and that makes it seem possible to make a similar transition from a trichromat visual space into a tetrachromat visual space. The weird thing about it is that my ear receives as much information as it did before, but I just pay attention to it differently. Having deficient understanding, in the sense of getting things wrong, is an easy line to draw. But it seems at some point the understanding becomes vivid instead of theoretical.

comment by TAG · 2019-08-30T13:01:58.716Z · score: 1 (1 votes) · LW · GW

Qualia are intrinsic; you can’t construct a quale if you had the right set of particles.

I'm pretty sure that's not what "intrinsic" is supposed to mean. From "The Qualities of Qualia" by David de Leon:

Within philosophy there is a distinction, albeit a contentious one, between intrinsic and extrinsic properties. Roughly speaking “extrinsic” seems to be synonymous with “relational.” The property of being an uncle, for example, is a property which depends on (and consists of) a relation to something else, namely a niece or a nephew. Intrinsic properties, then, are those which do not depend on this kind of relation. That qualia are intrinsic means that their qualitative character can be isolated from everything else going on in the brain (or elsewhere) and is not dependent on relations to other mental states, behaviour or what have you. The idea of the independence of qualia on any such relation may well stem from the conceivability of inverted qualia: we can imagine two physically identical brains having different qualia, or even that qualia are absent from one but not the other.

comment by Slider · 2019-08-20T22:17:18.897Z · score: 1 (1 votes) · LW · GW

I find it important in philosophy to be clear on what you mean. It is one thing to explain and another to define what you mean. You might point to a yellow object and say "yellow", and somebody who misunderstood might think that you mean "roundness" by yellow. Accuracy is most important when the views are radical and talk in very different worlds. And "disproving" yellow by not being able to pick it out from ostensive differentiation is not an argumentative victory but a communicative failure.

Even if we use some other term, I think that meaning is important to have. "Phlogiston" might sneak in claims, but that is all the more reason to have terms with as little room for smuggling as possible. And we still need good terms to talk about burning. "Oxygen" literally means "acid maker", but we nowadays understand it as a term referring to an element which definitionally has very little to do with acids.

I think the starting point that generated the word refers to a genuine problem. Having qualia in category three would mean that you claim that I do not have experiences. And if qualia is a bad, loaded word for the thing to be explained, it would be good to make up a new term that refers to that. But to me qualia was just that word. A word like "dark matter" might experience similar "hijack pressure" by having wild claims thrown around about it. And there, having things like "warm dark matter" and "WIMP dark matter" makes the classification finer, letting the conceptual analysis proceed. But the requirements of clear thinking are different from tradition preservation. If you say that "warm dark matter" can't be the answer, the question of dark matter still stands. Even if you successfully argue that "qualia" can't be an attractive concept, the issue of me not being a p-zombie still remains, and it would be expected that some theoretical bending over backwards would happen.

comment by TAG · 2019-08-20T09:28:19.740Z · score: 2 (2 votes) · LW · GW

If we are able to explain why you believe in, and talk about qualia without referring to qualia whatsoever in our explanation, then we should reject the existence of qualia as a hypothesis

That argument has an inverse: "If we are able to explain why you believe in, and talk about, an external world without referring to an external world whatsoever in our explanation, then we should reject the existence of an external world as a hypothesis".

People want reductive explanation to be unidirectional, so that you have an A and a B, and clearly it is the B which is redundant and can be replaced with A. But not all explanations work in that convenient way... sometimes A and B are mutually redundant, in the sense that you don't need both.

The moral of the story being to look for the overall best explanation, not just eliminate redundancy.

comment by Chris_Leong · 2019-08-20T14:38:23.308Z · score: 1 (2 votes) · LW · GW

It's a strong argument, but there are strong arguments on the other side as well.

comment by Matthew Barnett (matthew-barnett) · 2019-08-23T00:39:53.860Z · score: 2 (2 votes) · LW · GW

"Immortality is cool and all, but our universe is going to run down from entropy eventually"

I consider this argument wrong for two reasons. The first is the obvious reason, which is that even if immortality is impossible, it's still better to live for a long time.

The second reason why I think this argument is wrong is because I'm currently convinced that literal physical immortality is possible in our universe. Usually when I say this out loud I get an audible "what" or something to that effect, but I'm not kidding.

It's going to be hard to explain my intuitions for why I think real immortality is possible, so bear with me. First, this is what I'm not saying:

  • I'm not saying that we can outlast the heat death of the universe somehow
  • I'm not saying that we just need to shift our conception of immortality to be something like, "We live in the hearts of our countrymen" or anything like that.
  • I'm not saying that I have a specific plan for how to become immortal personally, and
  • I'm not saying that my proposal has no flaws whatsoever and that this is a valid line of research to be conducting at the moment.

So what am I saying?

A typical model of our life as humans is that we are something like a worm in 4 dimensional space. On one side of the worm there's our birth, and on the other side of the worm is our untimely death. We 'live through' this worm, and that is our life. The length of our life is measured by considering the length of the worm in 4 dimensional space, measured just like a yardstick.

Now just change the perspective a little bit. If we could somehow abandon our current way of living, then maybe we can alter the geometry of this worm so that we are immortal. Consider: a circle has no starting point and no end. If someone could somehow 'live through' a circle, then their life would consist of an eternal loop through experiences, repeating endlessly.

The idea is that we somehow construct a physical manifestation of this immortality circle. I think of it like an actual loop in 4 dimensional space because it's difficult to visualize without an analogy. A superintelligence could perhaps predict what type of actions would be necessary to construct this immortal loop. And once it is constructed, it'll be there forever.

From an outside view in our 3d mind's eye, the construction of this loop would look very strange. It could look like something popping into existence suddenly and getting larger, and then suddenly popping out of existence. I don't really know; that's just the intuition.

What matters is that within this loop someone will be living their life on repeat. True Déjà vu. Each moment they live is in their future, and in their past. There are no new experiences and no novelty, but the superintelligence can construct it so that this part is not unenjoyable. There would be no right answer to the question "how old are you." And in my view, it is perfectly valid to say that this person is truly, actually immortal.

Perhaps someone who valued immortality would want one of these loops to be constructed for themselves. Perhaps for some reason constructing one of these things is impossible in our universe (though I suspect that it's not). There are anthropic reasons that I have considered for why constructing it might not be worth it... but that would be too much to go into for this shortform post.

To close, I currently see no knockdown reasons to believe that this sort of scheme is impossible.

comment by Raemon · 2019-08-24T19:50:05.276Z · score: 3 (1 votes) · LW · GW

I don't know of physics rules ruling this out. However, I suspect this doesn't resolve the problems that the people I know who care most about immortality are worried about. (I'm not sure – I haven't heard them express clear preferences about what exactly they prefer on the billions/trillions year timescale. But they seem more concerned about running out of the ability to have new experiences than about not-wanting-to-die in particular.)

My impression is many of the people who care about this sort of thing also tend to think that if you have multiple instances of the exact same thing, it just counts as a single instance. (Or, something more complicated about many worlds and increasing your measure)

comment by Matthew Barnett (matthew-barnett) · 2019-08-24T20:34:36.913Z · score: 1 (1 votes) · LW · GW

I agree with the objection. :) Personally I'm not sure whether I'd want to be stuck in a loop of experiences repeating over and over forever.

However, even if we considered "true" immortality, repeat experiences are inevitable simply because there's a finite number of possible experiences. So, we'd have to start repeating things eventually.

comment by gilch · 2019-08-24T20:08:35.308Z · score: 2 (2 votes) · LW · GW

In one scene in Egan's Permutation City, the Peer character experienced "infinity" when he set himself up in an infinite loop such that his later experience matched up perfectly with the start of the loop (walking down the side of an infinitely tall building, if I recall). But he also experienced the loop ending.

comment by gilch · 2019-08-24T19:49:03.134Z · score: 2 (2 votes) · LW · GW

Virtual particles "pop into existence" in matter/antimatter pairs and then "pop out" as they annihilate each other all the time. In one interpretation, an electron-positron pair (for example) can be thought of as one electron that loops around and goes back in time. Due to CPT symmetry, this backward path looks like a positron. https://www.youtube.com/watch?v=9dqtW9MslFk

comment by gilch · 2019-08-24T19:32:13.886Z · score: 2 (5 votes) · LW · GW

It sounds like you're talking about time travel. These "worms" are called "worldlines". Spacetime is not simply R^4. You can rotate in the fourth dimension--this is just acceleration. But you can't accelerate enough to turn around and bite your own tail because rotations in the fourth dimension are hyperbolic rather than circular. You can't exceed or even reach light speed. There are solutions to General Relativity that contain closed timelike curves, but it's not clear if they correspond to anything physically realizable.
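
To make the "hyperbolic rather than circular" point concrete, here is the standard 1+1-dimensional comparison (a sketch in textbook special-relativity notation, nothing specific to any exotic GR solution):

```latex
% Euclidean rotation: periodic, can turn a direction all the way around.
\begin{pmatrix} x' \\ y' \end{pmatrix}
= \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}

% Lorentz boost: a hyperbolic "rotation" with rapidity \varphi, where v/c = \tanh\varphi.
\begin{pmatrix} ct' \\ x' \end{pmatrix}
= \begin{pmatrix} \cosh\varphi & -\sinh\varphi \\ -\sinh\varphi & \cosh\varphi \end{pmatrix}
  \begin{pmatrix} ct \\ x \end{pmatrix}

% Boosts compose by adding rapidities, and \tanh of any finite sum stays below 1:
\varphi_{\text{total}} = \varphi_1 + \varphi_2, \qquad
\frac{v_{\text{total}}}{c} = \tanh(\varphi_1 + \varphi_2) < 1
```

Since tanh never reaches 1 for any finite rapidity, no sequence of boosts tips a future-directed timelike direction past the light cone, which is the formal version of not being able to turn around and bite your own tail.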

comment by Slider · 2019-08-24T21:48:48.501Z · score: 1 (1 votes) · LW · GW

I have a previous high-implication uncertainty about this (that would be a crux?). "You can't accelerate enough to turn around" seems false to me. The mathematical rotation seems like it ought to exist. The previous reasons I thought such a mathematical rotation would be impossible, I now have significantly less faith in. If I draw a unit-sphere analog in spacetime, a visual observation from the spacetime diagram drawn on Euclidean paper is not sufficient to conclude that the future cone is far from the past cone. And thinking of a sphere as "everything within distance r", it would seem it should be continuous and simply connected in most instances. I think there should also exist a transformation that, when repeated enough times, returns to the original configuration. And I find it surprising that a boost-like transformation would fail to be like that if it is a rotation analog.

I have started to believe that the standard reasoning for why you can't go faster than light relies on a kind of faulty logic. With normal Euclidean geometry it would go like this: there is a maximum angle you can reach by increasing the y-coordinate, and slope is just the ratio of x to y, so at that maximum y the maximum slope is reached, so the maximum angle you can have is 90 degrees. So if you try to go at 100 degrees you have a lesser y and are actually going slower. And in a way 90 degrees is kind of the maximum amount you can point in another direction. But normally degrees go up to 180 or 360 degrees.

On the relativity side, c is the maximum ratio, but that is for coordinate time. If somebody's proper time were to start pointing in a direction that projected negatively onto the coordinate-time axis, the comparison between x per coordinate time and x per proper time would become significant.

There is also a trajectory which seems to be timelike in all segments: A=(0,0,0,0), (2,1,0,0), B=(4,2,0,0), (2,3,0,0), C=(0,4,0,0), (2,5,0,0), D=(4,6,0,0). It would seem an awful lot like the "corner" A B C would be of equal magnitude but opposite sign from B C D. Now, I get why physically such a trajectory would be challenging. But from a mathematical point of view it is hard to understand why it would be ill-defined. It would also be very strange if there were no boost you can make at B to go from direction AB to direction BC. I get why you can't rotate from AB to BD (you can't rotate a timelike distance into a spacelike distance if rotation preserves length).

I also kind of get why you would need infinite energy to make such "impossibly sharp" turns. But as energy is the conserved charge of time translation, the definition of time might depend on which time you choose to derive it from. If you were to gain energy from an external source, it would have to be a tachyon or something going backwards in time (which are either impossible or hard to produce). But if you had a thruster with you, with fuel, the "proper time energy" might behave differently. That is, if you are going at a significant fraction of c and the whole universe is frozen and whizzing by, you should still be able to fire your rockets according to your time (1 second of your engines might take the entire age of the universe for external observers, but does that prevent things happening from your perspective?). If acceleration "turns your time direction" rather than "increases displacement per spent second", then at some finite amount of experienced acceleration you would come full circle, or at least far enough that you are now going in the negative of the direction that you started in.

comment by Matthew Barnett (matthew-barnett) · 2019-08-24T20:44:15.788Z · score: 1 (1 votes) · LW · GW

I agree I would not be able to actually accomplish time travel. The point is whether we could construct some object in Minkowski space (or whatever General Relativity uses, I'm not a physicist) that we considered to be loop-like. I don't think it's worth my time to figure out whether this is really possible, but I suspect that something like it may be.

Edit: I want to say that I do not have an intuition for physics or spacetime at all. My main reason for thinking this is possible is that I think you might be able to do this even in R^3.

comment by TAG · 2019-08-23T09:38:18.496Z · score: 1 (1 votes) · LW · GW

Nietzsche got there first: https://en.m.wikipedia.org/wiki/Eternal_return

comment by Matthew Barnett (matthew-barnett) · 2019-08-28T22:22:21.006Z · score: 1 (1 votes) · LW · GW

I have discovered recently that while I am generally tired and groggy in the morning, I am well rested and happy after a nap. I am unsure if this matches other people's experiences, and haven't explored much research. Still, I think this is interesting to think about fully.

What is the best way to apply this knowledge? I am considering purposely sabotaging my sleep so that I am tired enough to take a nap by noon, which would refresh me for the entire day. But this plan may have some significant drawbacks, including being excessively tired for a few hours in the morning.

comment by Raemon · 2019-08-28T22:27:48.863Z · score: 3 (1 votes) · LW · GW

I'm assuming from context you're universally groggy in the morning no matter how much sleep you get? (i.e. you've tried the obvious thing of just 'sleep more'?)

comment by Matthew Barnett (matthew-barnett) · 2019-08-28T22:37:24.398Z · score: 1 (1 votes) · LW · GW

Pretty much, yes. Even with 10+ hours of sleep I am not as refreshed as I am after a nap. It's weird, but I think it's a real effect.

comment by eigen · 2019-08-29T00:27:48.854Z · score: 1 (1 votes) · LW · GW

Two easy things you can try to feel less groggy in the morning are:

  • Drinking a full glass of water as soon as you wake up.
  • Listening to music or a podcast (bluetooth earphones work great here!). Music does the trick for me, although I'm usually not in the mood and I prefer a podcast.

About taking naps, while it seems to work for some people, I'm generally against it since it usually impairs my circadian clock greatly (I cannot keep consistent times, and it meddles with my schedule too much).

At nights, I take melatonin [LW · GW] and it seems to have been of great help to keep consistent times at which I go to sleep (taking it with L-Theanine seems to be better for me somehow). Besides that, I do pay a lot of attention to other zeitgebers such as exercise, eating behavior, light exposure, and coffee. This is to say—regulating your circadian clock may be what you're looking for.

A link of interest is gwern's post about his vitamin D experiment, along with his other posts about sleep.