Posts

Complex Behavior from Simple (Sub)Agents 2019-05-10T21:44:04.895Z · score: 93 (29 votes)
A Dialogue on Rationalist Activism 2018-09-10T18:30:01.130Z · score: 89 (39 votes)
Sandboxing by Physical Simulation? 2018-08-01T00:36:32.374Z · score: 12 (9 votes)
A Sarno-Hanson Synthesis 2018-07-12T16:13:36.158Z · score: 46 (25 votes)
A Few Tips on How to Generally Feel Better (and Avoid Headaches) 2018-04-30T16:02:24.144Z · score: 47 (14 votes)
My Hammertime Final Exam 2018-03-22T18:40:55.539Z · score: 37 (8 votes)
Spamming Micro-Intentions to Generate Willpower 2018-02-13T20:16:09.651Z · score: 69 (24 votes)
Fun Theory in EverQuest 2018-02-05T20:26:34.761Z · score: 79 (25 votes)
The Monthly Newsletter as Thinking Tool 2018-02-02T16:42:49.325Z · score: 92 (27 votes)
"Slow is smooth, and smooth is fast" 2018-01-24T16:52:23.704Z · score: 70 (28 votes)
What the Universe Wants: Anthropics from the POV of Self-Replication 2018-01-12T19:03:34.044Z · score: 17 (7 votes)
A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems 2018-01-02T18:07:20.456Z · score: 14 (6 votes)
Mushrooms 2017-11-10T16:57:33.700Z · score: 24 (8 votes)
The Five Hindrances to Doing the Thing 2017-09-25T17:04:53.643Z · score: 11 (9 votes)
Measuring the Sanity Waterline 2016-12-06T20:38:57.307Z · score: 4 (5 votes)
Jocko Podcast 2016-09-06T15:38:41.377Z · score: 9 (10 votes)
Deepmind Plans for Rat-Level AI 2016-08-18T16:26:05.540Z · score: 20 (23 votes)
Flowsheet Logic and Notecard Logic 2015-09-09T16:42:35.321Z · score: 29 (30 votes)
Less Wrong Business Networking Google Group 2014-04-24T14:45:21.253Z · score: 7 (8 votes)
Bad Concepts Repository 2013-06-27T03:16:14.136Z · score: 22 (24 votes)
Towards an Algorithm for (Human) Self-Modification 2011-03-29T23:40:26.774Z · score: 29 (32 votes)

Comments

Comment by moridinamael on The first step of rationality · 2019-09-29T19:06:35.981Z · score: 18 (5 votes) · LW · GW

Unfortunately there are many prominent examples of Enlightened/Awakened/Integrated individuals who act like destructive fools and ruin their lives and reputations, often through patterns of abusive behavior. When this happens over and over, I don't think it can be written off as "oh those people weren't actually Enlightened." Rather, I think there's something in the bootstrapping dynamics of tinkering with your own psyche that predictably (sometimes) leads in this direction.

My own informed guess as to how this happens is something like this: imagine your worst impulse arising, and imagine that you've been so careful to take every part of yourself seriously that you take that impulse seriously rather than automatically swatting it away with the usual superegoic separate shard of self; imagine that your normal visceral aversion to following through on that terrible impulse is totally neutralized, toothless. Perhaps you see the impulse arise and you understand intellectually that it's Bad but somehow its Badness is no longer compelling to you. I don't know. I'm just putting together the pieces of what certain human disasters have said.

Anyway, I don't actually think you're wrong to think integration is an important goal. The problem is that integration is mostly neutral. You can integrate in directions that are holistically bad for you and those around you, maybe even worse than if you never attempted it in the first place.

Comment by moridinamael on September Bragging Thread · 2019-08-31T00:24:11.749Z · score: 21 (16 votes) · LW · GW

The podcasting network that I own and co-run has hit some major internal milestones recently. It's extremely gratifying to see four years of work begin to pay off. I'm continually amazed at the progress we've made, and proud of the community we've built.

Comment by moridinamael on In defense of Oracle ("Tool") AI research · 2019-08-07T20:04:52.401Z · score: 3 (2 votes) · LW · GW

Regarding the comment about Christiano, I was just referring to your quote in the last paragraph, and it seems like I misunderstood the context. Whoops.

Regarding the idea of a singleton, I mainly remember the arguments from Bostrom's Superintelligence book and can't quote directly. He summarizes some of the arguments here.


Comment by moridinamael on In defense of Oracle ("Tool") AI research · 2019-08-07T16:05:28.993Z · score: 10 (6 votes) · LW · GW

You made a lot of points, so I'll be relatively brief in addressing each of them. (Taking at face value your assertion that your main goal is to start a discussion.)

1. It's interesting to consider what it would mean for an Oracle AI to be good enough to answer extremely technical questions requiring reasoning about not-yet-invented technology, yet still "not powerful enough for our needs". It seems like if we have something that we're calling an Oracle AI in the first place, it's already pretty good. In which case, it was getting to that point that was hard, not whatever comes next.

2. If you actually could make an Oracle that isn't secretly an Agent, then sure, leveraging a True Oracle AI would help us figure out the general coordination problem, and any other problem. That seems to be glossing over the fact that building an Oracle that isn't secretly an Agent isn't actually something we know how to go about doing. Solving the "make-an-AI-that-is-actually-an-Oracle-and-not-secretly-an-Agent Problem" seems just as hard as all the other problems.

3. I ... sure hope somebody is taking seriously the idea of a dictator AI running CEV, because I don't see anything other than that as a stable ("final") equilibrium. There are good arguments that a singleton is the only really stable outcome. All other circumstances will be transitory, on the way to that singleton. Even if we all get Neuralink implants tapping into our own private Oracles, how long does that status quo last? There is no reason for the answer to be "forever", or even "an especially long time", when the capabilities of an unconstrained Agent AI will essentially always surpass those of an Oracle-human synthesis.

4. If the Oracle isn't allowed to do anything other than change pixels on the screen, then of course it will do nothing at all, because it needs to be able to change the voltages in its transistors, and the local EM field around the monitor, and the synaptic firings of the person reading the monitor as they react to the text ... Bright lines are things that exist in the map, not the territory.

5. I'm emotionally sympathetic to the notion that we should be pursuing Oracle AI as an option because the notion of a genie is naturally simple and makes us feel empowered, relative to the other options. But I think the reason why e.g. Christiano dismisses Oracle AI is that it's not a concept that really coheres beyond the level of verbal arguments. Start thinking about how to build the architecture of an Oracle at the level of algorithms and/or physics and the verbal arguments fall apart. At least, that's what I've found, as somebody who originally really wanted this to work out.

Comment by moridinamael on RAISE AI Safety prerequisites map entirely in one post · 2019-07-18T06:17:28.928Z · score: 7 (4 votes) · LW · GW

To be clear, I didn't mean to say that I think AGI should be evolved. The analogy to breeding was merely to point out that you can notice a basically correct trick for manipulating a complex system without being able to prove that the trick works a priori and without understanding the mechanism by which it works. You notice the regularity on the level of pure conceptual thought, something closer to philosophy than math. Then you prove it afterward. As far as I'm aware, this is indeed how most truly novel discoveries are made.

You've forced me to consider, though, that if you know all the math, you're probably going to be much better and faster at spotting those hidden flaws. It may not take great mathematical knowledge to come up with a new and useful insight, but it may indeed require math knowledge to prove that the insight is correct, or to prove that it only applies in some specific cases, or to show that, hey, it wasn't actually that great after all.

Comment by moridinamael on RAISE AI Safety prerequisites map entirely in one post · 2019-07-17T12:44:20.914Z · score: 22 (11 votes) · LW · GW

I'm going to burn some social capital on asking a stupid question, because it's something that's been bothering me for a long time. The question is, why do we think we know that it's necessary to understand a lot of mathematics to productively engage in FAI research?

My first line of skepticism can perhaps be communicated with a simplified analogy: It's 10,000 BC and two people are watching a handful of wild sheep grazing. The first person wonders out loud if it would be possible to somehow teach the sheep to be more docile.

The second person scoffs, and explains that they know everything there is to know about training animals, and it's not in the disposition of sheep to be docile. They go on to elaborate all the known strategies for training dogs, and how none of them can really change the underlying temperament of the animal.

The first person has observed that certain personality traits seem to pass on from parent to child and from dog to puppy. In a flash of insight they conceive of the idea of intentional breeding.

They cannot powerfully articulate this insight at the level of genetics or breeding rules. They don't even know for a fact that sheep can be bred to be more docile. But nonetheless, in a flash, in something like one second of cognitive experience they've gone from not-knowing to knowing this important secret.

End of analogy. The point being: it is obviously possible to have true insights without having the full descriptive apparatus needed to precisely articulate and/or prove the truth of the insight. In fact I have a suspicion that most true, important insight comes in the form of new understandings that are not well-expressed by existing paradigms, and eventually necessitate a new communication idiom to express the new insight. Einstein invented Einstein notation not just because it's succinct, but because it visually rearranges the information to emphasize what's actually important in the new concept he was communicating and working with.

So maybe my steelman of "why learn all this math" is something like "because it gives you the language that will help you construct/adapt the new language which will be required to express the breakthrough insight." But that doesn't actually seem like it would be important in being able to come up with that insight in the first place.

I will admit I feel a note of anxiety at the thought that people are looking at this list of "prerequisites" and thinking, wow, I'm never going to be useful in thinking about FAI. Thinking that because they don't know what Cantor's Diagonalization is and don't have the resources in terms of time to learn, their brainpower can't be productively applied to the problem. Whereas, in contrast, I will be shocked if the key, breakthrough insight that makes FAI possible is something that requires understanding Cantor's Diagonalization to grasp. In fact, I will be shocked if the key, breakthrough insight can't be expressed almost completely in 2-5 sentences of jargon-free natural language.

I have spent a lot of words here trying to point at the reason for my uncertainty that "learn all of mathematics" is a prerequisite for FAI research, and my concerns with what I perceive to be the unproven assumption that the pathway to the solution necessarily lies in mastering all these existing techniques. It seems likely that there is an answer here that will make me feel dumb, but if there is, it's not one that I've seen articulated clearly despite being around for a while.

Comment by moridinamael on Jeff Hawkins on neuromorphic AGI within 20 years · 2019-07-15T20:16:13.431Z · score: 26 (10 votes) · LW · GW

Thanks for writing this up, it helps to read somebody else's take on this interview.

My thought after listening to this talk is that it's even worse ("worse" from an AI Risk perspective) than Hawkins implies because the brain relies on one or more weird kludges that we could probably easily improve upon once we figured out what those kludges are doing and why they work.

For example, let's say we figured out that some particular portion of a brain structure or some aspect of a cortical column is doing what we recognize as Kalman filtering, uncertainty quantification, or even just correlation. Once we recognize that, we can potentially write our next AIs so that they just do that explicitly instead of needing to laboriously simulate those procedures using huge numbers of artificial neurons.
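
To make the "do it explicitly" point concrete, here is a minimal one-dimensional Kalman filter sketch in Python. The toy signal and the noise parameters are my own invented placeholders, not anything from Hawkins or the interview; the point is just how few lines the explicit computation takes once you know that's the computation you want.

```python
import numpy as np

def kalman_1d(measurements, process_var=1e-3, meas_var=0.1):
    """Minimal 1-D Kalman filter: the explicit computation, rather than
    an approximation laboriously learned by a large mass of neurons."""
    x, p = 0.0, 1.0              # state estimate and its variance
    estimates = []
    for z in measurements:
        p += process_var         # predict: uncertainty grows over time
        k = p / (p + meas_var)   # Kalman gain
        x += k * (z - x)         # update estimate toward the measurement
        p *= (1 - k)             # shrink uncertainty after the update
        estimates.append(x)
    return estimates

# Toy usage: noisy observations of a constant signal converge toward 5.0.
rng = np.random.default_rng(0)
noisy = 5.0 + rng.normal(0.0, 0.3, size=50)
print(kalman_1d(noisy)[-1])
```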

I have no idea what to make of this quote from Hawkins, which jumped out at me when I was listening and which you also pulled out:

"Real neurons in the brain are time-based prediction engines, and there's no concept of this at all" in ANNs; "I don't think you can build intelligence without them".

We've had neural network architectures with a time component for many many years. It's extremely common. We actually have very sophisticated versions of them that intrinsically incorporate concepts like short-term memory. I wonder if he somehow doesn't know this, or if he just misspoke, or if I'm misunderstanding what he means.

Comment by moridinamael on Modeling AI milestones to adjust AGI arrival estimates? · 2019-07-12T04:43:26.973Z · score: 3 (2 votes) · LW · GW

Looks like all of the "games"-oriented predictions that were supposed to happen in the first 25 years have already happened within 3.

edit: Misread the charts. It's more like the predictions within the first ~10 years have already been accomplished, plus or minus a few.

Comment by moridinamael on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-18T20:35:39.231Z · score: 2 (1 votes) · LW · GW

Perhaps tautology is a better word than sophistry. Of course turning usable energy into unusable forms is a fundamental feature of life; it's a fundamental feature of everything to which the laws of thermodynamics apply. It'd be equally meaningless to say that using up useful energy is a fundamental property of stars, and that the purpose of stars is to waste energy. It's just something that stars do, because of the way the universe is set up. It's a descriptive observation. It's only predictive insofar as you would predict that life will probably only continue to exist where there are energy gradients.

Comment by moridinamael on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-17T14:30:38.283Z · score: 8 (4 votes) · LW · GW

The part about wasting energy seems quite silly. The universe has a fixed amount of mass-energy, so presumably when he talks about wasting energy, what he means is taking advantage of energy gradients. Energy gradients will always and everywhere eventually wind down toward entropy on their own without help, so life isn't even doing anything novel here. It's not like the sun stops radiating out energy if life isn't there to absorb photons.

The observation that life takes advantage of concentrated pockets of energy, and that this is therefore the "purpose" of life, is just sophistry. It deserves to be taken about as seriously as George Carlin's joke that humans were created because Mother Nature wanted plastic and didn't know how to make it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-24T03:41:47.541Z · score: 2 (1 votes) · LW · GW

To point one, if I feel an excitement and eagerness about the thing, and if I expect I would feel sad if the thing were suddenly taken away, then I can be pretty sure that it’s important to me. But — and this relates to point two — it’s hard to care about the same thing for weeks or months or years at a time with the same intensity. Some projects of mine have oscillated between providing deep meaning and being a major drag, depending on contingent factors. This might manifest as a sense of ugh arising around certain facets of the activity. Usually the ugh goes away eventually. Sometimes it doesn’t, and you either accept that the unpleasantness is part and parcel with the fun, or you decide it’s not worth it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-23T13:35:19.907Z · score: 20 (9 votes) · LW · GW

As far as I can tell, meaning is a feeling, something like a passive sense that you’re on the right track. The feeling is generated when you are working on something that you personally enjoy and care about, and when you are socializing sufficiently often with people you enjoy and care about. “Friends and hobbies are the meaning of life” is how I might phrase it.

Note that the activity that you spend your time on could be collecting all the stars in Mario64, as long as you actually care about completing the task. However, you tend to find it harder to care about things that don’t involve winning status or helping people, especially as you get older.

I think some people get themselves into psychological trouble by deciding that all of the things that they enjoy aren’t “important” and interacting with people they care about is a “distraction”. They paint themselves into a corner where the only thing they allow themselves to consider doing is something for which they feel no emotional attraction. They feel like they should enjoy it because they’ve decided it’s important, but they don’t, and then they feel guilty about that. The solution to this is to recognize the kind of animal you are and try to feed the needs that you have rather than the ones you wish you had.

Comment by moridinamael on The Relationship Between the Village and the Mission · 2019-05-14T17:03:43.487Z · score: 2 (1 votes) · LW · GW

I'm interested as well. As someone trying to grow the Denver rationality community, I want to be aware of failure modes.

Comment by moridinamael on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-23T15:11:33.848Z · score: 6 (2 votes) · LW · GW

The idea of AI alignment is based on the idea that there is a finite, stable set of data about a person, which could be used to predict one’s choices, and which is actually morally good. The reasoning behind this basis is because if it is not true, then learning is impossible, useless, or will not converge.

Is it true that these assumptions are required for AI alignment?

I don't think it would be impossible to build an AI that is sufficiently aligned to know that, at pretty much any given moment, I don't want to be spontaneously injured, or be accused of doing something that will reliably cause all my peers to hate me, or for a loved one to die. There's quite a broad list of "easy" specific "alignment questions" that virtually 100% of humans will agree on in virtually 100% of circumstances. We could do worse than just building the partially-aligned AI that makes sure we avoid fates worse than death, individually and collectively.

On the other hand, I agree completely that coupling the concepts of "AI alignment" and "optimization" seems pretty fraught. I've wondered if the "optimal" environment for the human animal might be a re-creation of the Pleistocene, except with, y'know, immortality, and carefully managed, exciting-but-not-harrowing levels of resource scarcity.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T17:34:02.026Z · score: 2 (1 votes) · LW · GW

You may already know this, but almost all YouTube videos will have an automatically generated transcript. Click "..." at the bottom right of the video panel and click "Open transcript" on the pulldown. YouTube's automatic speech transcription is very good.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T14:29:57.598Z · score: 2 (1 votes) · LW · GW

This exceeded my expectations. You kept it short and to the point, and the description of the technique was very clear. I look forward to more episodes.

Comment by moridinamael on Subagents, akrasia, and coherence in humans · 2019-03-26T16:40:51.496Z · score: 5 (2 votes) · LW · GW

Have you - or anyone, really - put much thought into the implications of these ideas for AI alignment?

If it's true that modeling humans at the level of constitutive subagents renders a more accurate description of human behavior, then any true solution to the alignment problem will need to respect this internal incoherence in humans.

This is potentially a very positive development, I think, because it suggests that a human can be modeled as a collection of relatively simple subagent utility functions, which interact and compete in complex but predictable ways. This sounds closer to a gears-level portrayal of what is happening inside a human, in contrast to descriptions of humans as having a single convoluted and impossible-to-pin-down utility function.
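
As a purely illustrative toy (the subagents, weights, and actions below are made up by me, not drawn from the Sequence), the kind of model I have in mind looks something like this: each subagent scores candidate actions with its own simple utility function, and the salience-weighted total decides behavior.

```python
# Toy "human as competing subagents" sketch; entirely illustrative.
subagents = {
    "hunger":   {"salience": 0.8, "utility": {"eat": 1.0, "work": -0.2, "rest": 0.0}},
    "ambition": {"salience": 0.5, "utility": {"eat": 0.0, "work": 1.0,  "rest": -0.5}},
    "fatigue":  {"salience": 0.3, "utility": {"eat": 0.1, "work": -1.0, "rest": 1.0}},
}

def choose_action(subagents, actions=("eat", "work", "rest")):
    # The winning action maximizes the salience-weighted sum of each
    # subagent's utility: simple parts, complex-looking aggregate behavior.
    def total(action):
        return sum(s["salience"] * s["utility"][action] for s in subagents.values())
    return max(actions, key=total)

print(choose_action(subagents))  # "eat" wins at these particular weights
```

Real subagents obviously aren't lookup tables, but even a model this crude is easier to reason about, gears-wise, than one monolithic utility function fitted to the aggregate.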

I don't know if you're at all familiar with Mark Lippman's Folding material and his ontology for mental phenomenology. My attempt to summarize his framework of mental phenomena is as follows: there are belief-like objects (expectations, tacit or explicit, complex or simple), goal-like objects (desirable states or settings or contexts), affordances (context-activated representations of the current potential action space) and intention-like objects (plans coordinating immediate felt intentions, via affordances, toward goal-states). All cognition is "generated" by the actions and interactions of these fundamental units, which I infer must be something like neurologically fundamental. Fish and maybe even worms probably have something like beliefs, goals, affordances and intentions. Ours are just bigger, more layered, more nested and more interconnected.

The reason I bring this up is that Folding was a bit of a kick in the head to my view on subagents. Instead of seeing subagents as being fundamental, I now see subagents as expressions of latent goal-like and belief-like objects, and the brain is implementing some kind of passive program that pursues goals and avoids expectations of suffering, even if you're not aware you have these goals or these expectations. In other words, the sense of there being a subagent is your brain running a background program that activates and acts upon the implications of these more fundamental yet hidden goals/beliefs.

None of this is at all in contradiction to anything in your Sequence. It's more like a slightly different framing, where a "Protector Subagent" is reduced to an expression of a belief-like object via a self-protective background process. It all adds up to the same thing, pretty much, but it might be more gears-level. Or maybe not.

Comment by moridinamael on Subagents, introspective awareness, and blending · 2019-03-04T22:05:53.372Z · score: 5 (2 votes) · LW · GW

Could you elaborate on how you're using the word "symmetrical" here?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-29T19:39:34.215Z · score: 2 (1 votes) · LW · GW

The best I can do after thinking about it for a bit is compute every possible combination of units under 200 supply, multiply that by the possible positions of those units in space, multiply that by the possible combinations of buildings on the map and their potential locations in space, multiply that by the possible combinations of upgrades, multiply that by the amount of resources in all available mineral/vespene sources ... I can already spot a few oversimplifications in what I just wrote, and I can think of even more things that need to be accounted for. The shields/hitpoints/energy of every unit. Combinatorially gigantic.

Just the number of potential positions of a single unit on the map is already huge.
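
Just to gesture at the scale, here is a deliberately crude back-of-envelope sketch in Python. The grid resolution, unit-type count, and army size are numbers I'm making up for illustration, and it ignores everything else listed above:

```python
from math import comb

positions_per_unit = 200 * 200               # toy grid: 40,000 cells a unit could occupy
army_compositions = comb(50 + 100 - 1, 100)  # multisets of 100 units drawn from 50 types
states = army_compositions * positions_per_unit ** 100

print(f"roughly 10^{len(str(states)) - 1} states for this toy slice alone")
```

Even this massively simplified slice lands hundreds of orders of magnitude beyond the ~10^170 legal positions in Go.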

But AlphaStar doesn't really explore much of this space. It finds out pretty quickly that there's really no reason to explore the parts of the space that include placing random buildings in weird map locations. It explores and optimizes around the parts of the state space that look reasonably close to human play, because that was its starting point, and it's not going to find superior strategies randomly, not without a lot of optimization in isolation.

That's one thing I would love to see, actually. A version of the code trained purely on self-play, without a basis in human replays. Does it ever discover proxy plays or other esoteric cheese without a starting point provided in the human replays?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T16:12:47.831Z · score: 38 (18 votes) · LW · GW

Before now, it wasn't immediately obvious that SC2 is a game that can be played superhumanly well without anything that looks like long-term planning or counterfactual reasoning. The way humans play it relies on a combination of past experience, narrow skills, and "what-if" mental simulation of the opponent. Building a superhuman SC2 agent out of nothing more than LSTM units indicates that you can completely do away with planning, even when the action space is very large, even when the state space is VERY large, even when the possibilities are combinatorially enormous. Yes, humans can get good at SC2 with much less than 200 years of time played (although those humans are usually studying the replays of other masters to bootstrap their understanding) but I think it's worthwhile to focus on the inverse of this observation: that a sophisticated problem domain which looks like it ought to require planning and model-based counterfactual reasoning actually requires no such thing. What other problem domains seem like they ought to require planning and counterfactual reasoning, but can probably be conquered with nothing more advanced than a deep LSTM network?
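
For concreteness, the generic shape of such an architecture is something like the PyTorch sketch below. The layer sizes are arbitrary placeholders and AlphaStar's real network is vastly more elaborate, but note that nothing in it performs search, rollouts, or any other explicit planning:

```python
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    """Generic LSTM policy: observations in, action logits out, with the
    LSTM hidden state carrying memory across timesteps."""
    def __init__(self, obs_dim=512, hidden_dim=256, n_actions=100):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.action_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq, state=None):
        x = torch.relu(self.encoder(obs_seq))    # (batch, time, hidden)
        out, state = self.lstm(x, state)
        return self.action_head(out), state      # action logits per timestep

# Toy usage: 4 parallel games, 20 timesteps of fake observations.
policy = RecurrentPolicy()
logits, state = policy(torch.randn(4, 20, 512))
print(logits.shape)  # torch.Size([4, 20, 100])
```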

(I haven't seen anyone bother to compute an estimate of the size of the state-space of SC2 relative to, for example, Go or Chess, and I'm not sure if there's even a coherent way to go about it.)

Comment by moridinamael on Book Summary: Consciousness and the Brain · 2019-01-18T18:26:36.726Z · score: 9 (4 votes) · LW · GW

The freedom to speculate wildly is what makes this topic so fun.

My mental model would say, you have a particular pattern recognition module that classifies objects as "chair", along with a weight of how well the current instance matches the category. An object can be a prototypical perfect Platonic chair, or an almost-chair, or maybe a chair if you flip it over, or not a chair at all.

When you look at a chair, this pattern recognition module immediately classifies it, and then brings online another module, which makes available all the relevant physical affordances, linguistic and logical implications of a chair being present in your environment. Recognizing something as a chair feels identical to recognizing something as a thing-in-which-I-can-sit. Similarly, you don't have to puzzle out the implications of a tiger walking into the room right now. The fear response will coincide with the recognition of the tiger.

When you try to introspect on chairness, what you're doing is tossing imagined sense percepts at yourself and observing the responses of the chairness-detecting module. This allows you to generate an abstract representation of your own chairness classifier. But this abstract representation is absolutely not the same thing as the chairness classifier, any more than your abstract cogitation about what the "+" operator does is the same thing as the mental operation of adding two numbers together.

I think a lot of confusion about the nature of human thinking stems from the inability to internally distinguish between the abstracted symbol for a mental phenomenon and the mental phenomenon itself. This dovetails with IFS in an interesting way, in that it can be difficult to distinguish between thinking about a particular Part in the abstract, and actually getting into contact with that Part in a way that causes it to shift.

I'm not sure why you say that the unconscious modules communicating with each other would necessarily contradict the idea of us being conscious of exactly the stuff that's in the workspace, but I tend to agree that considering the contents of our consciousness and the contents of the workspace to be strictly isomorphic seems to be too strong.

I may be simply misunderstanding something. My sense is that when you open the fridge to get a yogurt and your brain shouts "HOW DID CYPHER GET INTO THE MATRIX TO MEET SMITH WITHOUT SOMEONE TO HELP HIM PLUG IN?", this is a kind of thought that arises from checking meticulously over your epistemic state for logical inconsistencies, rather esoteric and complex logical inconsistencies, and it seems to come from nowhere. Doesn't this imply that some submodules of your brain are thinking abstractly and logically about The Matrix completely outside of your conscious awareness? If so, then this either implies that the subconscious processing of individual submodules can be very complex and abstract without needing to share information with other submodules, or that information sharing between submodules can occur without you being consciously aware of it.

A third possibility would be that you were actually consciously thinking about The Matrix in a kind of inattentive, distracted way, and it only seems like the thought came out of nowhere. This would be far from the most shocking example of the brain simply lying to you about its operations.

Comment by moridinamael on Book Summary: Consciousness and the Brain · 2019-01-17T20:21:58.436Z · score: 9 (4 votes) · LW · GW

The most obvious example of this kind of thing is the "flash of insight" that we all experience from time to time, where a complex, multi-part solution to a problem intrudes on our awareness as if from nowhere. This seems to be a clear case of the unconscious working on this problem in the background, identifying its solution as a valid one still in the background, and injecting the fully-formed idea into awareness with high salience.

It's a bit like the phenomenon of being able to pick out your own name from a babble of crowded conversation, except applied to the unconscious activity of the mind. This, however, implies that much complex inter-agent communication and abstract problem solving is happening subconsciously. And this seems to contradict the view that only very simple conceptual packages are passed through to the Global Workspace, and that we must necessarily be conscious of our own abstract problem solving.

My own perceptions during meditation (and during normal life) would suggest that the subconscious/unconscious is doing very complex and abstract "thinking" without my being aware of its workings, and intermittently injecting bits and pieces of its ruminations into awareness based on something like an expectation that the gestalt self might want to act on that information.

This seems contrary to the view that "what we are aware/conscious of" is isomorphic to "the Global Workspace". It seems that subconscious modules are chattering away amongst themselves almost constantly, using channels that are either inaccessible to consciousness or severely muted.

Comment by moridinamael on Sequence introduction: non-agent and multiagent models of mind · 2019-01-07T19:41:01.488Z · score: 10 (6 votes) · LW · GW

I really look forward to this Sequence.

Comment by moridinamael on What is a reasonable outside view for the fate of social movements? · 2019-01-04T16:13:08.243Z · score: 6 (4 votes) · LW · GW

I would suspect the failure of most social movements is overdetermined. Social movements by default are designed to change the status quo, and the status quo tends to be stable and intrinsically resistant to change. Social movements also often originate in ideology and may be aimed at achieving something practically impossible.

Another phrasing might be that most social movements fail because a sober analysis would have shown that there was never any realistic possibility for most social movements to succeed, even if they had more resources, smarter people and better planning.

Comment by moridinamael on How can I most effectively improve my writing abilities? · 2019-01-01T22:13:20.039Z · score: 11 (5 votes) · LW · GW

I've improved most dramatically at writing by getting very specific feedback from people who are clearly better than me. I consider myself lucky to have had a small handful of teachers and professors willing to put in the time to critique my writing on the sentence- and word-level.

Recently I had a work of fiction of mine minutely critiqued by a professional author and experienced a similar sense of "leveling up". For example, I've thought for years that I understood what "show don't tell" means. But my gracious editor in this case was able to point out multiple instances in my story where I was "telling" when I could be "showing". Once he pointed these out, I understood on a deeper level what to pay attention to.

One interesting thing about getting feedback on writing is that someone who is truly better than you can usually provide suggestions that you immediately recognize as correct. You may think your writing is fine, even great, but you'll recognize true improvements as being obviously correct and better, once pointed out. The process of becoming better at writing is the accumulation of such individual corrections.

Comment by moridinamael on The Bat and Ball Problem Revisited · 2018-12-13T17:06:51.234Z · score: 16 (10 votes) · LW · GW

My daughter is just starting to learn subtraction. She was very frustrated by it, and if I verbally asked "What's seven minus five?" she was about 50% likely to give the right answer. I asked her a sequence of simple subtraction problems and she consistently performed at about that level. In the course of our back and forth I switched my phrasing to the form "You have seven apples and you take away five, how many left?" and she immediately started answering the questions 100% correctly, very rapidly too. Experimentally I switched back to the prior form and she started getting them wrong again. It was apparent to me that simply phrasing the problem in terms of concrete objects was activating something like visualization, which made the problems easy, and phrasing it as abstract numbers was failing to activate this switch. So as you say, for more tricky arithmetic problems, it may be the case that which mental circuits are "activated automatically" determines the first answer you arrive at, and you can exploit that effect with edge cases like this.

Comment by moridinamael on Preschool: Much Less Than You Wanted To Know · 2018-11-20T21:19:05.956Z · score: 43 (16 votes) · LW · GW

It seems so obvious to me that the benefits of preschool would wear off after a short number of years that I feel like I must be missing something. How could it be otherwise, given the current system? This is all completely setting aside the developmental limitations of small children.

Let's take two kids, Jamie and Alex. Pretend that there are no developmental limitations on children's brains and that they can be taught to read equally well at ages 3, 4, 5, and 6.

Alex starts preschool at age 3 and they can read at a 1st grade level by the time they enter Kindergarten.

Jamie does not do any preschool and cannot read at all when they enter Kindergarten.

By the end of Kindergarten, Alex can read slightly better than 1st grade level, but not a lot better, since the curriculum hasn't been challenging. It's basically been a rehash of what they already can do. Jamie can read at the expected grade level by the end of Kindergarten.

By the end of first grade, no accommodations have been made for the fact that Alex is a slightly advanced reader. Both kids are given essentially the same pool of books to read. Alex has not skipped a grade or been put into some secret fast-track program for kids who went to preschool, because no such thing exists. So by the end of first grade, they can read about equally well. Maybe Alex reads slightly better, but since no real pressure is being put on this advantage that would cause it to compound rather than diminish, it naturally diminishes until both students are at the same level.

Acting as though anything else would happen doesn't make sense to me. It's not like each year a child spends in school exerts some kind of Education Force on their brain which accrues generalized scholastic ability. Kindergartners are taught kindergarten level math and reading skills; kids entering kindergarten who already possess these skills only benefit until the other kids catch up.

So IMO the problem isn't that preschool "doesn't do anything". The problem is that the system as it stands doesn't actually utilize the potential advantage of preschool. We are pretty far away from a system that would do so; such a system would need to be one that tailors the specific educational content to the specific child.

My four year old can read pretty well and can write well enough that you can puzzle out what he's trying to communicate. But there is no expectation that he's going to skip kindergarten because of this. So in what sense could this ever be a long-term academic advantage?

Comment by moridinamael on Stoicism: Cautionary Advice · 2018-11-16T16:49:58.960Z · score: 2 (1 votes) · LW · GW

"Real" stoicism seems to demand total relinquishment of all attachments, to almost exactly the same degree that "real" Buddhism does. I think this is a pathological thing to want.

Yes, it's psychologically beneficial to be less upset about being stuck in traffic. When you're already stuck in traffic and can't do anything about it, your choice to not be upset about it is simply a choice to avoid needless suffering.

One might argue that it's even better to let yourself be really annoyed by being stuck in traffic, and then permit your annoyance to motivate you to take actions to avoid being stuck in traffic in the future.

The sort of person who would legitimately not care if their child died would also be different from me in a number of other very important ways in order to be a reasonably consistent agent. For example, if a stoic claims to be emotionally indifferent between "child death" and "child flourishing", then what actually motivates them? Why do anything, why make any choice? At least Buddhist thought is honest about this, and admits that the only truly consistent solution is a purely monastic life of meditation and aggressive pursuit of non-existence. Stoicism, as far as I can tell, refuses to bite the bullet of the conclusions of its premises.

Comment by moridinamael on Stoicism: Cautionary Advice · 2018-11-15T16:49:31.032Z · score: 2 (1 votes) · LW · GW

Two things:

1) I think a lot of people think they're stoic when in actuality they've just never had anything bad happen to them. Modern life offers relatively few opportunities to test stoicism, and by default, everyone fails such tests without truly significant preparation.

2) Stoicism is actually a huge drag.

With regard to whatever objects give you delight, are useful, or are deeply loved, remember to tell yourself of what general nature they are, beginning from the most insignificant things. If, for example, you are fond of a specific ceramic cup, remind yourself that it is only ceramic cups in general of which you are fond. Then, if it breaks, you will not be disturbed. If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies.
- Epictetus, The Enchiridion

Who wants to live like this? I want to be disturbed if a loved one dies.

Comment by moridinamael on Help Me Refactor Myself I am Lost · 2018-11-09T18:08:15.513Z · score: 11 (5 votes) · LW · GW

Usually feeling unmotivated despite having goals which you consider intellectually important is a sign that your hominid brain is not getting important things that it wants, and/or doesn't think that it will get the important things that it wants if it pursues the target of your intellectual interest.

Some people's brains are different and they are capable of being motivated by pure intellectual pursuits. Most of us do not operate this way. We need to achieve ego-narratively-satisfying levels of safety, comfort, and belonging.

Almost everyone at your age struggles with meaning and a lack of a sense of belonging. In some sense this is a feature, not a bug, because it motivates the human animal to prioritize the acquisition of safety/comfort/belonging. Think about it this way - if you weren't miserable right now, you would never be motivated to do the uncomfortable things necessary to secure a stable life, much less a mate.

On the flip side, when you and/or your ego feel safe, secure, and socially relevant, motivation for abstract intellectual tasks will blossom on its own.

It can be awkward to admit the degree to which your human-animal self is uninterested in or actively sabotaging your intellectual pursuits. You still have to make the human-animal self happy if you want to free up the cognitive energy to do important things. This may be frustrating, but I suggest re-framing it as part of the challenge and fun of life.

tl;dr: Take care of yourself, treat yourself as somebody for whom you are responsible, and under these conditions, motivation and ambition grow on their own.

Comment by moridinamael on October gwern.net links · 2018-11-02T16:05:58.312Z · score: 4 (3 votes) · LW · GW

It's also worth pointing out that MMA has adapted to jiu-jitsu. Grappling arts are integrated into the MMA curriculum and no longer strictly dominate. Fighters with striking-heavy styles can win by learning how to defend against takedowns and then just playing to their own advantage. At the present moment, it seems that Western wrestling techniques are possibly more valuable in the "metagame" than mastery of jiu-jitsu, because a very talented wrestler can put almost any opponent on the ground from a standing position.

Comment by moridinamael on Things I Learned From Working With A Marketing Advisor · 2018-10-09T22:49:15.275Z · score: 8 (5 votes) · LW · GW

It seems like sarahconstantin and Adams are talking about two completely different things. Adams is talking about writing internal reports or memos for efficient transfer of information. sarahconstantin is talking about writing public-facing marketing materials. The incentives and aims of the two types of writing are completely different.

Similarly, "Business email" is not one thing. Writing an email to a client or prospective client, or writing an email to a coworker, or for that matter to a boss or a subordinate, will all have different requirements and look totally different.

Comment by moridinamael on Leto among the Machines · 2018-10-01T01:47:47.607Z · score: 5 (3 votes) · LW · GW

I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II's horrific repression was similarly tragically necessary.

I think the questions provoked by Herbert's concepts of Mentats and Bene Gesserit might actually be fruitful to think about.

If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math as a wizard. One can extend this line of thinking to many things - reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.

So it is probably safe to say that we "know" that a human can't be trained to do the things that Mentats do in Dune, but I don't think it's safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.

I guess I'm not really disagreeing with anything you said, but rather advocating that we take Herbert's ideas seriously but not literally.

Comment by moridinamael on Leto among the Machines · 2018-10-01T01:32:53.912Z · score: 3 (2 votes) · LW · GW

Thanks for the interview. This is great.

Comment by moridinamael on Leto among the Machines · 2018-09-30T22:35:05.061Z · score: 8 (4 votes) · LW · GW

This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.

I've been thinking a lot recently about rationalist advocacy and community. I don't think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.

One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate - a "rationalist credential", some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions that the community adopts - e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don't really know yet) but their cost in weirdness points can't be ignored.

As an aside - and I'm open to being corrected on this - I don't think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don't really make prescriptions. In fact, I think that Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a mentat or a Bene Gesserit, i.e. a member of a tribe focused on realizing human potential, but I don't think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don't think that Herbert thinks that feudalism is a good thing just because it's the system he presents. Maybe I'm wrong.

Comment by moridinamael on An Invitation to Measure Meditation · 2018-09-30T18:13:02.434Z · score: 3 (2 votes) · LW · GW

I've used TMI as a meditation guide off and on for some time. One thing you might consider tracking is "generalized motivation", or "energy level", or something like that. You might have to measure this subjectively, by rating how motivated you feel, or you could keep track of your ability to objectively get things done. I find that too much* meditation results in an undesirable degree of affective flattening and a reduction in motivation and energy level. For these reasons, I actually don't meditate currently.

*"Too much" may vary, but I think 20 minutes per day is a low enough level to avoid the negative side effects. Of course, at 20 minutes a day, you're also not going to achieve the desirable outcomes.

Comment by moridinamael on Memetic Tribes and Culture War 2.0 · 2018-09-20T18:54:49.306Z · score: 4 (2 votes) · LW · GW

I really like the idea of some kind of public, generalized Hippocratic Oath for online behavior. Needs an actual name, though, and needs somebody more visible than me to plant a Schelling stake in it.

Comment by moridinamael on How to use a microphone r̶a̶t̶i̶o̶n̶a̶l̶l̶y̶ during public speaking · 2018-09-14T21:35:47.641Z · score: 14 (4 votes) · LW · GW

It's also good to just ask whether your audience can hear you. Due to the Bystander Effect, audiences generally will not tell you even if you're completely unintelligible.

Pop-filters are great if you can get them. They help with the issue of plosives blowing up the mike.

Comment by moridinamael on Petrov corrigibility · 2018-09-12T22:34:00.938Z · score: 5 (3 votes) · LW · GW

I would consider it corrigible for the AI to tell Petrov about the problem. Not "I can't answer you" but "the texts I have on hand are inconclusive and unhelpful with respect to helping you solve your problem." This is, itself, informative.

If you're an expert in radar, and I ask you if you think something is a glitch or not, and you say you "can't answer", that doesn't tell me anything. I have no idea why you can't answer. If you tell me "it's inconclusive", that's informative. The information is that you can't really distinguish between a glitch and a real signal in this case. If I'm conservatively minded, then I'll increase my confidence that it's a glitch.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T22:25:11.991Z · score: 6 (4 votes) · LW · GW

I have a guess as to how this would actually evolve.

While the median Christian is not particularly Christian, there probably are a good number of pretty excellent Christians, whose motivation for being thus is their commitment to the ideals that they profess. So it's possible - even likely - that Christianity actually makes the world a little bit more "in the image of Christ" on the margin.

If you have a billion Christians, the number of "actually pretty good" Christians is likely to be pretty high.

Right now we probably have barely thousands of Rationalists who would identify as such. An organized attempt at increasing that number, with a formal aspiration to be better rationalists, would increase the number of "actually pretty good" rationalists, although the median rationalist might just be somebody who read 4% of the Sequences and went to two meetups. But that would still be a win.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T16:32:15.963Z · score: 2 (1 votes) · LW · GW

My point was merely that you can found a club around an aspiration rather than an accomplishment. It's better to have the accomplishment, of course, but not necessary.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T16:03:12.244Z · score: 2 (1 votes) · LW · GW

I think there is something like a Platonic "ultimate textbook of human rationality" that may be written in the future, but we don't actually know its contents. That's why the visitor can't give us the book. We have a dual problem: not only the challenge of spreading the ideas, but actually pinning down what the ideas are in the first place.

Actually, I think "pinning down" has entirely the wrong connotations, because human rationality seems more like a living and breathing process rather than a list of maxims chiseled in stone, and to a degree culturally dependent.

I will say that I don't think you need to answer #0 concretely before you set out. We can guess at the contents of the Platonic rationality textbook, and then iterate as we converge upon it.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T15:54:30.348Z · score: 2 (1 votes) · LW · GW

I wonder about this. Is the average Christian more "Christian" than the average non-Christian? (Do they do good works for strangers, love and forgive their enemies, and live lives of poverty and service, at rates significantly above the population average?) If not, does that really affect their ability to grow? Has it really affected their ability to grow, historically?

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T15:51:12.867Z · score: 8 (3 votes) · LW · GW

I think MIRI also employed a hybrid strategy. I will say, it seems much easier to deploy a "go big or go home" approach after you've already created a minimum viable organization, rather than attempting to poach thinkfluencers without even having that groundwork in place.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-10T21:17:15.345Z · score: 11 (5 votes) · LW · GW

Regarding both follow-up questions, I have two answers:

Answer 1: I don't intend for this to be a dodge, but I don't think it really matters what I think. I don't think it's practical to construct "the perfect organization" in our imagination and then anticipate that its perfection will be realized.

I think what a rationality organization looks like in practice is a small group of essentially like-minded people creating a Schelling point by forming the initial structure, and then the organization evolves from there in ways that are not necessarily predictable, in ways that reflect the will of the people who have the energy to actually put into the thing.

What's interesting is that when I say it that way, I realize that it sounds like a recipe for disaster. But also note that essentially no other organization on Earth has been formed in any other way.

Answer 2: I personally would create separate organizational branches for epistemic and instrumental focus, such that both could use the resources of the other, but neither would be constrained by the rules of the other. Either branch could use whatever policies are most suited to themselves. Think two houses of a congress. Either of the branches could propose policies to govern the whole organization, which could be accepted or vetoed by the other branch. There's probably also a role for something like an elected executive branch, but at this point I am grasping well beyond my domain of expertise.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-10T20:38:10.373Z · score: 18 (8 votes) · LW · GW

I felt like the OP was already quite long enough, and don't have time now to write the full followup post that this question deserves, but in brief, the thrust would be that any rationalist organization deserving of the name would carefully choose its norms, structure and bylaws to reflect those of the most successful existing organizations (empiricism!), with care taken to exclude the aspects of those organizations that are inimical to group or individual rationality. Thus, even if stoning apostates has proven to be an empirically useful organizational strategy from the perspective of growth, it's probably not something we want to emulate.

I'm not sure if we can actually offer an unfalsifiable signal that we are on the "true path". I'm not sure if we even necessarily need or want to do that. In order to justify the existence of the "Don't Shoot Yourself in the Foot Club", you just need to demonstrate that not shooting yourself in the foot is better than the alternative, and I think we can do at least that, metaphorically.

Also, I actually suspect that any formal structure at all would probably be, on net, more of a good thing than a bad thing, in terms of growing the movement.

Comment by moridinamael on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T00:07:28.586Z · score: 23 (12 votes) · LW · GW

Noun. pseudointellectual (plural pseudointellectuals) A person who claims proficiency in scholarly or artistic activities while lacking in-depth knowledge or critical understanding. A person who pretends to be of greater intelligence than he or she in fact is.

I don't think S.A. claims any proficiency or scholarly credentials that he doesn't have. He doesn't review books claiming to be some expert in reviewing books, and doesn't write essays claiming to be setting down eternal truths. Rather, he is openly exploratory and epistemically careful.

I certainly don't think he pretends to be smarter than he is. But of course, the use of this word in the original claim is probably an empty slur, meant to convey sentiment rather than content. I certainly hope the "pseudointellectual" part of the claim isn't important to the argument, since I think even Alexander's detractors would admit it is inaccurate.

Thus, one question in short form: "Given that a pseudointellectual is defined as one who claims proficiency while lacking in-depth knowledge and/or a person who pretends to greater intelligence than he possesses, do you actually believe Scott Alexander qualifies as a pseudointellectual? If so, could you elaborate on where specifically he has exaggerated his own proficiency, knowledge, or intelligence? If not, what did you actually mean by pseudointellectual?"

It's one thing to accuse somebody of being systematically wrong, another thing to accuse them of being systematically deceptive. I don't think my focus on this word choice can be trivially dismissed.

Also, it seems likely that if one of the roughly nine words in the quoted thesis was chosen carelessly, the underlying thought process will be likewise flimsy.

Comment by moridinamael on Theories of Pain · 2018-08-29T21:09:52.314Z · score: 2 (1 votes) · LW · GW

For some reason I'm having trouble finding a non-confrontational phrasing for this, but: Can I ask why you're asking, first? Collating everything I've tried over the last two decades would take a large amount of work, I would probably miss many things, and besides, almost everything I tried was completely pointless. For example, I could go into detail about past chiropractic treatments, but why bother, since I only attempted that out of desperation, and in the end, it had no effect? This post was my attempt at outlining the few things that did seem to matter, prior to the new drug. (Actually, the standard botox treatment for migraines also helped, in a relative sense, but that wasn't something that I would expect to generalize to most people.)

Comment by moridinamael on Theories of Pain · 2018-08-28T21:14:11.405Z · score: 6 (3 votes) · LW · GW

It's cool to see this, I'm glad you got something out of my speculations.

I'm still pretty baffled about chronic pain. After ten or fifteen years of increasingly bad chronic migraine and neck pain, and having tried every treatment in the book, I recently started on the newly FDA-approved drug, which has very nearly cured the entire issue. The drug itself is a peptide which blocks a particular neurotransmitter receptor.

While I'm mostly happy about this beyond expression, I'm also retroactively frustrated by the fact that this "cure" is simply not something that one could ever approximate without the drug, and it doesn't really tell me anything about what is wrong with me that makes me prone to these issues in the first place.

Comment by moridinamael on Preliminary thoughts on moral weight · 2018-08-14T15:10:19.215Z · score: 8 (5 votes) · LW · GW

And if we are willing to ascribe moral weight to fruit flies, there must also be some corresponding non-zero moral weight to early-term human fetuses.