Comment by moridinamael on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-17T14:30:38.283Z · score: 5 (2 votes) · LW · GW

The part about wasting energy seems quite silly. The universe has a fixed amount of mass-energy, so presumably when he talks about wasting energy, what he means is taking advantage of energy gradients. Energy gradients will always and everywhere eventually wind down toward entropy on their own without help, so life isn't even doing anything novel here. It's not like the sun stops radiating out energy if life isn't there to absorb photons.

The observation that life takes advantage of concentrated pockets of energy, and the leap from that observation to calling this the "purpose" of life, is just sophistry. It deserves to be taken about as seriously as George Carlin's joke that humans were created because Mother Nature wanted plastic and didn't know how to make it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-24T03:41:47.541Z · score: 2 (1 votes) · LW · GW

To point one, if I feel an excitement and eagerness about the thing, and if I expect I would feel sad if the thing were suddenly taken away, then I can be pretty sure that it’s important to me. But — and this relates to point two — it’s hard to care about the same thing for weeks or months or years at a time with the same intensity. Some projects of mine have oscillated between providing deep meaning and being a major drag, depending on contingent factors. This might manifest as a sense of ugh arising around certain facets of the activity. Usually the ugh goes away eventually. Sometimes it doesn’t, and you either accept that the unpleasantness is part and parcel of the fun, or you decide it’s not worth it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-23T13:35:19.907Z · score: 20 (9 votes) · LW · GW

As far as I can tell, meaning is a feeling, something like a passive sense that you’re on the right track. The feeling is generated when you are working on something that you personally enjoy and care about, and when you are socializing sufficiently often with people you enjoy and care about. “Friends and hobbies are the meaning of life” is how I might phrase it.

Note that the activity that you spend your time on could be collecting all the stars in Mario64, as long as you actually care about completing the task. However, you tend to find it harder to care about things that don’t involve winning status or helping people, especially as you get older.

I think some people get themselves into psychological trouble by deciding that all of the things that they enjoy aren’t “important” and interacting with people they care about is a “distraction”. They paint themselves into a corner where the only thing they allow themselves to consider doing is something for which they feel no emotional attraction. They feel like they should enjoy it because they’ve decided it’s important, but they don’t, and then they feel guilty about that. The solution to this is to recognize the kind of animal you are and try to feed the needs that you have rather than the ones you wish you had.

Comment by moridinamael on The Relationship Between the Village and the Mission · 2019-05-14T17:03:43.487Z · score: 2 (1 votes) · LW · GW

I'm interested as well. As someone trying to grow the Denver rationality community, I want to be aware of failure modes.

Complex Behavior from Simple (Sub)Agents

2019-05-10T21:44:04.895Z · score: 91 (27 votes)
Comment by moridinamael on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-23T15:11:33.848Z · score: 4 (1 votes) · LW · GW
The idea of AI alignment is based on the idea that there is a finite, stable set of data about a person, which could be used to predict one’s choices, and which is actually morally good. The reasoning behind this basis is because if it is not true, then learning is impossible, useless, or will not converge.

Is it true that these assumptions are required for AI alignment?

I don't think it would be impossible to build an AI that is sufficiently aligned to know that, at pretty much any given moment, I don't want to be spontaneously injured, or be accused of doing something that will reliably cause all my peers to hate me, or for a loved one to die. There's quite a broad list of "easy" specific "alignment questions" that virtually 100% of humans will agree on in virtually 100% of circumstances. We could do worse than building a partially-aligned AI that just makes sure we avoid fates worse than death, individually and collectively.

On the other hand, I agree completely that coupling the concepts of "AI alignment" and "optimization" seems pretty fraught. I've wondered if the "optimal" environment for the human animal might be a re-creation of the Pleistocene, except with, y'know, immortality, and carefully managed, exciting-but-not-harrowing levels of resource scarcity.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T17:34:02.026Z · score: 2 (1 votes) · LW · GW

You may already know this, but almost all YouTube videos have an automatically generated transcript. Click the "..." at the bottom right of the video panel and select "Open transcript" from the pulldown. YouTube's automatic speech transcription is very good.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T14:29:57.598Z · score: 2 (1 votes) · LW · GW

This exceeded my expectations. You kept it short and to the point, and the description of the technique was very clear. I look forward to more episodes.

Comment by moridinamael on Subagents, akrasia, and coherence in humans · 2019-03-26T16:40:51.496Z · score: 5 (2 votes) · LW · GW

Have you - or anyone, really - put much thought into the implications of these ideas for AI alignment?

If it's true that modeling humans at the level of constitutive subagents renders a more accurate description of human behavior, then any true solution to the alignment problem will need to respect this internal incoherence in humans.

This is potentially a very positive development, I think, because it suggests that a human can be modeled as a collection of relatively simple subagent utility functions, which interact and compete in complex but predictable ways. This sounds closer to a gears-level portrayal of what is happening inside a human, in contrast to descriptions of humans as having a single convoluted and impossible-to-pin-down utility function.

I don't know if you're at all familiar with Mark Lippman's Folding material and his ontology for mental phenomenology. My attempt to summarize his framework of mental phenomena is as follows: there are belief-like objects (expectations, tacit or explicit, complex or simple), goal-like objects (desirable states or settings or contexts), affordances (context-activated representations of the current potential action space) and intention-like objects (plans coordinating immediate felt intentions, via affordances, toward goal-states). All cognition is "generated" by the actions and interactions of these fundamental units, which I infer must be something like neurologically fundamental. Fish and maybe even worms probably have something like beliefs, goals, affordances and intentions. Ours are just bigger, more layered, more nested and more interconnected.

The reason I bring this up is that Folding was a bit of a kick in the head to my view on subagents. Instead of seeing subagents as fundamental, I now see them as expressions of latent goal-like and belief-like objects, with the brain implementing some kind of passive program that pursues goals and avoids expectations of suffering, even if you're not aware you have these goals or these expectations. In other words, the sense of there being a subagent is your brain running a background program that activates and acts upon the implications of these more fundamental yet hidden goals/beliefs.
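To make that framing concrete, here's a toy sketch in Python. The names and mechanics are my own invention for illustration, not anything from Lippman's material:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    situation: str
    expectation: str  # tacit or explicit, e.g. "I will suffer humiliation"

@dataclass
class Goal:
    desired_state: str
    priority: float

def protector_process(beliefs, goals, current_situation):
    """Passive background program: scan latent beliefs and emit avoidant
    behavior when one predicts suffering. Its output is what, from the
    inside, feels like a "Protector Subagent" acting."""
    for belief in beliefs:
        if belief.situation == current_situation and "suffer" in belief.expectation:
            return f"avoid {current_situation} (protecting against: {belief.expectation})"
    # No protective override fired: pursue the highest-priority goal.
    top_goal = max(goals, key=lambda g: g.priority)
    return f"pursue {top_goal.desired_state}"

beliefs = [Belief("public speaking", "I will suffer humiliation")]
goals = [Goal("career advancement", 0.9), Goal("comfort", 0.4)]
print(protector_process(beliefs, goals, "public speaking"))
# -> avoid public speaking (protecting against: I will suffer humiliation)
```

The point of the sketch is that nothing in it is agent-shaped; the "subagent" is just what the background process looks like when a hidden belief fires.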

None of this is at all in contradiction to anything in your Sequence. It's more like a slightly different framing, where a "Protector Subagent" is reduced to an expression of a belief-like object via a self-protective background process. It all adds up to the same thing, pretty much, but it might be more gears-level. Or maybe not.

Comment by moridinamael on Subagents, introspective awareness, and blending · 2019-03-04T22:05:53.372Z · score: 5 (2 votes) · LW · GW

Could you elaborate on how you're using the word "symmetrical" here?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-29T19:39:34.215Z · score: 2 (1 votes) · LW · GW

The best I can do after thinking about it for a bit is compute every possible combination of units under 200 supply, multiply that by the possible positions of those units in space, multiply that by the possible combinations of buildings on the map and their potential locations in space, multiply that by the possible combinations of upgrades, multiply that by the amount of resources in all available mineral/vespene sources ... I can already spot a few oversimplifications in what I just wrote, and I can think of even more things that need to be accounted for. The shields/hitpoints/energy of every unit. Combinatorially gigantic.

Just the number of potential positions of a single unit on the map is already huge.
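For flavor, here's that calculation sketched in Python. Every number below is a rough guess I'm making up for illustration, not exact game data, and the result is still a drastic undercount:

```python
import math

# All numbers are rough illustrative guesses, and this still ignores
# buildings, upgrades, resources, fog of war, etc.
map_positions = 150 * 150   # walkable grid cells on a typical map
unit_types = 50             # distinct unit types, roughly
max_units = 200             # supply cap proxy
hp_levels = 100             # discretized hitpoint/shield/energy states

# Ways to place 200 distinct units among ~22,500 cells: C(22500, 200).
log10_positions = (math.lgamma(map_positions + 1)
                   - math.lgamma(max_units + 1)
                   - math.lgamma(map_positions - max_units + 1)) / math.log(10)

# Crudely fold in a type and an HP state for each unit.
log10_total = log10_positions + max_units * math.log10(unit_types * hp_levels)

print(f"unit placements alone: ~10^{log10_positions:.0f}")
print(f"with types and HP:     ~10^{log10_total:.0f}")
```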

But AlphaStar doesn't really explore much of this space. It finds out pretty quickly that there's really no reason to explore the parts of the space that include placing random buildings in weird map locations. It explores and optimizes around the parts of the state space that look reasonably close to human play, because that was its starting point, and it's not going to find superior strategies randomly, not without a lot of optimization in isolation.

That's one thing I would love to see, actually. A version of the code trained purely on self-play, without a basis in human replays. Does it ever discover proxy plays or other esoteric cheese without a starting point provided in the human replays?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T16:12:47.831Z · score: 38 (18 votes) · LW · GW

Before now, it wasn't immediately obvious that SC2 is a game that can be played superhumanly well without anything that looks like long-term planning or counterfactual reasoning. The way humans play it relies on a combination of past experience, narrow skills, and "what-if" mental simulation of the opponent. Building a superhuman SC2 agent out of nothing more than LSTM units indicates that you can completely do away with planning, even when the action space is very large, even when the state space is VERY large, even when the possibilities are combinatorially enormous.

Yes, humans can get good at SC2 with much less than 200 years of time played (although those humans are usually studying the replays of other masters to bootstrap their understanding), but I think it's worthwhile to focus on the inverse of this observation: that a sophisticated problem domain which looks like it ought to require planning and model-based counterfactual reasoning actually requires no such thing. What other problem domains seem like they ought to require planning and counterfactual reasoning, but can probably be conquered with nothing more advanced than a deep LSTM network?

(I haven't seen anyone bother to compute an estimate of the size of the state-space of SC2 relative to, for example, Go or Chess, and I'm not sure if there's even a coherent way to go about it.)
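The crudest version of such an estimate just counts raw configurations and ignores legality and reachability entirely. Something like this, where the SC2 numbers are my own rough guesses and the board-game bounds are the standard naive ones:

```python
import math

# Naive enumeration upper bounds, ignoring legality and reachability.
# Chess: 64 squares, each empty or one of 12 piece types.
chess_log10 = 64 * math.log10(13)
# Go: 361 points, each empty, black, or white.
go_log10 = 361 * math.log10(3)
# SC2: ~200 units, each at any of ~150x150 map cells (positions only; my guess).
sc2_log10 = 200 * math.log10(150 * 150)

print(f"chess ~ 10^{chess_log10:.0f}")   # ~10^71 (legal positions are roughly 10^44)
print(f"go    ~ 10^{go_log10:.0f}")      # ~10^172 (legal positions are ~10^170)
print(f"sc2   ~ 10^{sc2_log10:.0f}")     # unit positions alone dwarf both board games
```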

Comment by moridinamael on Book Summary: Consciousness and the Brain · 2019-01-18T18:26:36.726Z · score: 9 (4 votes) · LW · GW

The freedom to speculate wildly is what makes this topic so fun.

My mental model would say, you have a particular pattern recognition module that classifies objects as "chair", along with a weight of how well the current instance matches the category. An object can be a prototypical perfect Platonic chair, or an almost-chair, or maybe a chair if you flip it over, or not a chair at all.

When you look at a chair, this pattern recognition module immediately classifies it, and then brings online another module, which makes available all the relevant physical affordances, linguistic and logical implications of a chair being present in your environment. Recognizing something as a chair feels identical to recognizing something as a thing-in-which-I-can-sit. Similarly, you don't have to puzzle out the implications of a tiger walking into the room right now. The fear response will coincide with the recognition of the tiger.

When you try to introspect on chairness, what you're doing is tossing imagined sense percepts at yourself and observing the responses of the chairness-detecting module. This allows you to generate an abstract representation of your own chairness classifier. But this abstract representation is absolutely not the same thing as the chairness classifier, any more than your abstract cogitation about what the "+" operator does is the same thing as the mental operation of adding two numbers together.
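A toy illustration of the distinction, with everything invented for the example:

```python
def chairness_classifier(features):
    """Opaque pattern-recognition module: returns a match weight in [0, 1].
    (The feature weights are made up for the example.)"""
    score = 0.0
    score += 0.4 if features.get("flat_surface") else 0.0
    score += 0.3 if features.get("legs", 0) >= 3 else 0.0
    score += 0.3 if features.get("back_support") else 0.0
    return score

# "Introspection": toss imagined percepts at the module and observe responses.
probes = [
    {"flat_surface": True, "legs": 4, "back_support": True},    # Platonic chair
    {"flat_surface": True, "legs": 4, "back_support": False},   # stool: almost-chair
    {"flat_surface": False, "legs": 0, "back_support": False},  # not a chair
]
for percept in probes:
    print(percept, "->", chairness_classifier(percept))

# The table of probe/response pairs is the abstract representation; the
# function object itself is the classifier. They are not the same thing.
```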

I think a lot of confusion about the nature of human thinking stems from the inability to internally distinguish between the abstracted symbol for a mental phenomenon and the mental phenomenon itself. This dovetails with IFS in an interesting way, in that it can be difficult to distinguish between thinking about a particular Part in the abstract, and actually getting into contact with that Part in a way that causes it to shift.

I'm not sure why you say that the unconscious modules communicating with each other would necessarily contradict the idea of us being conscious of exactly the stuff that's in the workspace, but I tend to agree that considering the contents of our consciousness and the contents of the workspace to be strictly isomorphic seems to be too strong.

I may be simply misunderstanding something. My sense is that when you open the fridge to get a yogurt and your brain shouts "HOW DID CYPHER GET INTO THE MATRIX TO MEET SMITH WITHOUT SOMEONE TO HELP HIM PLUG IN?", this is a kind of thought that arises from meticulously checking your epistemic state for rather esoteric and complex logical inconsistencies, and it seems to come from nowhere. Doesn't this imply that some submodules of your brain are thinking abstractly and logically about The Matrix completely outside of your conscious awareness? If so, then this implies either that the subconscious processing of individual submodules can be very complex and abstract without needing to share information with other submodules, or that information sharing between submodules can occur without you being consciously aware of it.

A third possibility would be that you were actually consciously thinking about The Matrix in a kind of inattentive, distracted way, and it only seems like the thought came out of nowhere. This would be far from the most shocking example of the brain simply lying to you about its operations.

Comment by moridinamael on Book Summary: Consciousness and the Brain · 2019-01-17T20:21:58.436Z · score: 9 (4 votes) · LW · GW

The most obvious example of this kind of thing is the "flash of insight" that we all experience from time to time, where a complex, multi-part solution to a problem intrudes on our awareness as if from nowhere. This seems to be a clear case of the unconscious working on this problem in the background, identifying its solution as a valid one still in the background, and injecting the fully-formed idea into awareness with high salience.

It's a bit like the phenomenon of being able to pick out your own name from a babble of crowded conversation, except applied to the unconscious activity of the mind. This, however, implies that much complex inter-agent communication and abstract problem solving is happening subconsciously. And this seems to contradict the view that only very simple conceptual packages are passed through to the Global Workspace, and that we must necessarily be conscious of our own abstract problem solving.

My own perceptions during meditation (and during normal life) would suggest that the subconscious/unconscious is doing very complex and abstract "thinking" without my being aware of its workings, and intermittently injecting bits and pieces of its ruminations into awareness based on something like an expectation that the gestalt self might want to act on that information.

This seems contrary to the view that "what we are aware/conscious of" is isomorphic to "the Global Workspace". It seems that subconscious modules are chattering away amongst themselves almost constantly, using channels that are either inaccessible to consciousness or severely muted.

Comment by moridinamael on Sequence introduction: non-agent and multiagent models of mind · 2019-01-07T19:41:01.488Z · score: 10 (6 votes) · LW · GW

I really look forward to this Sequence.

Comment by moridinamael on What is a reasonable outside view for the fate of social movements? · 2019-01-04T16:13:08.243Z · score: 6 (4 votes) · LW · GW

I would suspect the failure of most social movements is overdetermined. Social movements by default are designed to change the status quo, and the status quo tends to be stable and intrinsically resistant to change. Social movements are often ideologically originated and may be aimed at achieving something practically impossible.

Another phrasing might be that most social movements fail because a sober analysis would have shown that there was never any realistic possibility for most social movements to succeed, even if they had more resources, smarter people and better planning.

Comment by moridinamael on How can I most effectively improve my writing abilities? · 2019-01-01T22:13:20.039Z · score: 11 (5 votes) · LW · GW

I've improved most dramatically at writing by getting very specific feedback from people who are clearly better than me. I consider myself lucky to have had a small handful of teachers and professors willing to put in the time to critique my writing on the sentence- and word-level.

Recently I had a work of fiction of mine minutely critiqued by a professional author and experienced a similar sense of "leveling up". For example, I've thought for years that I understood what "show don't tell" means. But my gracious editor in this case was able to point out multiple instances in my story where I was "telling" when I could be "showing". Once he pointed these out, I understood on a deeper level what to pay attention to.

One interesting thing about getting feedback on writing is that someone who is truly better than you can usually provide suggestions that you immediately recognize as correct. You may think your writing is fine, even great, but you'll recognize true improvements as being obviously correct and better, once pointed out. The process of becoming better at writing is the accumulation of such individual corrections.

Comment by moridinamael on The Bat and Ball Problem Revisited · 2018-12-13T17:06:51.234Z · score: 15 (9 votes) · LW · GW

My daughter is just starting to learn subtraction. She was very frustrated by it, and if I verbally asked "What's seven minus five?" she was about 50% likely to give the right answer. I asked her a sequence of simple subtraction problems and she consistently performed at about that level. In the course of our back and forth I switched my phrasing to the form "You have seven apples and you take away five, how many left?" and she immediately started answering the questions 100% correctly, very rapidly too. Experimentally I switched back to the prior form and she started getting them wrong again.

It was apparent to me that simply phrasing the problem in terms of concrete objects was activating something like visualization, which made the problems easy, while phrasing it as abstract numbers was failing to activate this switch. So, as you say, for more tricky arithmetic problems, it may be the case that which mental circuits are "activated automatically" determines the first answer you arrive at, and you can exploit that effect with edge cases like this.

Comment by moridinamael on Preschool: Much Less Than You Wanted To Know · 2018-11-20T21:19:05.956Z · score: 40 (14 votes) · LW · GW

It seems so obvious to me that the benefits of preschool would wear off after a short number of years that I feel like I must be missing something. How could it be otherwise, given the current system? This is all completely setting aside the developmental limitations of small children.

Let's take two kids, Jamie and Alex. Pretend that there are no developmental limitations on children's brains and that they can be taught to read equally well at ages 3, 4, 5, and 6.

Alex starts preschool at age 3 and they can read at a 1st grade level by the time they enter Kindergarten.

Jamie does not do any preschool and cannot read at all when they enter Kindergarten.

By the end of Kindergarten, Alex can read slightly better than 1st grade level, but not a lot better, since the curriculum hasn't been challenging. It's basically been a rehash of what they already can do. Jamie can read at the expected grade level by the end of Kindergarten.

By the end of first grade, no accommodations have been made for the fact that Alex is a slightly advanced reader. Both kids are given essentially the same pool of books to read. Alex has not skipped a grade or been put in some secret fast-track program for kids who went to preschool, because no such thing exists. So by the end of first grade, they can read about equally well. Maybe Alex reads slightly better, but since no real pressure is being put on this advantage that would cause it to compound rather than diminish, it naturally diminishes until both students are at the same level.

Acting as though anything else would happen doesn't make sense to me. It's not like each year a child spends in school exerts some kind of Education Force on their brain which accrues generalized scholastic ability. Kindergartners are taught kindergarten level math and reading skills; kids entering kindergarten who already possess these skills only benefit until the other kids catch up.
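Here's a toy model of that argument, with all the parameters invented: instruction pulls everyone toward the curriculum target and adds almost nothing beyond it, so an untapped head start decays instead of compounding.

```python
# Reading levels in grade-equivalents; all parameters are invented.
def school_year(level, curriculum_target, catch_up=0.9, drift=0.1):
    """One year of instruction: strong pull up to the curriculum target,
    only weak 'drift' growth beyond it."""
    if level < curriculum_target:
        level += catch_up * (curriculum_target - level)
    return level + drift

alex, jamie = 1.0, 0.0   # reading levels entering kindergarten
for grade, target in [("K", 1.0), ("1st", 2.0), ("2nd", 3.0), ("3rd", 4.0)]:
    alex, jamie = school_year(alex, target), school_year(jamie, target)
    print(f"end of {grade}: Alex {alex:.2f}, Jamie {jamie:.2f}")

# The gap shrinks every year. With no mechanism that feeds on the head
# start, it never compounds.
```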

So IMO the problem isn't that preschool "doesn't do anything". The problem is that the system as it stands doesn't actually utilize the potential advantage of preschool. We are pretty far away from a system that would do so; such a system would need to be one that tailors the specific educational content to the specific child.

My four year old can read pretty well and can write well enough that you can puzzle out what he's trying to communicate. But there is no expectation that he's going to skip kindergarten because of this. So in what sense could this ever be a long-term academic advantage?

Comment by moridinamael on Stoicism: Cautionary Advice · 2018-11-16T16:49:58.960Z · score: 2 (1 votes) · LW · GW

"Real" stoicism seems to demand total relinquishment of all attachments, to almost exactly the same degree that "real" Buddhism does. I think this is a pathological thing to want.

Yes, it's psychologically beneficial to be less upset about being stuck in traffic. When you're already stuck in traffic and can't do anything about it, your choice to not be upset about it is simply a choice to avoid needless suffering.

One might argue that it's even better to let yourself be really annoyed by being stuck in traffic, and then permit your annoyance to motivate you to take actions to avoid being stuck in traffic in the future.

The sort of person who would legitimately not care if their child died would also be different from me in a number of other very important ways in order to be a reasonably consistent agent. For example, if a stoic claims to be emotionally indifferent between "child death" and "child flourishing", then what actually motivates them? Why do anything, why make any choice? At least Buddhist thought is honest about this, and admits that the only truly consistent solution is a purely monastic life of meditation and aggressive pursuit of non-existence. Stoicism, as far as I can tell, refuses to bite the bullet of the conclusions of its premises.

Comment by moridinamael on Stoicism: Cautionary Advice · 2018-11-15T16:49:31.032Z · score: 2 (1 votes) · LW · GW

Two things:

1) I think a lot of people think they're stoic when in actuality they've just never had anything bad happen to them. Modern life offers relatively few opportunities to test stoicism, and by default, everyone fails such tests without truly significant preparation.

2) Stoicism is actually a huge drag.

With regard to whatever objects give you delight, are useful, or are deeply loved, remember to tell yourself of what general nature they are, beginning from the most insignificant things. If, for example, you are fond of a specific ceramic cup, remind yourself that it is only ceramic cups in general of which you are fond. Then, if it breaks, you will not be disturbed. If you kiss your child, or your wife, say that you only kiss things which are human, and thus you will not be disturbed if either of them dies.
- Epictetus, The Enchiridion

Who wants to live like this? I want to be disturbed if a loved one dies.

Comment by moridinamael on Help Me Refactor Myself I am Lost · 2018-11-09T18:08:15.513Z · score: 11 (5 votes) · LW · GW

Usually feeling unmotivated despite having goals which you consider intellectually important is a sign that your hominid brain is not getting important things that it wants, and/or doesn't think that it will get the important things that it wants if it pursues the target of your intellectual interest.

Some people's brains are different and they are capable of being motivated by pure intellectual pursuits. Most of us do not operate this way. We need to achieve ego-narratively-satisfying levels of safety, comfort, and belonging.

Almost everyone at your age struggles with meaning and a lack of a sense of belonging. In some sense this is a feature, not a bug, because it motivates the human animal to prioritize the acquisition of safety/comfort/belonging. Think about it this way - if you weren't miserable right now, you would never be motivated to do the uncomfortable things necessary to secure a stable life, much less a mate.

On the flip side, when you and/or your ego feel safe, secure, and socially relevant, motivation for abstract intellectual tasks will blossom on its own.

It can be awkward to admit the degree to which your human-animal self is uninterested in or actively sabotaging your intellectual pursuits. You still have to make the human-animal self happy if you want to free up the cognitive energy to do important things. This may be frustrating, but I suggest reframing it as part of the challenge and fun of life.

tl;dr: Take care of yourself, treat yourself as somebody for whom you are responsible, and under these conditions, motivation and ambition grow on their own.

Comment by moridinamael on October links · 2018-11-02T16:05:58.312Z · score: 4 (3 votes) · LW · GW

It's also worth pointing out that MMA has adapted to jiu-jitsu. Grappling arts are integrated into the MMA curriculum and no longer strictly dominate. Fighters with striking-heavy styles can win by learning how to defend against takedowns and then just playing to their own advantage. At the present moment, it seems that Western wrestling techniques are possibly more valuable in the "metagame" than mastery of jiu-jitsu, because a very talented wrestler can put almost any opponent on the ground from a standing position.

Comment by moridinamael on Things I Learned From Working With A Marketing Advisor · 2018-10-09T22:49:15.275Z · score: 8 (5 votes) · LW · GW

It seems like sarahconstantin and Adams are talking about two completely different things. Adams is talking about writing internal reports or memos for efficient transfer of information. sarahconstantin is talking about writing public-facing marketing materials. The incentives and aims of the two types of writing are completely different.

Similarly, "Business email" is not one thing. Writing an email to a client or prospective client, or writing an email to a coworker, or for that matter to a boss or a subordinate, will all have different requirements and look totally different.

Comment by moridinamael on Leto among the Machines · 2018-10-01T01:47:47.607Z · score: 5 (3 votes) · LW · GW

I agree that Herbert thought the breeding program was necessary. But I also think he couched it as tragically necessary. Leto II's horrific repression was similarly tragically necessary.

I think the questions provoked by Herbert's concepts of Mentats and Bene Gesserit might actually be fruitful to think about.

If there were no meditation traditions on Earth, then we would have no reason to suspect that jhanas, or any other advanced states of meditative achievement, exist. If there were no musical instruments, we would have no reason to suspect that a human could use fingers or breath to manipulate strings or harmonics to create intricate, polyphonic, improvised melodies. If there were no arithmetic, we would view a person who could do rudimentary mental math as a wizard. One can extend this line of thinking to many things - reading and writing, deep strategy games like chess, high-level physical sports, and perhaps even specific fields of knowledge.

So it is probably safe to say that we "know" that a human can't be trained to do the things that Mentats do in Dune, but I don't think it's safe to say that we have any idea what humans could be trained to do with unpredictable avenues of development and 20,000 years of cultural evolution.

I guess I'm not really disagreeing with anything you said, but rather advocating that we take Herbert's ideas seriously but not literally.

Comment by moridinamael on Leto among the Machines · 2018-10-01T01:32:53.912Z · score: 3 (2 votes) · LW · GW

Thanks for the interview. This is great.

Comment by moridinamael on Leto among the Machines · 2018-09-30T22:35:05.061Z · score: 8 (4 votes) · LW · GW

This is very cool to see. I just finished re-reading Dune. I wonder what signal prompted me to do that, and I wonder if it was the same signal that prompted you to write this.

I've been thinking a lot recently about rationalist advocacy and community. I don't think that individuals unilaterally deciding to stop automating things is going to make a dent in the problem. This is a straightforward coordination problem. If you drop out of modern society, for whatever reason, society fills in the hole you left. The only way to challenge Moloch is to create an alternative social framework that actually works better, at least in some regards.

One thing that keeps cropping up in my thoughts/discussions about rationalist community is that the value-add of the community needs to be very clear and concrete. The metaphor or analogue of professional licensure might be appropriate - a "rationalist credential", some kind of impossible-to-fake, difficult-to-earn token of mastery that denotes high skill level and knowledge, that then becomes symbolically associated with the movement. I mention this idea because the value-add of being a credentialed rationalist would then have to be weighed against whatever weird social restrictions that the community adopts - e.g., your suggestion of avoiding automation, or instituting some kind of fealty system. These ideas may be empirically, demonstrably good ideas (we don't really know yet) but their cost in weirdness points can't be ignored.

As an aside - and I'm open to being corrected on this - I don't think Herbert was actually advocating for a lot of the ideas he portrays. Dune and Frank Herbert explore a lot of ideas but don't really make prescriptions. In fact, I think that Herbert is putting forth his universe as an example of undesirable stagnation, not some kind of demonstrated perfection. It would be cool to be a mentat or a Bene Gesserit, i.e. a member of a tribe focused on realizing human potential, but I don't think he was saying with his books that the multi-millennial ideologically motivated political stranglehold of the Bene Gesserit was a good thing. I don't think that Herbert thinks that feudalism is a good thing just because it's the system he presents. Maybe I'm wrong.

Comment by moridinamael on An Invitation to Measure Meditation · 2018-09-30T18:13:02.434Z · score: 3 (2 votes) · LW · GW

I've used TMI as a meditation guide off and on for some time. One thing you might consider tracking is "generalized motivation", or "energy level", or something like that. You might have to measure this subjectively, by rating how motivated you feel, or you could keep track of your ability to objectively get things done. I find that too much* meditation results in an undesirable degree of affective flattening and a reduction in motivation and energy level. For these reasons, I actually don't meditate currently.

*"Too much" may vary, but I think 20 minutes per day is a low enough level to avoid the negative side effects. Of course, at 20 minutes a day, you're also not going to achieve the desirable outcomes.

Comment by moridinamael on Memetic Tribes and Culture War 2.0 · 2018-09-20T18:54:49.306Z · score: 4 (2 votes) · LW · GW

I really like the idea of some kind of public, generalized Hippocratic Oath for online behavior. Needs an actual name, though, and needs somebody more visible than me to plant a Schelling stake in it.

Comment by moridinamael on How to use a microphone r̶a̶t̶i̶o̶n̶a̶l̶l̶y̶ during public speaking · 2018-09-14T21:35:47.641Z · score: 14 (4 votes) · LW · GW

It's also good to just ask whether your audience can hear you. Due to the Bystander Effect, audiences generally will not tell you even if you're completely unintelligible.

Pop-filters are great if you can get them. They help with the issue of plosives blowing up the mike.

Comment by moridinamael on Petrov corrigibility · 2018-09-12T22:34:00.938Z · score: 5 (3 votes) · LW · GW

I would consider it corrigible for the AI to tell Petrov about the problem. Not "I can't answer you" but "the texts I have on hand are inconclusive and unhelpful with respect to helping you solve your problem." This is, itself, informative.

If you're an expert in radar, and I ask you if you think something is a glitch or not, and you say you "can't answer", that doesn't tell me anything. I have no idea why you can't answer. If you tell me "it's inconclusive", that's informative. The information is that you can't really distinguish between a glitch and a real signal in this case. If I'm conservatively minded, then I'll increase my confidence that it's a glitch.
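To put toy numbers on that update (all invented): if glitches usually produce inconclusive readings and real launches usually don't, then hearing "inconclusive" should shift a conservative prior further toward glitch.

```python
# Toy Bayesian reading of why "inconclusive" is informative; numbers invented.
p_glitch = 0.5                      # prior: glitch vs. real signal
p_inconclusive_given_glitch = 0.8   # glitches rarely produce clean signatures
p_inconclusive_given_real = 0.3     # real launches usually corroborate

p_inconclusive = (p_inconclusive_given_glitch * p_glitch
                  + p_inconclusive_given_real * (1 - p_glitch))
posterior_glitch = p_inconclusive_given_glitch * p_glitch / p_inconclusive
print(f"P(glitch | 'inconclusive') = {posterior_glitch:.2f}")  # ~0.73, up from 0.50
```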

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T22:25:11.991Z · score: 6 (4 votes) · LW · GW

I have a guess as to how this would actually evolve.

While the median Christian is not particularly Christian, there probably are a good number of pretty excellent Christians, whose motivation for being thus is their commitment to the ideals that they profess. So it's possible - even likely - that Christianity actually makes the world a little bit more "in the image of Christ" on the margin.

If you have a billion Christians, the number of "actually pretty good" Christians is likely to be pretty high.

Right now we probably have barely thousands of Rationalists who would identify as such. An organized attempt at increasing that number, with a formal aspiration to be better rationalists, would increase the number of "actually pretty good" rationalists, although the median rationalist might just be somebody who read 4% of the Sequences and went to two meetups. But that would still be a win.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T16:32:15.963Z · score: 2 (1 votes) · LW · GW

My point was merely that you can found a club around an aspiration rather than an accomplishment. It's better to have the accomplishment, of course, but not necessary.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T16:03:12.244Z · score: 2 (1 votes) · LW · GW

I think there is something like a Platonic "ultimate textbook of human rationality" that may be written in the future, but we don't actually know its contents. That's why the visitor can't give us the book. We have a dual problem: not only the challenge of spreading the ideas, but actually pinning down what the ideas are in the first place.

Actually, I think "pinning down" has entirely the wrong connotations, because human rationality seems more like a living and breathing process rather than a list of maxims chiseled in stone, and to a degree culturally dependent.

I will say that I don't think you need to answer #0 concretely before you set out. We can guess at the contents of the Platonic rationality textbook, and then iterate as we converge upon it.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T15:54:30.348Z · score: 2 (1 votes) · LW · GW

I wonder about this. Is the average Christian more "Christian" than the average non-Christian? (Do they do good works for strangers, love and forgive their enemies, and live lives of poverty and service, at rates significantly above the population average?) If not, does that really affect their ability to grow? Has it really affected their ability to grow, historically?

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-12T15:51:12.867Z · score: 8 (3 votes) · LW · GW

I think MIRI also employed a hybrid strategy. I will say, it seems much easier to deploy a "go big or go home" approach after you've already created a minimum viable organization, rather than attempting to poach thinkfluencers without even having that groundwork in place.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-10T21:17:15.345Z · score: 11 (5 votes) · LW · GW

Regarding both follow-up questions, I have two answers:

Answer 1: I don't intend for this to be a dodge, but I don't think it really matters what I think. I don't think it's practical to construct "the perfect organization" in our imagination and then anticipate that its perfection will be realized.

I think what a rationality organization looks like in practice is a small group of essentially like-minded people creating a Schelling point by forming the initial structure, and then the organization evolves from there in ways that are not necessarily predictable, in ways that reflect the will of the people who have the energy to actually put into the thing.

What's interesting is that when I say it that way, I realize that it sounds like a recipe for disaster. But also note that essentially no other organization on Earth has been formed in any other way.

Answer 2: I personally would create separate organizational branches for epistemic and instrumental focus, such that both could use the resources of the other, but neither would be constrained by the rules of the other. Either branch could use whatever policies are most suited to themselves. Think two houses of a congress. Either of the branches could propose policies to govern the whole organization, which could be accepted or vetoed by the other branch. There's probably also a role for something like an elected executive branch, but at this point I am grasping well beyond my domain of expertise.

Comment by moridinamael on A Dialogue on Rationalist Activism · 2018-09-10T20:38:10.373Z · score: 18 (8 votes) · LW · GW

I felt like the OP was already quite long enough, and don't have time now to write the full followup post that this question deserves, but in brief, the thrust would be that any rationalist organization deserving of the name would carefully choose its norms, structure and bylaws to reflect those of the most successful existing organizations (empiricism!), with care taken to exclude the aspects of those organizations that are inimical to group or individual rationality. Thus, even if stoning apostates has proven to be an empirically useful organizational strategy from the perspective of growth, it's probably not something we want to emulate.

I'm not sure if we can actually offer an impossible-to-fake signal that we are on the "true path". I'm not sure if we even necessarily need or want to do that. In order to justify the existence of the "Don't Shoot Yourself in the Foot Club", you just need to demonstrate that not shooting yourself in the foot is better than the alternative, and I think we can do at least that, metaphorically.

Also, I actually suspect that any formal structure at all would probably be, on net, more of a good thing than a bad thing, in terms of growing the movement.

A Dialogue on Rationalist Activism

2018-09-10T18:30:01.130Z · score: 86 (36 votes)
Comment by moridinamael on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T00:07:28.586Z · score: 24 (11 votes) · LW · GW
Noun. pseudointellectual (plural pseudointellectuals): (1) A person who claims proficiency in scholarly or artistic activities while lacking in-depth knowledge or critical understanding. (2) A person who pretends to be of greater intelligence than he or she in fact is.

I don't think S.A. claims any proficiency or scholarly credentials that he doesn't have. He doesn't review books claiming to be some expert in reviewing books, and doesn't write essays claiming to be setting down eternal truths. Rather, he is openly exploratory and epistemically careful.

I certainly don't think he pretends to be smarter than he is. But of course, the use of this word in the original claim is probably an empty slur, meant to convey sentiment rather than content. I certainly hope the "pseudointellectual" part of the claim isn't important to the argument, since I think even Alexander's detractors would admit it is inaccurate.

Thus, one question in short form: "Given that a pseudointellectual is defined as one who claims proficiency while lacking in-depth knowledge and/or a person who pretends to greater intelligence than he possesses, do you actually believe Scott Alexander qualifies as a pseudointellectual? If so, could you elaborate on where specifically he has exaggerated his own proficiency, knowledge, or intelligence? If not, what did you actually mean by pseudointellectual?"

It's one thing to accuse somebody of being systematically wrong, another thing to accuse them of being systematically deceptive. I don't think my focus on this word choice can be trivially dismissed.

Also, it seems likely that if one of the roughly nine words in the quoted thesis was chosen carelessly, the underlying thought process will be likewise flimsy.

Comment by moridinamael on Theories of Pain · 2018-08-29T21:09:52.314Z · score: 2 (1 votes) · LW · GW

For some reason I'm having trouble finding a non-confrontational phrasing for this, but: Can I ask why you're asking, first? Collating everything I've tried over the last two decades would take a large amount of work, I would probably miss many things, and besides, almost everything I tried was completely pointless. For example, I could go into detail about past chiropractic treatments, but why bother, since I only attempted that out of desperation, and in the end, it had no effect? This post was my attempt at outlining the few things that did seem to matter, prior to the new drug. (Actually, the standard botox treatment for migraines also helped, in a relative sense, but that wasn't something that I would expect to generalize to most people.)

Comment by moridinamael on Theories of Pain · 2018-08-28T21:14:11.405Z · score: 6 (3 votes) · LW · GW

It's cool to see this, I'm glad you got something out of my speculations.

I'm still pretty baffled about chronic pain. After ten or fifteen years of increasingly bad chronic migraine and neck pain, and having tried every treatment in the book, I recently started on a newly FDA-approved drug which has very nearly cured the entire issue. The drug itself is a peptide which blocks a particular neurotransmitter receptor.

While I'm mostly happy about this beyond expression, I'm also retroactively frustrated by the fact that this "cure" is simply not something that one could ever approximate without the drug, and it doesn't really tell me anything about what is wrong with me that makes me prone to these issues in the first place.

Comment by moridinamael on Preliminary thoughts on moral weight · 2018-08-14T15:10:19.215Z · score: 8 (5 votes) · LW · GW

And if we are willing to ascribe moral weight to fruit flies, there must also be some corresponding non-zero moral weight to early-term human fetuses.

Comment by moridinamael on A Sarno-Hanson Synthesis · 2018-08-13T17:57:05.385Z · score: 2 (1 votes) · LW · GW

I don't really have much confidence in any of my assertions in the OP, but I will point out some possible questions that might challenge the "narrow" view.

1. Can you always/sometimes/ever subjectively correlate individual, transient spikes of pain/discomfort to particular external stimuli, and if so, what if those stimuli turn out to be extremely variable, as in, not reducible to one specific thing like "factual beliefs about pain's source"?

2. How does your theory explain ulcers, which don't have any obvious relationship to muscular tension?

3. How does your theory explain the fact that I can make pain/discomfort dissipate in the span of 10 seconds by asking, specifically, "What are you trying to tell me, Mr. Subconscious?" but not by simply thinking, "Oh, this pain is probably psychosomatic in origin."?

To comment further on question #1: when I'm in a particularly sensitive physical state, i.e. when my chronic pain is flaring up, I can walk down a street and simply take observations. I feel a flare of pain, and ask myself what's happening in my immediate surroundings. Perhaps I just walked by a homeless person. I didn't feel anything particular about the homeless person, but perhaps that's because I am "suppressing" some kind of disgust/annoyance for the homeless person, an emotional reaction which would be unacceptable to my conscious mind. Such an explanation feels post hoc and tenuous, but if I keep this monitoring process up for half an hour or so, I can't help but collate a list of pain flareups associated with events which specifically seem like they could be the kind of thing that my Id hates but my Superego is stifling.

Again, this whole topic is weird and confusing to me, I place low confidence in any particular beliefs expressed here.

Comment by moridinamael on Sandboxing by Physical Simulation? · 2018-08-01T15:00:23.220Z · score: 2 (1 votes) · LW · GW

I agree that there is practically no purpose to using this kind of method if you are just going to give the AI information about our reality anyway.

Sandboxing by Physical Simulation?

2018-08-01T00:36:32.374Z · score: 12 (9 votes)
Comment by moridinamael on A Sarno-Hanson Synthesis · 2018-07-12T18:30:49.985Z · score: 6 (4 votes) · LW · GW

I've read a couple of meditation books (TMI and a couple others, haven't finished MCTB yet) and am a novice meditator, and I will remark that I consciously stopped meditating a while back because ... the act of meditating was causing incredible physical tension and eventually pain. This was very frustrating obviously but I wasn't convinced that "meditating through it" would work and couldn't figure out any other way through.

In retrospect, in this provisional Sarno framework, it seems possible that the pain and unpleasantness of meditation was a subconscious reaction to really not wanting to meditate. Knowing myself, paying attention to something that I don't want to pay attention to breeds a high level of resentment, and meditation is the purified and crystallized form of that. I'll tentatively try meditating again and see if the tension returns, with this perspective.

Also, I do really need to read MCTB.

Comment by moridinamael on A Sarno-Hanson Synthesis · 2018-07-12T18:17:36.686Z · score: 4 (3 votes) · LW · GW

Could you point me to a good resource/overview on coherence therapy? I've looked into it before but never quite got a handle on it.

I actually agree completely that the subconscious is often in the right. I think that question of which subagent is actually right is somewhat orthogonal to the chronic pain phenomenon, but I would be interested to learn more. For example, you may actually be doing far more than your share of work in the community without garnering any recognition for it, creating a net drain on your resources, and you continue to do so because that's what a Good Person would do, so your subconscious tries to intervene in way that is fundamentally self-protective but quite messy in execution. In this case the analysis by the subconscious is "right" but the cure is worse than the disease.

A Sarno-Hanson Synthesis

2018-07-12T16:13:36.158Z · score: 45 (22 votes)
Comment by moridinamael on OpenAI releases functional Dota 5v5 bot, aims to beat world champions by August · 2018-06-27T19:02:33.639Z · score: 30 (7 votes) · LW · GW
OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores — a larger-scale version of the system we built to play the much-simpler solo variant of the game last year. Using a separate LSTM for each hero and no human data, it learns recognizable strategies. This indicates that reinforcement learning can yield long-term planning with large but achievable scale — without fundamental advances, contrary to our own expectations upon starting the project.

RL researchers (including ourselves) have generally believed that long time horizons would require fundamentally new advances, such as hierarchical reinforcement learning. Our results suggest that we haven’t been giving today’s algorithms enough credit — at least when they’re run at sufficient scale and with a reasonable way of exploring.

From a Hacker News comment by one of the researchers:

We are very encouraged by the algorithmic implication of this result — in fact, it mirrors closely the story of deep learning (existing algorithms at large scale solve otherwise unsolvable problems). If you have a very hard problem for which you have a simulator, our results imply there is a real, practical path towards solving it. This still needs to be proven out in real-world domains, but it will be very interesting to see the full ramifications of this finding.

In other words: Current algorithms do seem to be able to tackle levels of sophistication (long time horizons, imperfect information, high-dimensional option space) that even experienced researchers wouldn't have predicted, if you give them enough compute. And this person suggests that they could tackle even more sophisticated problems as long as you have a simulator for the problem domain.
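For context on how simple the core algorithm is, here's a minimal numpy sketch of PPO's clipped surrogate objective. This is the textbook form from the PPO paper, not OpenAI Five's actual training code:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped policy-gradient objective from Schulman et al. 2017.
    logp_new/logp_old: log-probabilities of the taken actions under the
    current and behavior policies; advantages: estimated advantages."""
    ratio = np.exp(logp_new - logp_old)             # importance sampling ratio
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))  # maximize this

# Tiny example: one action with positive advantage, one with negative.
logp_old = np.log(np.array([0.5, 0.5]))
logp_new = np.log(np.array([0.7, 0.3]))
adv = np.array([1.0, -1.0])
print(ppo_clip_objective(logp_new, logp_old, adv))  # -> ~0.2
```

The clipping just stops any single update from moving the policy too far; everything impressive in the result above comes from running this at enormous scale.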

Comment by moridinamael on The Monthly Newsletter as Thinking Tool · 2018-06-22T15:54:04.782Z · score: 7 (1 votes) · LW · GW

Awesome! May I ask how you're going about it? Sending to a small group of friends or keeping it to yourself, what software you're using, how long the entries end up being? Just curious.

Comment by moridinamael on The Berkeley Community & The Rest Of Us: A Response to Zvi & Benquo · 2018-05-21T12:27:17.038Z · score: 30 (7 votes) · LW · GW

So its hard to tell people to refrain from moving to Berkeley

I apologize for possibly/probably twisting your words a bit here, but I never have trouble telling people to refrain from moving to the Bay/Berkeley. I tell them I lived there for a few years and it’s a pretty unpleasant place, objectively, along any of ten different metrics relevant to comfort and peace of mind. I tell them I never actually developed any sense of belonging with the local Rationalist Community, so it’s not guaranteed that that will happen. I tell them I make what would be a pretty good amount of money in most cities, but since I’m not a Comp Sci grad, that doesn’t translate to a decent living in Berkeley. I tell them that, on top of that, Berkeley is one of the most expensive places to live in the world, and if there were some kind of objective ratio of cost of living divided by objective comfort/quality/value-of-a-dollar, Berkeley would be near the top worldwide.

I also don’t find the proposition that you have to literally move to an expensive unpleasant overcrowded dystopian city in order to be rational to be particularly, uh, rational.

Comment by moridinamael on Duncan Sabien: "In Defense of Punch Bug" · 2018-05-03T16:02:40.974Z · score: 66 (22 votes) · LW · GW

A: (punches B) Slug bug!

B: (immediately punches A back, roughly equally hard)

A: Hey! You don't get to hit me back. That's the rules.

B: I understand. However, I was actually playing Iterated Prisoner's Dilemma. And so were you, by the way. Furthermore, I wasn't aware that we were playing Slug Bug, so my prior has to be that you were actually just demonstrating or testing your physical dominance over me.

A: We're friends! We've known each other for forty years! You're godfather to my children! I'm married to your sister! Why would I be demonstrating physical dominance over you?

B: I guess this is one of those better-safe-than-sorry situations. I actually do trust you to lay down your life for mine, but I don't trust you to perfectly and continuously control your human status impulses. You stepped over a line, I checked you on it. Now we can proceed with neither of us wondering if that punch had any hidden implications to our relationship. By the way, now that you've announced we're playing Slug Bug, I'm game for it.

A: But I wasn't playing this "status-regulation-prisoner's-dilemma" that you're describing, I was playing Slug Bug. Under my rules, I should just punch you a second time!

B: Do what you must. Be aware that I will just punch you back, again.

A: You're so insecure! I bodily carried you through a minefield and on a separate occasion threw myself onto a grenade to save you.

B: I consider the fact that you obviously love and cherish me to be completely separate from the issue of instinctive dominance and status behavior. Put it this way: if you expect the dynamics of our relationship, and its base of sacred trust, to permit you to punch me for no apparent reason, then you must symmetrically expect our relationship to permit me applying a very mild, exactly proportionate corrective measure. In fact, I'm far more concerned that you're surprised and angry that I punched you back than I was concerned that you punched me in the first place. Your reaction implies that you did assume an asymmetry in our positions. I'll put it even more clearly: You can't punch somebody, expect them not to make a big deal of it, and then proceed to make a big deal of it when they punch you, regardless of what game you think you're playing in your own mind.

A: (punches B) Slug bug! You weren't paying attention.

B: Dammit.

Comment by moridinamael on A Few Tips on How to Generally Feel Better (and Avoid Headaches) · 2018-05-01T21:03:42.550Z · score: 5 (1 votes) · LW · GW

Drinking enough water is important. I’ve started adding a pinch of salt to my water to make sure I’m not messing up my osmotic balance. But that’s one of those “maybe it helps a little bit?” type interventions that I intentionally withheld from adding to the post. I wanted to only include things I was really sure about.

A Few Tips on How to Generally Feel Better (and Avoid Headaches)

2018-04-30T16:02:24.144Z · score: 47 (14 votes)

My Hammertime Final Exam

2018-03-22T18:40:55.539Z · score: 37 (8 votes)

Spamming Micro-Intentions to Generate Willpower

2018-02-13T20:16:09.651Z · score: 67 (22 votes)

Fun Theory in EverQuest

2018-02-05T20:26:34.761Z · score: 77 (23 votes)

The Monthly Newsletter as Thinking Tool

2018-02-02T16:42:49.325Z · score: 89 (25 votes)

"Slow is smooth, and smooth is fast"

2018-01-24T16:52:23.704Z · score: 66 (24 votes)

What the Universe Wants: Anthropics from the POV of Self-Replication

2018-01-12T19:03:34.044Z · score: 17 (7 votes)

A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems

2018-01-02T18:07:20.456Z · score: 14 (6 votes)


2017-11-10T16:57:33.700Z · score: 24 (8 votes)

The Five Hindrances to Doing the Thing

2017-09-25T17:04:53.643Z · score: 11 (9 votes)

Measuring the Sanity Waterline

2016-12-06T20:38:57.307Z · score: 4 (5 votes)

Jocko Podcast

2016-09-06T15:38:41.377Z · score: 9 (10 votes)

Deepmind Plans for Rat-Level AI

2016-08-18T16:26:05.540Z · score: 20 (23 votes)

Flowsheet Logic and Notecard Logic

2015-09-09T16:42:35.321Z · score: 29 (30 votes)

Less Wrong Business Networking Google Group

2014-04-24T14:45:21.253Z · score: 7 (8 votes)

Bad Concepts Repository

2013-06-27T03:16:14.136Z · score: 22 (24 votes)

Towards an Algorithm for (Human) Self-Modification

2011-03-29T23:40:26.774Z · score: 29 (32 votes)