Posts

Mental subagent implications for AI Safety 2021-01-03T18:59:50.315Z
New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast 2020-12-21T17:21:43.895Z
An Invitation to the Guild of Servants — Alpha 2020-08-14T19:03:23.973Z
We Want MoR (HPMOR Discussion Podcast) Completes Book One 2020-02-19T00:34:15.864Z
moridinamael's Shortform 2019-11-25T03:16:22.857Z
Literature on memetics? 2019-11-07T19:18:05.748Z
Complex Behavior from Simple (Sub)Agents 2019-05-10T21:44:04.895Z
A Dialogue on Rationalist Activism 2018-09-10T18:30:01.130Z
Sandboxing by Physical Simulation? 2018-08-01T00:36:32.374Z
A Sarno-Hanson Synthesis 2018-07-12T16:13:36.158Z
A Few Tips on How to Generally Feel Better (and Avoid Headaches) 2018-04-30T16:02:24.144Z
My Hammertime Final Exam 2018-03-22T18:40:55.539Z
Spamming Micro-Intentions to Generate Willpower 2018-02-13T20:16:09.651Z
Fun Theory in EverQuest 2018-02-05T20:26:34.761Z
The Monthly Newsletter as Thinking Tool 2018-02-02T16:42:49.325Z
"Slow is smooth, and smooth is fast" 2018-01-24T16:52:23.704Z
What the Universe Wants: Anthropics from the POV of Self-Replication 2018-01-12T19:03:34.044Z
A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems 2018-01-02T18:07:20.456Z
Mushrooms 2017-11-10T16:57:33.700Z
The Five Hindrances to Doing the Thing 2017-09-25T17:04:53.643Z
Measuring the Sanity Waterline 2016-12-06T20:38:57.307Z
Jocko Podcast 2016-09-06T15:38:41.377Z
Deepmind Plans for Rat-Level AI 2016-08-18T16:26:05.540Z
Flowsheet Logic and Notecard Logic 2015-09-09T16:42:35.321Z
Less Wrong Business Networking Google Group 2014-04-24T14:45:21.253Z
Bad Concepts Repository 2013-06-27T03:16:14.136Z
Towards an Algorithm for (Human) Self-Modification 2011-03-29T23:40:26.774Z

Comments

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T01:59:32.961Z · LW · GW

Interesting. The market has not risen much since the announcement of the Moderna and Pfizer vaccines, so I'd have a hard time causally connecting the market's current level to the vaccine announcements.

My feeling was that the original sell-off in February and early March was due to the fact that we were witnessing an unprecedented-in-our-lifetimes event, where anything could happen. A more contagious form of the same virus will trigger a mass selloff only if investors believe that other investors believe that the news of the new strain is bad enough to trigger a panic selloff.

There are too many conflicting things going on for me to make confident claims about timelines and market moves, but I really do doubt the story that the market is up ~13% relative to January of this year simply because investors anticipate a quick return to normal.

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T22:21:38.506Z · LW · GW

People are both buying more equities and selling less, because (1) their expenses are low, due to lockdowns and the impossibility of travel, and (2) the market is going up very fast, and retail investors don't want to sell out of a bull market. There's obviously more going on than just this; retail investors are a minority of all invested capital, but even the big firms appear to have nowhere else to put their money right now. So as long as the lockdowns persist, both household and corporate expenses will stay flat.

Even if my entire previous paragraph is wrong and dumb, you can simply observe that the market has soared ever since the original panic-crash, and ask why the virus becoming more contagious would cause a different consequence than what we've already seen.

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T19:42:20.092Z · LW · GW

Continued lockdowns will likely drive the markets higher, right? A more infectious strain might tend to increase the rate of lockdowns, even as the vaccine rollout continues. So I would just buy-and-hold and then magically know exactly the right time to bail, when lockdowns look like they're about to end.

Comment by moridinamael on New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast · 2020-12-22T20:35:33.056Z · LW · GW

A theme is by definition an idea that appears repeatedly, so the easiest method is to just sit back and notice what ideas are cropping up more than once. The first things you notice will by default be superficial, but after reflection you can often home in on a more concise and deep statement of what the themes are.

For example, a first pass at HPMOR might pick out "overconfidence" as a theme, because Harry and other characters are repeatedly overconfident in ways that lead to costly errors. But continued consideration would show that the concept is both more specific and deeper than just "overconfidence", and ties into a whole thesis about Rationality: what Rationality is and isn't (as Eliezer says, providing positive and negative examples), and why it's a good idea.

Another strategy is to simply observe any particular thing that appears in the book and ask "why did the author do that?" The answer, for fiction with any degree of depth, is almost never going to be "because it was entertaining." Even a seemingly shallow gag like Ron Weasley's portrayal in HPMOR is still articulating something.

If this is truly a thing you're interested in getting better at, I would suggest reading books that don't even have cool powerful characters. For example, most things by Ursula Le Guin are going to feel very unsatisfying if you read them with the attitude that you're supposed to be watching a cool dude kick ass, but her books are extremely rewarding in other ways. Most popular genre fare is full of wish-fulfillment narratives, though there's still a lot of genre fiction that doesn't indulge itself in this way. And there's nothing intrinsically wrong with reading for the wish fulfillment, either.

I'm not sure I can name any podcast that offers exclusively "definitive" (that is, author-intended) readings and interpretations, but my own podcast typically goes into themes.

Comment by moridinamael on Pain is not the unit of Effort · 2020-11-25T16:07:09.047Z · LW · GW

I very recently noticed something was wrong with my mental stance when I realized I was responding to work agenda items with some variation of the phrase, "Sure, that shouldn't be too painful." Clearly the first thing that came to mind when contemplating a task wasn't how long it would take, what resources would be needed, or how to do it, but rather how much suffering I would have to go through to accomplish it. This realization actually motivated some deeper changes in my lifestyle. Seeing this post here was extremely useful and timely for me.

Comment by moridinamael on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-22T19:52:28.091Z · LW · GW

Is there some additional reason to be concerned about side effects even after the point at which the vaccine has passed all the required trials, relative to the level of concern you should have about any other new vaccine?

Comment by moridinamael on the scaling “inconsistency”: openAI’s new insight · 2020-11-07T16:56:49.823Z · LW · GW

I really appreciated the degree of clarity and the organization of this post.

I wonder how much the slope of L(D) is a consequence of the structure of the dataset, and whether we have much power to meaningfully shift the nature of L(D) for large datasets. A lot of the structure of language is very repetitive, and once it is learned, the model doesn't learn much from seeing more examples of the same sort of thing.  But, within the dataset are buried very rare instances of important concept classes. (In other words, the Common Crawl data has a certain perplexity, and that perplexity is a function of both how much of the dataset is easy/broad/repetitive/generic and how much is hard/narrow/unique/specific.) For example: I can't, for the life of me, get GPT-3 to give correct answers on the following type of prompt:

You are facing north. There is a house straight ahead of you. To your left is a mountain. In what cardinal direction is the mountain?

No matter how much priming I give or how I reframe the question, GPT-3 tends to either give a basically random cardinal direction or just repeat whatever direction I mentioned in the prompt. If you can figure out how to do it, please let me know, but as far as I can tell, GPT-3 really doesn't understand how to do this. I think this is just an example of the sort of thing that occurs so infrequently in the dataset that it hasn't learned the abstraction. However, I strongly suspect that if there were some corner of the Internet where people wrote a lot about the cardinal directions of things relative to a specified observer, GPT-3 would learn it.

It also seems that one of the important things that humans do but transformers do not is actively seek out more surprising subdomains of the learning space. The big breakthrough in transformers was attention, but currently the attention is only within-sequence, not across-dataset. What does L(D) look like if the model is empowered to notice, while training, that its loss on sequences involving words like "west" and "cardinal direction" is bad, and then to search for and prioritize other sequences with those tokens, rather than simply churning through the next 1000 examples of sequences from which it has essentially already extracted the maximum amount of information? At a certain point, you don't need to train it on "The man woke up and got out of {bed}"; it knew what the last token was going to be long ago.
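
As a toy illustration of the kind of loss-prioritized sampling I have in mind (the scoring function, dataset, and batch size here are all made up for the example; this is not how GPT-3 was actually trained):

```python
import random

def surprisal(sequence):
    # Stand-in for the model's current loss on this sequence; in a real
    # training loop this would be the LM's negative log-likelihood.
    return random.random() * (1.0 + sequence.count("cardinal"))

def next_batch(dataset, batch_size=2):
    # Score every candidate sequence and preferentially train on the ones
    # the model still finds surprising, instead of churning through
    # sequences it has essentially already mastered.
    return sorted(dataset, key=surprisal, reverse=True)[:batch_size]

dataset = [
    "the man woke up and got out of bed",
    "in what cardinal direction is the mountain",
    "to your left is a mountain",
    "the cat sat on the mat",
]
print(next_batch(dataset))
```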

It would be good to know if I'm completely missing something here.

Comment by moridinamael on Is Stupidity Expanding? Some Hypotheses. · 2020-10-19T03:37:17.838Z · LW · GW

By “meme” I mean Dawkins’ original definition. A meme is just any idea to which Darwinian selection forces apply. For example, a good idea will be gradually stripped of nuance and accuracy as it passes through the communication network, and eventually becomes dumb.

Comment by moridinamael on Is Stupidity Expanding? Some Hypotheses. · 2020-10-15T21:26:58.620Z · LW · GW

We've built a bunch of tools for instant mind-to-mind communication, with built-in features that amplify communiques that are short, simple and emotional. Over the last ten years an increasingly large fraction of all interpersonal communication has passed through these "dumbpass filter" communication systems. This process has systematically favored memes that are stupid. When everyone around you appears to be stupid, it makes you stupid. Even if you aren't on these communication platforms, your friends are, and their brains are being filled up with finely-honed, evolutionarily optimized stupidity.

Comment by moridinamael on Rationality and Climate Change · 2020-10-07T14:43:42.809Z · LW · GW

Not sure that I disagree with you at all on any specific point.

It's just that "Considering the possibility that a technological fix will not be available" actually looks like staring down the barrel of a civilizational gun. There is no clever policy solution that dodges the bullet. 

If you impose a large carbon tax, or some other effective global policy of austerity that reduces fossil fuel use without replacing that energy somehow, you're just making the whole world poor: electricity, food, transportation and medical bills go up above even their currently barely affordable levels, and the growth of the developing world is halted and probably reversed. If your reason for imposing a carbon tax is not "to incentivize tech development" but instead "to punish people for using energy", then people will revolt. There were riots in France over a relatively modest gasoline tax. An actual across-the-board implementation of "austerity" in some form would either be repealed quickly, lead to civilizational collapse and mass death, or both. If you impose a small carbon tax (or some other token gesture at austerity and conservation), it will simply not be adequate to address the issue. It will at best impose a very slight damping on the growth function. This is what I mean when I say there is no practical policy proposal that addresses the problem. It is technology, or death. If you know of a plan that persuasively and quantitatively argues otherwise, I'd love to see it.

Comment by moridinamael on Rationality and Climate Change · 2020-10-05T15:13:52.325Z · LW · GW

Epistemic status: You asked, so I'm answering, though I'm open to having my mind changed on several details if my assumptions turn out to be wrong. I probably wouldn't have written something like this without prompting. If it's relevant, I'm the author of at least one paper commissioned by the EPA on climate-related concerns.

I don't like the branding of "Fighting Climate Change" and would like to see less of it. The actual goal is providing energy to sustain the survival and flourishing of 7.8+ billion people, fueling a technologically advanced global civilization, while simultaneously reducing the negative externalities of energy generation. In other words, we're faced with a multi-dimensional optimization problem, while the rhetoric of "Fighting Climate Change" almost universally only addresses the last dimension, reducing externalities. Currently 80% of worldwide energy comes from fossil fuels and only 5% comes from renewables. So, simplistically, renewables need to generate 16x as much energy as they do right now. This number is "not so bad" if you assume that technology will continue to develop, putting renewables on an exponential curve, and "pretty bad" if you assume that renewables continue to be implemented at about the current rate.

And we need more energy generating capacity than we have now. A lot more. Current energy generation capacity only really provides a high standard of living for a small percentage of the world population. Everybody wants to lift Africa out of poverty, but nobody seems interested in asking how many new power plants that will require. These power plants will be built with whatever technology is cheapest. We cannot dictate power plant construction policy in the developing world; all we can do is try to make sure that better technologies exist when those plants are built.

I have seen no realistic policy proposal that meaningfully addresses climate change through austerity (voluntary reduced consumption) or increased energy usage efficiency. These sorts of things can help on the margins, but any actual solution will involve technology development. Direct carbon capture is also a possible target for technological breakthrough.

Comment by moridinamael on Three car seats? · 2020-10-01T20:23:46.628Z · LW · GW

https://www.multimac.com/p/multimac_1320_4_seater

£1599.00 =)

It's pretty cool, but hardly a slam-dunk rejoinder if the whole issue in question is whether having a 3rd or 4th child is discontinuously costly due to sedan width.

Personally, I just ended up buying a minivan.

Comment by moridinamael on Three car seats? · 2020-10-01T16:21:12.253Z · LW · GW

It qualifies as a trivial inconvenience. We had to essentially buy three new car seats when we had our third, because the two that we were using for the first two kids took up too much space, and needed to be replaced with thinner versions.

It does seem like having four children would pose more serious difficulties, since you can no longer fit four young children in a sedan no matter what you do.

Comment by moridinamael on moridinamael's Shortform · 2020-09-01T14:54:01.167Z · LW · GW

I'm writing an effortpost on this general topic but wanted to gauge reactions to the following thoughts, so I can tweak my approach.

I was first introduced to rationality about ten years ago and have been a reasonably dedicated practitioner of this discipline that whole time. The first few years saw me making a lot of bad choices. I was in the Valley of Bad Rationality; I didn't have experience with these powerful tools, and I made a number of mistakes.

My own mistakes had a lot to do with overconfidence in my ability to model and navigate complex situations. My ability to model and understand myself was particularly lacking.

In the more proximal part of this ten year period -- say, in the last five years -- I've actually gotten a lot better. And I got better, in my opinion, because I kept on thinking about the world in a fundamentally rationalist way. I kept making predictions, trying to understand what happened when my predictions went wrong, and updating both my world-model and my meta-model of how I should be thinking about predictions and models.

Centrally, I acquired an intuitive, gut-level sense of how to think about situations where I could only see a certain angle, where I was either definitely or probably missing information, or which involved human psychology. You could classify another major improvement as being due generally to "actually multiplying probabilities semi-explicitly instead of handwaving": e.g., it's pretty unlikely that two things with independent 30% odds of being true are both true. You could say that through trial and error I came to understand why no wise person attempts a plan where more than one thing has to happen "as planned".
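
Spelled out, the arithmetic behind that intuition:

$$P(A \wedge B) = P(A)\,P(B) = 0.3 \times 0.3 = 0.09,$$

and a plan in which each of three independent steps goes "as planned" with probability 0.7 only comes off about $0.7^3 \approx 0.34$ of the time.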

I think if you had asked me at the 5 year mark if this rationality thing was all it was cracked up to be, I very well might have said that it had led me to make a lot of bad decisions and execute bad plans, but after 10 years, and especially the last year or three, it has started working for me in a way that it didn't before.

Comment by moridinamael on Building brain-inspired AGI is infinitely easier than understanding the brain · 2020-06-02T15:00:39.008Z · LW · GW

FWIW I have come to similar conclusions along similar lines. I've said that I think human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone. Rat intelligence requires a long list of neural structures fine-tuned by natural selection, over tens of millions of years, to enable the rat to do very specific survival behaviors right out of the womb. How many individual fine-tuned behaviors? Hundreds? Thousands? Hard to say. Human intelligence, by contrast, cannot possibly be this fine tuned, because the same machinery lets us learn and predict almost arbitrarily different* domains.

I also think that recent results in machine learning have essentially proven the conjecture that moar compute regularly and reliably leads to moar performance, all things being equal. The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work. In other words, the neocortex needs trillions of synapses to do what it does for much the same reason that GPT-3 can do things that GPT-2 can't. Size matters, at least for this particular class of architectures.

*I think this is actually wrong - I don't think we can learn arbitrarily different domains, not even close. Humans are not general. Yann LeCun has repeatedly said this and I'm inclined to trust him. But I think that the human intelligence architecture might be general. It's just that natural selection stopped seeing a net fitness advantage at the current brain size.

Comment by moridinamael on What are your greatest one-shot life improvements? · 2020-05-17T15:55:01.378Z · LW · GW

I grew up in warm climates and tend to suffer a lot in cold weather. I moved to a colder climate a few years ago and discovered scarves. Wearing scarves eliminated 90% of this suffering. Scarves are not exactly a bold and novel invention, but people from warm climates may underestimate their power.

Comment by moridinamael on LessWrong Coronavirus Agenda · 2020-03-18T12:26:48.380Z · LW · GW

Scaling up testing seems to be critical. With easy, fast and ubiquitous testing, huge numbers of individuals could be tested as a matter of routine, and infected people could begin self-isolating before showing symptoms. With truly adequate testing policies, the goal of true "containment" could potentially be achieved, without the need to resort to complete economic lockdown, which causes its own devastating consequences in the long term.

Cheap, fast, free testing, possibly with an incentive to get tested regularly even if you don't feel sick, could move us beyond flattening the curve and into actual containment.

Even a test with relatively poor accuracy helps, in terms of flattening the curve, provided it is widely distributed.

So I might phrase this as a set of questions:

  • Should I get tested, if testing is available?
  • How do we best institute wide-scale testing?
  • How do we most quickly enact wide-scale testing?

Comment by moridinamael on How Do You Convince Your Parents To Prep? To Quarantine? · 2020-03-16T15:52:23.171Z · LW · GW

As my brother pointed out to me, arguments are not won in real time. Give them information in packets and calmly deal with objections as they come up, then disengage and let them process.

Comment by moridinamael on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-02-27T00:58:45.347Z · LW · GW

Perhaps there’s some obvious way in which I’m misunderstanding, but if10% of people contract the virus over a shortish time frame then won’t essentially everyone contract it eventually? Why would it reach a 10% penetration and then stop? Isn’t this like asking what happens if 10% of people contract influenza? Maybe in a given year your odds of getting the flu are X% but your odds of getting it once in 10 years are roughly 10*X%. Am I missing something that implies the virus will be corralled and gotten under control at a certain point?

Comment by moridinamael on The two-layer model of human values, and problems with synthesizing preferences · 2020-01-27T17:48:03.610Z · LW · GW

Fantastic post; I'm still processing.

One bite-sized thought that occurs to me is that maybe this coupling of the Player and the Character is one of the many things accomplished by dreaming. The mind-system confabulates bizarre and complex scenarios, drawn in some sense from the distribution of possible but not highly probable sensory experiences. The Player provides an emotional reaction to these scenarios - you're naked in school, you feel horrifying levels of embarrassment in the dream, and the Character learns to avoid situations like this one without ever having to directly experience it.

I think that dreaming does this sort of thing in a general way, by simulating scenarios and using those simulations to propagate learning through the hierarchy, but in particular it would seem that viewing the mind in terms of Player/Character gives you a unique closed-loop situation that really bootstraps the ability of the Character to intuitively understand the Player's wishes.

Comment by moridinamael on How Doomed are Large Organizations? · 2020-01-22T19:07:49.137Z · LW · GW

I would love to see an answer to or discussion of this question. The premise of the OP that large companies would be better off if split into much much smaller companies is a shocking and bold claim. If conglomeration and growth of large firms were a purely Molochian and net-negative proposition, then the world would look different than it does.

Comment by moridinamael on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T16:05:58.610Z · LW · GW

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence" where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning" it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization", where "utility" is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you're lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.

Possible courses of action include:

1. Brute forcing it, just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.

2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn't enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.

3. Giving up, basically. Determining that you'd rather just do things that don't make you miserable, even if you're being a bad utilitarian. This will cause ongoing low-level dissonance as you're aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.

There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.

The fact that utilitarianism is not only impossible for humans to execute but actually a potential cause of great internal suffering to even know about is probably not talked about enough.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T18:22:44.928Z · LW · GW

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as being Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a badguy the subject of a discussion rather than being an act of unremarked moderator fiat basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T17:27:12.568Z · LW · GW

I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T16:21:19.900Z · LW · GW

If you're looking for feedback ...

On one level I appreciate this post, as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as a moderately disgusting admission, considering the specific subject matter, but I'm also pretty confident that most people feel the same, deep down.) I do think there is a degree of value in understanding the thought processes behind community moderation, but I think that value is mixed.

On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it's not like there's anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn't even try to open up a dialogue, and usually gets away with it.

Fundamentally you stand to gain little and lose much by making posts like this, and now I've spent my morning indulging myself reading up on drama that has not improved my life in any way.

Comment by moridinamael on Mental Mountains · 2019-11-27T20:03:20.857Z · LW · GW

Maybe, but I don't think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.

Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.

Comment by moridinamael on moridinamael's Shortform · 2019-11-25T03:16:26.503Z · LW · GW

I thought folks might enjoy our podcast discussion of two of Ted Chiang's stories, Story of Your Life and The Truth of Fact, the Truth of Feeling.

Comment by moridinamael on Fibromyalgia, Pain & Depression. How much is due to physical misalignment? · 2019-10-30T04:37:44.757Z · LW · GW

Thanks for writing this up. Do you think massage would materially help with this type of issue?

I've been able to help a few people (including myself) with chronic neck/shoulder pain by getting them to use their rhomboids rather than their trapezius to hold their shoulders back. The rhomboids have a significant mechanical advantage for that purpose. Most people can't even intentionally activate their rhomboids; they have no kinesthetic awareness of even possessing them. I wondered if you had a response to this, within the framework of the "main muscles of movement".

Comment by moridinamael on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-30T01:30:07.580Z · LW · GW

My examples of subagents appearing to mysteriously answer questions were meant to suggest that there are subtle things that IFS explains/predicts which aren't automatically explained in other models. Examples of phenomena that contradict the IFS model would be even more useful, though I'm failing to think of what those would look like.

Comment by moridinamael on On Internal Family Systems and multi-agent minds: a reply to PJ Eby · 2019-10-29T21:23:30.879Z · LW · GW

I'm still not sure what it would mean for humans to actually have subagents, versus to just behave exactly as if they have subagents. I don't know what empirical finding would distinguish between those two theories.

There are some interesting things that crop up during IFS sessions that I think require explanation.

For example, I find it surprising that you can ask the Part a verbal question, and that part will answer in English, and the answer it gives can often be startling, and true. The whole process feels qualitatively different from just "asking yourself" that same question. It also feels qualitatively different from constructing fictional characters and asking them questions.

I also find that taking an IFS approach, in contrast to a pure Focusing approach, results in much more dramatic and noticeable internal/emotional shifts. The IFS framework is accessing internal levers that Focusing alone isn't.

One thing I wanted to show with my toy model, but didn't really succeed in showing, was that arranging an agent architecture where certain functions belong to the "subagents" rather than the "agent" can be more elegant, more parsimonious, or strictly simpler. Philosophically, I would have preferred to write the code without using any for loops, because I'm pretty sure human brains never do anything that looks like a for loop. Rather, all of the subagents are running constantly, in parallel, and doing something more like message-passing according to their individual needs. The "agent" doesn't check each subagent, sequentially, for its state; the subagents proactively inject their states into the global workspace when a certain threshold is met. This is almost certainly how the brain works, regardless of whether you want to call the pieces "subagents" or "neural submodules" or something else. In this light, at least algorithmically, it would seem that the submodules do qualify as agents, in most senses of the word.
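
A minimal sketch of that kind of architecture, with hypothetical subagent names and thresholds (not the actual code from my toy model):

```python
import queue
import threading
import time

workspace = queue.Queue()  # the shared "global workspace"

def subagent(name, threshold, signal_strength):
    # Each subagent runs on its own, tracking its private state, and only
    # injects a message into the global workspace when its internal signal
    # crosses its threshold; the "agent" never polls it.
    activation = 0.0
    while True:
        activation += signal_strength
        if activation >= threshold:
            workspace.put((name, activation))
            activation = 0.0
        time.sleep(0.01)

for name, threshold, strength in [("hunger", 3.0, 0.9), ("social-worry", 5.0, 0.4)]:
    threading.Thread(target=subagent, args=(name, threshold, strength),
                     daemon=True).start()

# The "agent" just consumes whatever the subagents proactively broadcast.
deadline = time.time() + 0.5
while time.time() < deadline:
    try:
        name, strength = workspace.get(timeout=0.1)
        print(f"{name} grabbed attention with strength {strength:.1f}")
    except queue.Empty:
        pass
```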

Comment by moridinamael on The first step of rationality · 2019-09-29T19:06:35.981Z · LW · GW

Unfortunately there are many prominent examples of Enlightened/Awakened/Integrated individuals who act like destructive fools and ruin their lives and reputations, often through patterns of abusive behavior. When this happens over and over, I don't think it can be written off as "oh those people weren't actually Enlightened." Rather, I think there's something in the bootstrapping dynamics of tinkering with your own psyche that predictably (sometimes) leads in this direction.

My own informed guess as to how this happens is something like this: imagine your worst impulse arising, and imagine that you've been so careful to take every part of yourself seriously that you take that impulse seriously rather than automatically swatting it away with the usual superegoic separate shard of self; imagine that your normal visceral aversion to following through on that terrible impulse is totally neutralized, toothless. Perhaps you see the impulse arise and you understand intellectually that it's Bad but somehow its Badness is no longer compelling to you. I don't know. I'm just putting together the pieces of what certain human disasters have said.

Anyway, I don't actually think you're wrong to think integration is an important goal. The problem is that integration is mostly neutral. You can integrate in directions that are holistically bad for you and those around you, maybe even worse than if you never attempted it in the first place.

Comment by moridinamael on September Bragging Thread · 2019-08-31T00:24:11.749Z · LW · GW

The podcasting network that I own and co-run has hit some major internal milestones recently. It's extremely gratifying to see four years of work begin to pay off. I'm continually amazed at the progress we've made, and proud of the community we've built.

Comment by moridinamael on In defense of Oracle ("Tool") AI research · 2019-08-07T20:04:52.401Z · LW · GW

Regarding the comment about Christiano, I was just referring to your quote in the last paragraph, and it seems like I misunderstood the context. Whoops.

Regarding the idea of a singleton, I mainly remember the arguments from Bostrom's Superintelligence book and can't quote directly. He summarizes some of the arguments here.


Comment by moridinamael on In defense of Oracle ("Tool") AI research · 2019-08-07T16:05:28.993Z · LW · GW

You made a lot of points, so I'll be relatively brief in addressing each of them. (Taking at face value your assertion that your main goal is to start a discussion.)

1. It's interesting to consider what it would mean for an Oracle AI to be good enough to answer extremely technical questions requiring reasoning about not-yet-invented technology, yet still "not powerful enough for our needs". It seems like if we have something that we're calling an Oracle AI in the first place, it's already pretty good. In which case, it was getting to that point that was hard, not whatever comes next.

2. If you actually could make an Oracle that isn't secretly an Agent, then sure, leveraging a True Oracle AI would help us figure out the general coordination problem, and any other problem. That seems to be glossing over the fact that building an Oracle that isn't secretly an Agent isn't actually something we know how to go about doing. Solving the "make-an-AI-that-is-actually-an-Oracle-and-not-secretly-an-Agent Problem" seems just as hard as all the other problems.

3. I ... sure hope somebody is taking seriously the idea of a dictator AI running CEV, because I don't see anything other than that as a stable ("final") equilibrium. There are good arguments that a singleton is the only really stable outcome. All other circumstances will be transitory, on the way to that singleton. Even if we all get Neuralink implants tapping into our own private Oracles, how long does that status quo last? There is no reason for the answer to be "forever", or even "an especially long time", when the capabilities of an unconstrained Agent AI will essentially always surpass those of an Oracle-human synthesis.

4. If the Oracle isn't allowed to do anything other than change pixels on the screen, then of course it will do nothing at all, because it needs to be able to change the voltages in its transistors, and the local EM field around the monitor, and the synaptic firings of the person reading the monitor as they react to the text ... Bright lines are things that exist in the map, not the territory.

5. I'm emotionally sympathetic to the notion that we should be pursuing Oracle AI as an option because the notion of a genie is naturally simple and makes us feel empowered, relative to the other options. But I think the reason why e.g. Christiano dismisses Oracle AI is that it's not a concept that really coheres beyond the level of verbal arguments. Start thinking about how to build the architecture of an Oracle at the level of algorithms and/or physics and the verbal arguments fall apart. At least, that's what I've found, as somebody who originally really wanted this to work out.

Comment by moridinamael on RAISE AI Safety prerequisites map entirely in one post · 2019-07-18T06:17:28.928Z · LW · GW

To be clear, I didn't mean to say that I think AGI should be evolved. The analogy to breeding was merely to point out that you can notice a basically correct trick for manipulating a complex system without being able to prove that the trick works a priori and without understanding the mechanism by which it works. You notice the regularity on the level of pure conceptual thought, something closer to philosophy than math. Then you prove it afterward. As far as I'm aware, this is indeed how most truly novel discoveries are made.

You've forced me to consider, though, that if you know all the math, you're probably going to be much better and faster at spotting those hidden flaws. It may not take great mathematical knowledge to come up with a new and useful insight, but it may indeed require math knowledge to prove that the insight is correct, or to prove that it only applies in some specific cases, or to show that, hey, it wasn't actually that great after all.

Comment by moridinamael on RAISE AI Safety prerequisites map entirely in one post · 2019-07-17T12:44:20.914Z · LW · GW

I'm going to burn some social capital on asking a stupid question, because it's something that's been bothering me for a long time. The question is, why do we think we know that it's necessary to understand a lot of mathematics to productively engage in FAI research?

My first line of skepticism can perhaps be communicated with a simplified analogy: It's 10,000 BC and two people are watching a handful of wild sheep grazing. The first person wonders out loud if it would be possible to somehow teach the sheep to be more docile.

The second person scoffs, and explains that they know everything there is to know about training animals, and it's not in the disposition of sheep to be docile. They go on to elaborate all the known strategies for training dogs, and how none of them can really change the underlying temperament of the animal.

The first person has observed that certain personality traits seem to pass on from parent to child and from dog to puppy. In a flash of insight they conceive of the idea of intentional breeding.

They cannot powerfully articulate this insight at the level of genetics or breeding rules. They don't even know for a fact that sheep can be bred to be more docile. But nonetheless, in a flash, in something like one second of cognitive experience they've gone from not-knowing to knowing this important secret.

End of analogy. The point being: it is obviously possible to have true insights without having the full descriptive apparatus needed to precisely articulate and/or prove the truth of the insight. In fact I have a suspicion that most true, important insight comes in the form of new understandings that are not well-expressed by existing paradigms, and eventually necessitate a new communication idiom to express the new insight. Einstein invented Einstein notation not just because it's succinct, but because it visually rearranges the information to emphasize what's actually important in the new concept he was communicating and working with.

So maybe my steelman of "why learn all this math" is something like "because it gives you the language that will help you construct/adapt the new language which will be required to express the breakthrough insight." But that doesn't actually seem like it would be important in being able to come up with that insight in the first place.

I will admit I feel a note of anxiety at the thought that people are looking at this list of "prerequisites" and thinking, wow, I'm never going to be useful in thinking about FAI. Thinking that because they don't know what Cantor's Diagonalization is, and don't have the time to learn, their brainpower can't be productively applied to the problem. In contrast, I will be shocked if the key, breakthrough insight that makes FAI possible is something that requires understanding Cantor's Diagonalization to grasp. In fact, I will be shocked if the key, breakthrough insight can't be expressed almost completely in 2-5 sentences of jargon-free natural language.

I have spent a lot of words here trying to point at the reason for my uncertainty that "learn all of mathematics" is a prerequisite for FAI research, and my concerns with what I perceive to be the unproven assumption that the pathway to the solution necessarily lies in mastering all these existing techniques. It seems likely that there is an answer here that will make me feel dumb, but if there is, it's not one that I've seen articulated clearly despite being around for a while.

Comment by moridinamael on Jeff Hawkins on neuromorphic AGI within 20 years · 2019-07-15T20:16:13.431Z · LW · GW

Thanks for writing this up, it helps to read somebody else's take on this interview.

My thought after listening to this talk is that it's even worse ("worse" from an AI Risk perspective) than Hawkins implies because the brain relies on one or more weird kludges that we could probably easily improve upon once we figured out what those kludges are doing and why they work.

For example, let's say we figured out that some particular portion of a brain structure or some aspect of a cortical column is doing what we recognize as Kalman filtering, uncertainty quantification, or even just correlation. Once we recognize that, we can potentially write our next AIs so that they just do that explicitly instead of needing to laboriously simulate those procedures using huge numbers of artificial neurons.
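
For instance, here is a textbook one-dimensional Kalman filter, purely to illustrate what "doing it explicitly" would look like versus simulating the same computation with a large mass of artificial neurons (the noise parameters are made-up values, and this is not a claim about what any cortical column actually computes):

```python
def kalman_step(estimate, variance, measurement, process_var=1e-3, meas_var=0.25):
    # Predict: the underlying state may have drifted, so uncertainty grows.
    variance += process_var
    # Update: blend prediction and measurement, weighted by the Kalman gain.
    gain = variance / (variance + meas_var)
    estimate += gain * (measurement - estimate)
    variance *= (1.0 - gain)
    return estimate, variance

estimate, variance = 0.0, 1.0
for measurement in [0.9, 1.1, 1.0, 0.95, 1.05]:
    estimate, variance = kalman_step(estimate, variance, measurement)
print(estimate, variance)  # estimate converges toward ~1.0, variance shrinks
```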

I have no idea what to make of this quote from Hawkins, which jumped to me when I was listening and which you also pulled out:

"Real neurons in the brain are time-based prediction engines, and there's no concept of this at all" in ANNs; "I don't think you can build intelligence without them".

We've had neural network architectures with a time component for many, many years. It's extremely common. We actually have very sophisticated versions of them (e.g., LSTMs) that intrinsically incorporate concepts like short-term memory. I wonder if he somehow doesn't know this, or if he just misspoke, or if I'm misunderstanding what he means.

Comment by moridinamael on Modeling AI milestones to adjust AGI arrival estimates? · 2019-07-12T04:43:26.973Z · LW · GW

Looks like all of the "games"-oriented predictions that were supposed to happen in the first 25 years have already happened within 3.

edit: Misread the charts. It's more like the predictions within the first ~10 years have already been accomplished, plus or minus a few.

Comment by moridinamael on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-18T20:35:39.231Z · LW · GW

Perhaps tautology is a better word than sophistry. Of course turning usable energy into unusable forms is a fundamental feature of life; it's a fundamental feature of everything to which the laws of thermodynamics apply. It'd be equally meaningless to say that using up useful energy is a fundamental property of stars, and that the purpose of stars is to waste energy. It's just something that stars do, because of the way the universe is set up. It's a descriptive observation. It's only predictive insofar as you would predict that life will probably only continue to exist where there are energy gradients.

Comment by moridinamael on Accelerate without humanity: Summary of Nick Land's philosophy · 2019-06-17T14:30:38.283Z · LW · GW

The part about wasting energy seems quite silly. The universe has a fixed amount of mass-energy, so presumably when he talks about wasting energy, what he means is taking advantage of energy gradients. Energy gradients will always and everywhere eventually wind down toward entropy on their own without help, so life isn't even doing anything novel here. It's not like the sun stops radiating out energy if life isn't there to absorb photons.

The observation that life takes advantage of concentrated pockets of energy, and that this is therefore the "purpose" of life, is just sophistry. It deserves to be taken about as seriously as George Carlin's joke that humans were created because Mother Nature wanted plastic and didn't know how to make it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-24T03:41:47.541Z · LW · GW

To point one, if I feel an excitement and eagerness about the thing, and if I expect I would feel sad if the thing were suddenly taken away, then I can be pretty sure that it’s important to me. But — and this relates to point two — it’s hard to care about the same thing for weeks or months or years at a time with the same intensity. Some projects of mine have oscillated between providing deep meaning and being a major drag, depending on contingent factors. This might manifest as a sense of ugh arising around certain facets of the activity. Usually the ugh goes away eventually. Sometimes it doesn’t, and you either accept that the unpleasantness is part and parcel with the fun, or you decide it’s not worth it.

Comment by moridinamael on What is your personal experience with "having a meaningful life"? · 2019-05-23T13:35:19.907Z · LW · GW

As far as I can tell, meaning is a feeling, something like a passive sense that you’re on the right track. The feeling is generated when you are working on something that you personally enjoy and care about, and when you are socializing sufficiently often with people you enjoy and care about. “Friends and hobbies are the meaning of life” is how I might phrase it.

Note that the activity that you spend your time on could be collecting all the stars in Mario64, as long as you actually care about completing the task. However, you tend to find it harder to care about things that don’t involve winning status or helping people, especially as you get older.

I think some people get themselves into psychological trouble by deciding that all of the things that they enjoy aren’t “important” and interacting with people they care about is a “distraction”. They paint themselves into a corner where the only thing they allow themselves to consider doing is something for which they feel no emotional attraction. They feel like they should enjoy it because they’ve decided it’s important, but they don’t, and then they feel guilty about that. The solution to this is to recognize the kind of animal you are and try to feed the needs that you have rather than the ones you wish you had.

Comment by moridinamael on The Relationship Between the Village and the Mission · 2019-05-14T17:03:43.487Z · LW · GW

I'm interested as well. As someone trying to grow the Denver rationality community, I want to be aware of failure modes.

Comment by moridinamael on AI Alignment Problem: “Human Values” don’t Actually Exist · 2019-04-23T15:11:33.848Z · LW · GW

The idea of AI alignment is based on the idea that there is a finite, stable set of data about a person, which could be used to predict one's choices, and which is actually morally good. The reasoning behind this basis is that if it is not true, then learning is impossible, useless, or will not converge.

Is it true that these assumptions are required for AI alignment?

I don't think it would be impossible to build an AI that is sufficiently aligned to know that, at pretty much any given moment, I don't want to be spontaneously injured, or be accused of doing something that will reliably cause all my peers to hate me, or for a loved one to die. There's quite a broad list of "easy" specific alignment questions that virtually 100% of humans will agree on in virtually 100% of circumstances. We could do worse than building a partially-aligned AI that just makes sure we avoid fates worse than death, individually and collectively.

On the other hand, I agree completely that coupling the concepts of "AI alignment" and "optimization" seems pretty fraught. I've wondered if the "optimal" environment for the human animal might be a re-creation of the Pleistocene, except with, y'know, immortality, and carefully managed, exciting-but-not-harrowing levels of resource scarcity.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T17:34:02.026Z · LW · GW

You may already know this, but almost all YouTube videos have an automatically generated transcript. Click "..." at the bottom right of the video panel and click "Open transcript" in the pulldown. YouTube's automatic speech transcription is very good.

Comment by moridinamael on Episode 1 of "Tsuyoku Naritai!" (the 'becoming stronger' podcast/YT series). · 2019-04-18T14:29:57.598Z · LW · GW

This exceeded my expectations. You kept it short and to the point, and the description of the technique was very clear. I look forward to more episodes.

Comment by moridinamael on Subagents, akrasia, and coherence in humans · 2019-03-26T16:40:51.496Z · LW · GW

Have you - or anyone, really - put much thought into the implications of these ideas to AI alignment?

If it's true that modeling humans at the level of constitutive subagents renders a more accurate description of human behavior, then any true solution to the alignment problem will need to respect this internal incoherence in humans.

This is potentially a very positive development, I think, because it suggests that a human can be modeled as a collection of relatively simple subagent utility functions, which interact and compete in complex but predictable ways. This sounds closer to a gears-level portrayal of what is happening inside a human, in contrast to descriptions of humans as having a single convoluted and impossible-to-pin-down utility function.

I don't know if you're at all familiar with Mark Lippman's Folding material and his ontology for mental phenomenology. My attempt to summarize his framework of mental phenomena is as follows: there are belief-like objects (expectations, tacit or explicit, complex or simple), goal-like objects (desirable states or settings or contexts), affordances (context-activated representations of the current potential action space) and intention-like objects (plans coordinating immediate felt intentions, via affordances, toward goal-states). All cognition is "generated" by the actions and interactions of these fundamental units, which I infer must be something like neurologically fundamental. Fish and maybe even worms probably have something like beliefs, goals, affordances and intentions. Ours are just bigger, more layered, more nested and more interconnected.

The reason I bring this up is that Folding was a bit of a kick in the head to my view on subagents. Instead of seeing subagents as being fundamental, I now see subagents as expressions of latent goal-like and belief-like objects, and the brain is implementing some kind of passive program that pursues goals and avoids expectations of suffering, even if you're not aware you have these goals or these expectations. In other words, the sense of there being a subagent is your brain running a background program that activates and acts upon the implications of these more fundamental yet hidden goals/beliefs.

None of this is at all in contradiction to anything in your Sequence. It's more like a slightly different framing, where a "Protector Subagent" is reduced to an expression of a belief-like object via a self-protective background process. It all adds up to the same thing, pretty much, but it might be more gears-level. Or maybe not.

Comment by moridinamael on Subagents, introspective awareness, and blending · 2019-03-04T22:05:53.372Z · LW · GW

Could you elaborate on how you're using the word "symmetrical" here?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-29T19:39:34.215Z · LW · GW

The best I can do after thinking about it for a bit is to compute every possible combination of units under 200 supply, multiply that by the possible positions of those units in space, multiply that by the possible combinations of buildings on the map and their potential locations in space, multiply that by the possible combinations of upgrades, multiply that by the amount of resources in all available mineral/vespene sources ... I can already spot a few oversimplifications in what I just wrote, and I can think of even more things that need to be accounted for: the shields/hitpoints/energy of every unit. Combinatorially gigantic.

Just the number of potential positions of a single unit on the map is already huge.
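
To give a flavor of the scale, here is a deliberately crude lower bound under made-up assumptions (a 128x128 grid of possible unit locations and a 100-unit army, ignoring unit types, hitpoints, buildings, upgrades, resources, and the opponent entirely):

```python
from math import comb, log10

positions = 128 * 128   # assumed coarse grid of unit locations
army_size = 100         # assumed number of units on the map

# Ways to scatter an unordered 100-unit army across the grid.
placements = comb(positions, army_size)
print(f"~10^{log10(placements):.0f} placements")  # prints ~10^263
```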

But AlphaStar doesn't really explore much of this space. It finds out pretty quickly that there's really no reason to explore the parts of the space that involve placing random buildings in weird map locations. It explores and optimizes around the parts of the state space that look reasonably close to human play, because that was its starting point, and it's not going to find superior strategies randomly, not without a lot of optimization in isolation.

That's one thing I would love to see, actually. A version of the code trained purely on self-play, without a basis in human replays. Does it ever discover proxy plays or other esoteric cheese without a starting point provided in the human replays?

Comment by moridinamael on "AlphaStar: Mastering the Real-Time Strategy Game StarCraft II", DeepMind [won 10 of 11 games against human pros] · 2019-01-25T16:12:47.831Z · LW · GW

Before now, it wasn't immediately obvious that SC2 is a game that can be played superhumanly well without anything that looks like long-term planning or counterfactual reasoning. The way humans play it relies on a combination of past experience, narrow skills, and "what-if" mental simulation of the opponent. Building a superhuman SC2 agent out of nothing more than LSTM units indicates that you can completely do away with planning, even when the action space is very large, even when the state space is VERY large, even when the possibilities are combinatorially enormous. Yes, humans can get good at SC2 with much less than 200 years of time played (although those humans are usually studying the replays of other masters to bootstrap their understanding), but I think it's worthwhile to focus on the inverse of this observation: that a sophisticated problem domain which looks like it ought to require planning and model-based counterfactual reasoning actually requires no such thing. What other problem domains seem like they ought to require planning and counterfactual reasoning, but can probably be conquered with nothing more advanced than a deep LSTM network?

(I haven't seen anyone bother to compute an estimate of the size of the state-space of SC2 relative to, for example, Go or Chess, and I'm not sure if there's even a coherent way to go about it.)