Posts

You are invited to apply to the Guild of the ROSE 2021-08-13T18:25:35.568Z
Guild of the ROSE Mixer 2021-08-13T18:11:24.241Z
Rationality Yellow Belt Test Questions? 2021-07-06T18:15:41.029Z
Guild of Servants Bacchanalia (March 27, 2021) 2021-03-26T16:35:47.855Z
A Retrospective Look at the Guild of Servants: Alpha Phase 2021-03-22T20:34:30.520Z
Mental subagent implications for AI Safety 2021-01-03T18:59:50.315Z
New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast 2020-12-21T17:21:43.895Z
An Invitation to the Guild of Servants — Alpha 2020-08-14T19:03:23.973Z
We Want MoR (HPMOR Discussion Podcast) Completes Book One 2020-02-19T00:34:15.864Z
moridinamael's Shortform 2019-11-25T03:16:22.857Z
Literature on memetics? 2019-11-07T19:18:05.748Z
Complex Behavior from Simple (Sub)Agents 2019-05-10T21:44:04.895Z
A Dialogue on Rationalist Activism 2018-09-10T18:30:01.130Z
Sandboxing by Physical Simulation? 2018-08-01T00:36:32.374Z
A Sarno-Hanson Synthesis 2018-07-12T16:13:36.158Z
A Few Tips on How to Generally Feel Better (and Avoid Headaches) 2018-04-30T16:02:24.144Z
My Hammertime Final Exam 2018-03-22T18:40:55.539Z
Spamming Micro-Intentions to Generate Willpower 2018-02-13T20:16:09.651Z
Fun Theory in EverQuest 2018-02-05T20:26:34.761Z
The Monthly Newsletter as Thinking Tool 2018-02-02T16:42:49.325Z
"Slow is smooth, and smooth is fast" 2018-01-24T16:52:23.704Z
What the Universe Wants: Anthropics from the POV of Self-Replication 2018-01-12T19:03:34.044Z
A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems 2018-01-02T18:07:20.456Z
Mushrooms 2017-11-10T16:57:33.700Z
The Five Hindrances to Doing the Thing 2017-09-25T17:04:53.643Z
Measuring the Sanity Waterline 2016-12-06T20:38:57.307Z
Jocko Podcast 2016-09-06T15:38:41.377Z
Deepmind Plans for Rat-Level AI 2016-08-18T16:26:05.540Z
Flowsheet Logic and Notecard Logic 2015-09-09T16:42:35.321Z
Less Wrong Business Networking Google Group 2014-04-24T14:45:21.253Z
Bad Concepts Repository 2013-06-27T03:16:14.136Z
Towards an Algorithm for (Human) Self-Modification 2011-03-29T23:40:26.774Z

Comments

Comment by moridinamael on Should I treat pain differently if it’s “all in my head?” · 2021-09-05T17:18:45.021Z · LW · GW

First, pain in the wrists is often due to muscle knots (“trigger points”) in the forearm muscles that you may not be aware are there until you go probing for them. There are many online resources for treating such knots if you find them. My advice: don’t overdo it, or you’ll bruise yourself and make it worse.

Second, the main thing you should not do if it is psychosomatic is take pain medication. Your brain and body can easily become psychologically and physiologically dependent on even “benign” drugs like NSAIDs, leading to situations where you’ll be in pain when you don’t take them.

Comment by moridinamael on COVID/Delta advice I'm currently giving to friends · 2021-08-24T12:27:28.435Z · LW · GW

I feel like the “diseases just naturally become not-dangerous” perspective neglects smallpox and other extremely deadly endemic viruses which we have used vaccination to control or eradicate.

Comment by moridinamael on You are invited to apply to the Guild of the ROSE · 2021-08-23T04:13:37.262Z · LW · GW

I recommend checking the website and joining the Discord as a first step to get in contact with the group.

Comment by moridinamael on Thinking of our epistemically troubled friend · 2021-08-18T14:21:23.152Z · LW · GW

Between friends I usually wager a sandwich or a cup of coffee. Enough to make it clear that a specific bet is being articulated and agreed upon, but not enough to really hurt anyone's feelings if they lose.

Comment by moridinamael on Is the argument that AI is an xrisk valid? · 2021-07-19T14:30:56.212Z · LW · GW

What is "instrumental intelligence?"

Comment by moridinamael on Ideal College Education for an Aspiring Rationalist? · 2021-07-13T15:54:47.126Z · LW · GW

This is an interview I conducted with a college professor friend of mine about how to get the most out of education. I have provided a timecode link to the part where we start talking about college.

Edit: An incomplete tl;dr would be: Don't go to a large university, go to a small PUI (Primarily Undergraduate Institution) where the focus of the professors will be teaching rather than research and grant-writing. Teaching a course well is a full-time job, but teaching is the third or fourth priority for university professors. The other answers on this post are probably adequate for deciding what to major in.

Comment by moridinamael on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-24T13:22:51.500Z · LW · GW

Yes, thanks, I’ll fix it.

Comment by moridinamael on I’m no longer sure that I buy dutch book arguments and this makes me skeptical of the "utility function" abstraction · 2021-06-22T17:00:38.980Z · LW · GW

Perhaps relevant, I wrote a post a while back (all the images are broken and irretrievable; sorry) about the idea that suffering and happiness (and possibly many other states) should be considered separate dimensions along which we intuitively try to navigate, and that compressing all these dimensions onto the single dimension of utility gives you some advantages (you can rank things more easily) but discards a tremendous amount of detail. Fundamentally, forcing yourself to use utility in all circumstances is like throwing away your detailed map in exchange for a single number representing how far you are from your destination. In theory you can still find your way home, but you'll encounter obstacles you might have avoided otherwise.

Comment by moridinamael on How can there be a godless moral world ? · 2021-06-21T21:51:45.383Z · LW · GW

There are already many good answers in this thread but your question as stated can be answered very simply. First I ask if you can accept that I have within me “a morality.” I have beliefs about right and wrong, and a system for judging actions and outcomes; that’s probably all that’s required to qualify. So that’s one morality that exists, namely mine. It’s not a cosmic, all-pervading, True morality, but it’s a morality, and that’s what you asked for.

I don’t know if the following will appeal to the gut or not, but here goes: people only very rarely make decisions based on morality, or on expected utility, or on any kind of explicit basis for distinguishing the better choice. Most choices have no obvious moral dimension. The only time someone invokes morality or utilitarianism is when two different parts of their mind want two different things and they need some kind of judgement call from the ref on which option is more in alignment with all the stuff they want. The reason you don’t commit mass murder has nothing to do with morality. The reason you choose to reduce your meat consumption might have something to do with morality, or might not.

The reason I bring that up is that the existence or nonexistence of ultimate cosmic morality is probably not all that important, practically speaking. You won’t do things that you deeply feel are immoral for the same reason you won’t intentionally smash your hand with a hammer. If you find yourself frequently doing immoral things, you probably don’t really think they’re all that immoral, and/or “immoral” has become a dangerously meaningless symbol in your mind.

Comment by moridinamael on Which rationalists faced significant side-effects from COVID-19 vaccination? · 2021-06-15T01:17:58.831Z · LW · GW

Pfizer. First shot caused pain at the injection site that lasted about 24 hours. Second shot led to about twelve hours of what I would call moderately severe flu symptoms. Chills, exhaustion, headache (though one should note that the slightest breeze will give me a headache), brainfog. I basically slept all day, and woke up feeling fine the next morning.

Comment by moridinamael on Reply to Nate Soares on Dolphins · 2021-06-10T16:33:29.330Z · LW · GW

I think we only need to consider pragmatism. 

It is useful to be able to ask a butcher for their fish selection and not be shown sea snail, crab, and dolphin.

It is useful to be able to say "I saw a fish!" and have the listener know you mean a fish and not a saltwater crocodile or sea snake.

It's a useful category to keep distinct.

Comment by moridinamael on It's like poetry, they rhyme: Lanier and Yudkowsky, Weyl and S-s... Alexander · 2021-05-24T14:54:56.511Z · LW · GW

After reading this article and the Scott/Weyl exchanges, I'm left with the impression that one side is saying: "We should be building bicycles for the mind, not trying to replace human intellect." And the other side is trying to point out: "There are no firm criteria by which we can label a given piece of technology a bicycle for the mind versus a replacement for human intellect." 

Perhaps uncharitably, it seems like Weyl is saying to us, "See, what you should be doing is working on bicycles for the mind, like this complicated mechanism design thing that I've made." And Scott is sort of saying, "By what measure are you entitled to call that particular complicated piece of gadgetry a bicycle for the mind, while I am not allowed to call some sort of sci-fi exocortical AI assistant a bicycle for the mind?" And then Weyl, instead of really attempting to provide that distinction, simply lists a bunch of names of other people who had strong opinions about bicycles.

Parenthetically, I'm reminded of the idea from the Dune saga that it wasn't just AI that was eliminated in the Butlerian Jihad, but rather, the enemy was considered to be the "machine attitude" itself. That is, the attitude that we should even be trying to reduce human labor through automation. The result of this process is a universe locked in feudal stagnation and tyranny for thousands of years. To this day I'm not sure if Herbert intended us to agree that the Butlerian Jihad was a good idea, or to notice that his universe of Mentats and Guild Navigators was also a nightmare dystopia. In any case, the Dune universe has lasguns, spaceships, and personal shields, but no bicycles that I can recall.

Comment by moridinamael on Get your gun license · 2021-05-21T14:33:35.806Z · LW · GW

Gun ownership requires a license.

In some (many?) states owning a gun does not require a license, but carrying a loaded gun on your person does.

Comment by moridinamael on Can you improve IQ by practicing IQ tests? · 2021-04-27T14:06:23.530Z · LW · GW

Short answer: yes, you can get better at IQ tests by learning the common patterns and practicing the tests. Some people do so and reach very high IQ scores. But there is essentially no reward for doing this, so almost no one bothers. In the absence of practice, IQ is a relatively stable metric which correlates with a number of other empirical outcomes about as well as anything in the social sciences ever does.

Comment by moridinamael on Auctioning Off the Top Slot in Your Reading List · 2021-04-14T16:10:16.358Z · LW · GW

I do this for movies, and formerly did it for books and TV shows, but people mainly try to just pay me to watch anime.

Comment by moridinamael on Book review: "A Thousand Brains" by Jeff Hawkins · 2021-04-14T14:19:25.516Z · LW · GW

I think the way this could work, conceptually, is as follows. Maybe the Old Brain does have specific "detectors" for specific events like: are people smiling at me, glaring at me, shouting at me, hitting me; has something that was "mine" been stolen from me; is that cluster of sensations an "agent"; does this hurt, or feel good. These seem to be the kinds of events that small children, most mammals, and even some reptiles are able to understand.

The neocortex then constructs increasingly nuanced models based on these base-level events. It builds up fairly sophisticated cognitive behaviors, such as romantic jealousy, the desire to win a game, the perception that a specific person is a rival, or a long-term plan to get a college degree, by gradually linking up elements of its learned world model with internal imagined expectations of ending up in states that it natively perceives (with the Old Brain) as good or bad.

Obviously the neocortex isn't just passively learning, it's also constantly doing forward-modeling/prediction using its learned model to try to navigate toward desirable states. Imagined instances of burning your hand on a stove are linked with real memories of burning your hand on a stove, and thus imagined plans that would lead to burning your hand on the stove are perceived as undesirable, because the Old Brain knows instinctively (i.e. without needing to learn) that this is a bad outcome.

eta: Not wholly my original thought, but I think one of the main purposes of dreams is to provide large amounts of simulated data aimed at linking up the neocortical model of reality with the Old Brain. The sorts of things that happen in dreams tend to often be very dramatic and scary. I think the sleeping brain is intentionally seeking out parts of the state space that agitate the Old Brain in order to link up the map of the outside world with the inner sense of innate goodness and badness.

Comment by moridinamael on Logan Strohl on exercise norms · 2021-03-30T14:14:09.678Z · LW · GW

This struck me as well.

gymnastics, soccer, dance, yoga, martial arts, running, weight lifting, swimming, cycling, hiking

Part of my brain reads this list as "Broken bones, busted knees, torn ankle ligaments, burst spinal and knee cushions." I can associate many of my forays into fitness with a particular chronic injury. Basketball, ankle doesn't work right anymore. Taekwondo, toes on right foot no longer support my weight.

I'm sure there are plenty of people who don't accrue all these injuries when they exercise. A cursory Googling suggests that there are some important genetic factors relating to connective tissue strength/integrity and/or recovery speed.

As I've gotten older, I've chosen to simply focus on keeping my resting heart rate solidly into what is considered a healthy zone. This is one of those easily measurable knobs that can be intervened upon from a number of directions. If somebody suggested that I need to pack on muscle to be healthier, I think I could argue pretty persuasively that they are wrong.

Comment by moridinamael on The (not so) paradoxical asymmetry between position and momentum · 2021-03-28T17:29:18.513Z · LW · GW

I doubt that I understand this very well. I thought there was a chance I might help and also a chance that I would be so obviously wrong that I would learn something.

Comment by moridinamael on The (not so) paradoxical asymmetry between position and momentum · 2021-03-28T15:43:57.365Z · LW · GW

Epistemic status: Relating how this was explained to me in the hopes that somebody will either say "That's right!" or "No, you're still wrong, let me correct you!"

The way this was explained to me is that this is one of those things that is deceptively simple but always explained very poorly.

Knowing the position of the particle/excitation means reducing the width of Δx, which means summing more plane waves. Summing more plane waves means having less precision in the frequency/energy/momentum domain. Conversely, having less positional certainty (wider Δx) means you require fewer plane waves to describe the excitation, meaning you know the frequency decomposition (and therefore the energy/momentum description) very accurately, in a sense because the position is spread out.
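For concreteness, a minimal version of the underlying math (the standard Gaussian wave-packet example from any textbook, with the σ's denoting standard deviations):

\[
\psi(x) = \int A(k)\, e^{ikx}\, dk, \qquad A(k) \propto e^{-(k-k_0)^2/4\sigma_k^2} \;\Rightarrow\; |\psi(x)|^2 \propto e^{-x^2/2\sigma_x^2}, \quad \sigma_x = \frac{1}{2\sigma_k}.
\]

So \(\sigma_x \sigma_k = 1/2\) exactly, and since \(p = \hbar k\), this is \(\Delta x \, \Delta p = \hbar/2\): shrinking the packet in position space forces you to sum a proportionally wider band of plane waves, and vice versa.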

The confusion enters because educators insist on talking about "knowing the position of a particle" when a particle literally is a wavelike excitation of a field and does not have a position in the sense that you think of a bowling ball having a position.

Comment by moridinamael on A Retrospective Look at the Guild of Servants: Alpha Phase · 2021-03-23T14:14:09.451Z · LW · GW

I do not love passive voice either, but the nature of this document is:

We have collected feedback from stakeholders in the form of interviews, and consolidated that feedback in this document.

If there is ever a place for passive voice, it is a document whose purpose is to consolidate the opinions of multiple people while explicitly not implying definite consensus among those people.

Comment by moridinamael on Comments on "The Singularity is Nowhere Near" · 2021-03-16T22:17:35.168Z · LW · GW

It is fun to note that Metaculus is extremely uncertain about how many FLOPS will be required for AGI. The community's 25th-percentile estimate is 3.9×10^15 FLOPS and its 75th-percentile estimate is 4.1×10^20 FLOPS, with very flattish tails extending well beyond these bounds. (The median is 6.2×10^17.) 

I mention this mainly to point out that his estimate of 10^21 FLOPS reflects overconfidence in his particular model. There are simple objections that should reduce confidence in that kind of extremely high estimate at least somewhat. 

For example, the human brain runs on 20 watts of glucose-derived power, and is optimized to fit through a birth canal. These design constraints alone suggest that much of its architectural weirdness arises due to energy and size restrictions, not due to optimization on intelligence. Actually optimizing for intelligence with no power or size restrictions will yield intelligent structures that look very different, so different that it is almost pointless to use brains as a reference object.

Again, I think a healthy stance to take here isn't "Tim Dettmers is WRONG" but rather "Tim Dettmers is overconfident."

Comment by moridinamael on Against Sam Harris's personal claim of attentional agency · 2021-01-30T20:21:46.953Z · LW · GW

I've been concerned for some time that intensive meditation causes people to stop feeling their emotions but not to stop having those emotions. Sam's podcasts are in fact littered with examples where he clearly (from a third-party perspective) seems to become agitated, flustered or angry, but seems weirdly in denial about it because his inner experience is not one of upset. I'm not up to speculating on exactly how this happens, but there also seems to be a wide but informal folklore concerning long-term meditators who are interpersonally abusive or insensitive.

Comment by moridinamael on Would most people benefit from being less coercive to themselves? · 2021-01-22T02:01:33.979Z · LW · GW

There's one sense in which self-coercion is impossible because you cannot make yourself do something that at least some part of yourself doesn't endorse. There's another sense in which self-coercion is an inescapable inevitability because some particular part of you will always dis-endorse any given action.

It's definitely worth it to seek to understand yourself well enough that you can negotiate between dissatisfied parts of yourself, pre-emptively or on-the-fly. This helps you generate plans that aren't so self-coercive that they're preordained to fail.

In my framing, the effective approach isn't to find a non-coercive plan, but rather a minimally-coercive plan that still achieves the goal. This turns it from an exercise of willpower to an exercise of strategy. Plus, the only way you can really learn where plans sit on the coerciveness landscape is to attempt to execute them.

Comment by moridinamael on Benefits of "micro-tracking" for personal health measurements? · 2021-01-19T20:23:26.680Z · LW · GW

It has been unambiguously helpful for my Apple Watch to inform me that my sleep quality is detectably higher when I exercise, even if that exercise is just a brisk 1-2 mile walk. I also generally do subjectively feel better when the watch tells me I've slept well. Connecting "go for your walk" to "feel noticeably better tomorrow" is much more motivating than going for a walk for nebulous long-term health reasons. None of this would happen if the watch weren't automatically tracking my sleep (including interruptions and sleeping heart rate) and my daily activities.

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-26T01:59:32.961Z · LW · GW

Interesting. The market has not increased much since the announcement of the Moderna and Pfizer vaccines, so I'd have a hard time causally connecting the market to the vaccine announcement.

My feeling was that the original sell-off in February and early March was due to the fact that we were witnessing an unprecedented-in-our-lifetimes event; anything could happen. A more contagious form of the same virus will trigger a mass selloff only if investors believe that other investors believe that the news of the new strain is bad enough to trigger a panic selloff.

There are too many conflicting things going on for me to make confident claims about timelines and market moves, but I really do doubt the story that the market is up ~13% relative to January of this year simply because investors anticipate a quick return to normal.

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T22:21:38.506Z · LW · GW

People are both buying more equities and selling less, because (1) their expenses are low, due to lockdowns and the impossibility of travel, and (2) the market is going up very fast, and retail investors don't want to sell out of a bull market. There's obviously more going on than just this; retail investors are a minority of all invested capital, but even the big firms appear to have nowhere else to put their money right now. So as long as the lockdowns persist, both household and corporate expenses will stay flatlined.

Even if my entire previous paragraph is wrong and dumb, you can simply observe that the market has soared ever since the original panic-crash, and ask why the virus becoming more contagious would cause a different consequence than what we've already seen.

Comment by moridinamael on What trade should we make if we're all getting the new COVID strain? · 2020-12-25T19:42:20.092Z · LW · GW

Continued lockdowns will likely drive the markets higher, right? A more infectious strain might tend to increase the rate of lockdowns, even as the vaccine rollout continues. So I would just buy-and-hold and then magically know exactly the right time to bail, when lockdowns look like they’re about to end.

Comment by moridinamael on New Eliezer Yudkowsky interview on We Want MoR, the HPMOR Podcast · 2020-12-22T20:35:33.056Z · LW · GW

A theme is by definition an idea that appears repeatedly, so the easiest method is to just sit back and notice what ideas are cropping up more than once. The first things you notice will by default be superficial, but after reflection you can often home in on a more concise and deep statement of what the themes are.

For example, a first pass of HPMOR might pick out "overconfidence" as a theme, because Harry (and other characters) are repeatedly overconfident in ways that lead to costly errors. But continued consideration would show that the concept is both more specific and deeper than just "overconfidence", and ties into a whole thesis about Rationality, what Rationality is and isn't (as Eliezer says, providing positive and negative examples), and why it's a good idea.

Another strategy is to simply observe any particular thing that appears in the book and ask "why did the author do that?" The answer, for fiction with any degree of depth, is almost never going to be "because it was entertaining." Even a seemingly shallow gag like Ron Weasley's portrayal in HPMOR is still articulating something.

If this is truly a thing you're interested in getting better at, I would suggest reading books that don't even have cool powerful characters. For example, most things by Ursula Le Guin are going to feel very unsatisfying if you read them with the attitude that you're supposed to be watching a cool dude kick ass, but her books are extremely rewarding in other ways. Most popular genre fare is full of wish-fulfillment narratives, though there's still a lot of genre fiction that doesn't indulge itself in this way. And there's nothing intrinsically wrong with reading for wish fulfillment.

I'm not sure if I can name any podcast that offers exclusively "definitive" (that is, author-intended) readings/interpretations, but my own podcast typically goes into themes.

Comment by moridinamael on Pain is not the unit of Effort · 2020-11-25T16:07:09.047Z · LW · GW

I very recently realized something was wrong with my mental stance when I realized I was responding to work agenda items with some variation of the phrase, "Sure, that shouldn't be too painful." Clearly the first thing that came to mind when contemplating a task wasn't how long it would take, what resources would be needed, or how to do it, but rather how much suffering I would have to go through to accomplish it. This actually motivated some deeper changes in my lifestyle. Seeing this post here was extremely useful and timely for me.

Comment by moridinamael on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-11-22T19:52:28.091Z · LW · GW

Is there some additional reason to be concerned about side effects even after the point at which the vaccine has passed all the required trials, relative to the level of concern you should have about any other new vaccine?

Comment by moridinamael on the scaling “inconsistency”: openAI’s new insight · 2020-11-07T16:56:49.823Z · LW · GW

I really appreciated the degree of clarity and the organization of this post.

I wonder how much the slope of L(D) is a consequence of the structure of the dataset, and whether we have much power to meaningfully shift the nature of L(D) for large datasets. A lot of the structure of language is very repetitive, and once it is learned, the model doesn't learn much from seeing more examples of the same sort of thing.  But, within the dataset are buried very rare instances of important concept classes. (In other words, the Common Crawl data has a certain perplexity, and that perplexity is a function of both how much of the dataset is easy/broad/repetitive/generic and how much is hard/narrow/unique/specific.) For example: I can't, for the life of me, get GPT-3 to give correct answers on the following type of prompt:

You are facing north. There is a house straight ahead of you. To your left is a mountain. In what cardinal direction is the mountain?

No matter how much priming I give or how I reframe the question, GPT-3 tends to either give a basically random cardinal direction, or just repeat whatever direction I mentioned in the prompt. If you can figure out how to do it, please let me know, but as far as I can tell, GPT-3 really doesn't understand how to do this. I think this is just an example of the sort of thing which simply occurs so infrequently in the dataset that it hasn't learned the abstraction. However, I fully suspect that if there were some corner of the Internet where people wrote a lot about the cardinal directions of things relative to a specified observer, GPT-3 would learn it.

It also seems that one of the important things that humans do but transformers do not is actively seek out more surprising subdomains of the learning space. The big breakthrough in transformers was attention, but currently the attention is only within-sequence, not across-dataset. What does L(D) look like if the model is empowered to notice, while training, that its loss on sequences involving words like "west" and "cardinal direction" is bad, and then to search for and prioritize other sequences with those tokens, rather than simply churning through the next 1000 examples of sequences from which it has essentially already extracted the maximum amount of information? At a certain point, you don't need to train it on "The man woke up and got out of {bed}"; it knew what the last token was going to be long ago.
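To gesture at what across-dataset prioritization could look like mechanically, here is a toy sketch (hypothetical setup and made-up numbers, in the spirit of prioritized experience replay; not anything described in the GPT-3 paper) of sampling training sequences in proportion to a running per-sequence loss estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

n_sequences = 10_000

# Running estimate of each sequence's training loss. Start high
# ("optimistic") so every sequence gets sampled at least occasionally.
loss_estimate = np.full(n_sequences, 10.0)

def train_step(batch_indices):
    """Stand-in for a real forward/backward pass; returns per-sequence loss.

    Pretend most sequences are easy (low loss) while a rare subset
    (think: cardinal-direction problems) stays stubbornly hard.
    """
    hard = batch_indices % 997 == 0
    noise = rng.normal(0.0, 0.2, size=batch_indices.shape)
    return np.where(hard, 8.0 + noise, 1.0 + noise)

for step in range(1000):
    # Sample the next batch in proportion to estimated loss ("surprise"),
    # rather than churning uniformly through the dataset.
    probs = loss_estimate / loss_estimate.sum()
    batch = rng.choice(n_sequences, size=64, p=probs)
    batch_loss = train_step(batch)
    # Exponential moving average of the losses we just observed.
    loss_estimate[batch] = 0.9 * loss_estimate[batch] + 0.1 * batch_loss

hard_idx = np.arange(0, n_sequences, 997)
print("avg estimated loss, hard vs overall:",
      loss_estimate[hard_idx].mean(), loss_estimate.mean())
```

The open question is whether the bookkeeping and extra forward passes needed to maintain loss estimates at Common Crawl scale would actually beat spending the same compute on more uniformly sampled data.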

It would be good to know if I'm completely missing something here.

Comment by moridinamael on Is Stupidity Expanding? Some Hypotheses. · 2020-10-19T03:37:17.838Z · LW · GW

By “meme” I mean Dawkins’ original definition. A meme is just any idea to which Darwinian selection forces apply. For example, a good idea will be gradually stripped of nuance and accuracy as it passes through the communication network, and eventually becomes dumb.

Comment by moridinamael on Is Stupidity Expanding? Some Hypotheses. · 2020-10-15T21:26:58.620Z · LW · GW

We've built a bunch of tools for instant mind-to-mind communication, with built in features that amplify communiques that are short, simple and emotional. Over the last ten years an increasingly large fraction of all interpersonal communication has passed through these "dumbpass filter" communication systems. This process has systematically favored memes that are stupid. When everyone around you appears to be stupid, it makes you stupid. Even if you aren't on these communication platforms, your friends are, and their brains are being filled up with finely-honed, evolutionarily optimized stupidity. 

Comment by moridinamael on Rationality and Climate Change · 2020-10-07T14:43:42.809Z · LW · GW

Not sure that I disagree with you at all on any specific point.

It's just that "Considering the possibility that a technological fix will not be available" actually looks like staring down the barrel of a civilizational gun. There is no clever policy solution that dodges the bullet. 

If you impose a large carbon tax, or some other effective global policy of austerity that reduces fossil fuel use without replacing that energy somehow, you're just making the whole world poor: electricity, food, transportation and medical bills go up above even their currently barely affordable levels, and the growth of the developing world is completely halted, and probably reversed. If your reason for imposing a carbon tax is not "to incentivize tech development" but instead "to punish people for using energy", then people will revolt. There were riots in France over a relatively modest gasoline tax. An actual across-the-board implementation of austerity in some form would either be repealed quickly, lead to civilizational collapse and mass death, or both.

If you impose a small carbon tax (or make some other token gesture at austerity and conservation), it will simply not be adequate to address the issue. At best it will impose a very slight damping on the growth function. This is what I mean when I say there is no practical policy proposal that addresses the problem. It is technology, or death. If you know of a plan that persuasively and quantitatively argues otherwise, I'd love to see it.

Comment by moridinamael on Rationality and Climate Change · 2020-10-05T15:13:52.325Z · LW · GW

Epistemic status: You asked, so I'm answering, though I'm open to having my mind changed on several details if my assumptions turn out to be wrong. I probably wouldn't have written something like this without prompting. If it's relevant, I'm the author of at least one paper commissioned by the EPA on climate-related concerns.

I don't like the branding of "Fighting Climate Change" and would like to see less of it. The actual goal is providing energy to sustain the survival and flourishing of 7.8+ billion people, fueling a technologically advanced global civilization, while simultaneously reducing the negative externalities of energy generation. In other words, we're faced with a multi-dimensional optimization problem, while the rhetoric of "Fighting Climate Change" almost universally addresses only the last dimension, reducing externalities. Currently 80% of worldwide energy comes from fossil fuels and only 5% comes from renewables. So, simplistically, renewables need to generate 16x as much energy as they do right now (80% / 5% = 16) just to displace fossil fuels at today's consumption. This number is "not so bad" if you assume that technology will continue to develop, putting renewables on an exponential curve, and "pretty bad" if you assume that renewables continue to be implemented at about the current rate.

And we need more energy generating capacity than we have now. A lot more. Current energy generation capacity only really provides a high standard of living for a small percentage of the world population. Everybody wants to lift Africa out of poverty, but nobody seems interested in asking how many new power plants that will require. These power plants will be built with whatever technology is cheapest. We cannot dictate policy in power plant construction in the developing world; all we can do is try to make sure that better technologies exist when those plants are built.

I have seen no realistic policy proposal that meaningfully addresses climate change through austerity (voluntary reduced consumption) or increased energy usage efficiency. These sorts of things can help on the margins, but any actual solution will involve technology development. Direct carbon capture is also a possible target for technological breakthrough.

Comment by moridinamael on Three car seats? · 2020-10-01T20:23:46.628Z · LW · GW

https://www.multimac.com/p/multimac_1320_4_seater

£1599.00 =)

It's pretty cool, but hardly a slam-dunk rejoinder if the whole issue in question is whether having a 3rd or 4th child is discontinuously costly due to sedan width.

Personally, I just ended up buying a minivan.

Comment by moridinamael on Three car seats? · 2020-10-01T16:21:12.253Z · LW · GW

It qualifies as a trivial inconvenience. We had to essentially buy three new car seats when we had our third, because the two that we were using for the first two kids took up too much space, and needed to be replaced with thinner versions.

It does seem like having four children would pose more serious difficulties, since you can no longer fit four young children in a sedan no matter what you do.

Comment by moridinamael on moridinamael's Shortform · 2020-09-01T14:54:01.167Z · LW · GW

I'm writing an effortpost on this general topic but wanted to gauge reactions to the following thoughts, so I can tweak my approach.

I was first introduced to rationality about ten years ago and have been a reasonably dedicated practitioner of this discipline that whole time. The first few years saw me making a lot of bad choices. I was in the Valley of Bad Rationality; I didn't have experience with these powerful tools, and I made a number of mistakes.

My own mistakes had a lot to do with overconfidence in my ability to model and navigate complex situations. My ability to model and understand myself was particularly lacking.

In the more proximal part of this ten year period -- say, in the last five years -- I've actually gotten a lot better. And I got better, in my opinion, because I kept on thinking about the world in a fundamentally rationalist way. I kept making predictions, trying to understand what happened when my predictions went wrong, and updating both my world-model and my meta-model of how I should be thinking about predictions and models.

Centrally, I acquired an intuitive, gut-level sense of how to think about situations where I could only see a certain angle, where I was either definitely or probably missing information, or situations involving human psychology. You could also classify another major improvement as being due generally to "actually multiplying probabilities semi-explicitly instead of handwaving", e.g. it's pretty unlikely (0.3 × 0.3 = 9%) that two things with independent 30% odds of being true are both true. You could say through trial and error I came to understand why no wise person attempts a plan where more than one thing has to happen "as planned".

I think if you had asked me at the 5 year mark if this rationality thing was all it was cracked up to be, I very well might have said that it had led me to make a lot of bad decisions and execute bad plans, but after 10 years, and especially the last year or three, it has started working for me in a way that it didn't before.

Comment by moridinamael on Building brain-inspired AGI is infinitely easier than understanding the brain · 2020-06-02T15:00:39.008Z · LW · GW

FWIW I have come to similar conclusions along similar lines. I've said that I think human intelligence minus rat intelligence is probably easier to understand and implement than rat intelligence alone. Rat intelligence requires a long list of neural structures fine-tuned by natural selection, over tens of millions of years, to enable the rat to do very specific survival behaviors right out of the womb. How many individual fine-tuned behaviors? Hundreds? Thousands? Hard to say. Human intelligence, by contrast, cannot possibly be this fine tuned, because the same machinery lets us learn and predict almost arbitrarily different* domains.

I also think that recent results in machine learning have essentially proven the conjecture that moar compute regularly and reliably leads to moar performance, all things being equal. The human neocortical algorithm probably wouldn't work very well if it were applied in a brain 100x smaller because, by its very nature, it requires massive amounts of parallel compute to work. In other words, the neocortex needs trillions of synapses to do what it does for much the same reason that GPT-3 can do things that GPT-2 can't. Size matters, at least for this particular class of architectures.

*I think this is actually wrong - I don't think we can learn arbitrary domains, not even close. Humans are not general. Yann LeCun has repeatedly said this and I'm inclined to trust him. But I think that the human intelligence architecture might be general. It's just that natural selection stopped seeing net fitness advantage at the current brain size.

Comment by moridinamael on What are your greatest one-shot life improvements? · 2020-05-17T15:55:01.378Z · LW · GW

I grew up in warm climates and tend to suffer a lot in cold weather. I moved to a colder climate a few years ago and discovered scarves. Wearing scarves eliminated 90% of this suffering. Scarves are not exactly a bold and novel invention, but people from warm climates may underestimate their power.

Comment by moridinamael on LessWrong Coronavirus Agenda · 2020-03-18T12:26:48.380Z · LW · GW

Scaling up testing seems to be critical. With easy, fast and ubiquitous testing, huge numbers of individuals could be tested as a matter of routine, and infected people could begin self-isolating before showing symptoms. With truly adequate testing policies, the goal of true "containment" could potentially be achieved, without the need to resort to complete economic lockdown, which causes its own devastating consequences in the long term.

Cheap, fast, free testing, possibly with an incentive to get tested regularly even if you don't feel sick, could move us beyond flattening the curve and into actual containment.

Even a test with relatively poor accuracy helps, in terms of flattening the curve, provided it is widely distributed.

So I might phrase this as a set of questions:

  • Should I get tested, if testing is available?
  • How do we best institute wide-scale testing?
  • How do we most quickly enact wide-scale testing?

Comment by moridinamael on How Do You Convince Your Parents To Prep? To Quarantine? · 2020-03-16T15:52:23.171Z · LW · GW

As my brother pointed out to me, arguments are not won in real time. Give them information in packets and calmly deal with objections as they come up, then disengage and let them process.

Comment by moridinamael on What will be the big-picture implications of the coronavirus, assuming it eventually infects >10% of the world? · 2020-02-27T00:58:45.347Z · LW · GW

Perhaps there’s some obvious way in which I’m misunderstanding, but if 10% of people contract the virus over a shortish time frame then won’t essentially everyone contract it eventually? Why would it reach 10% penetration and then stop? Isn’t this like asking what happens if 10% of people contract influenza? Maybe in a given year your odds of getting the flu are X% but your odds of getting it once in 10 years are roughly 10*X%. Am I missing something that implies the virus will be corralled and gotten under control at a certain point?
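(For what it's worth, the exact version of that last approximation: with annual infection probability p, the chance of at least one infection in n years is \(1-(1-p)^n\), which is approximately \(np\) only while \(np \ll 1\); at p = 10% over 10 years the exact figure is \(1 - 0.9^{10} \approx 65\%\), so the linear estimate overstates the cumulative risk as p grows.)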

Comment by moridinamael on The two-layer model of human values, and problems with synthesizing preferences · 2020-01-27T17:48:03.610Z · LW · GW

Fantastic post; I'm still processing.

One bite-sized thought that occurs to me is that maybe this coupling of the Player and the Character is one of the many things accomplished by dreaming. The mind-system confabulates bizarre and complex scenarios, drawn in some sense from the distribution of possible but not highly probable sensory experiences. The Player provides an emotional reaction to these scenarios - you're naked in school, you feel horrifying levels of embarrassment in the dream, and the Character learns to avoid situations like this one without ever having to directly experience it.

I think that dreaming does this sort of thing in a general way, by simulating scenarios and using those simulations to propagate learning through the hierarchy, but in particular it would seem that viewing the mind in terms of Player/Character gives you a unique closed-loop situation that really bootstraps the ability of the Character to intuitively understand the Player's wishes.

Comment by moridinamael on How Doomed are Large Organizations? · 2020-01-22T19:07:49.137Z · LW · GW

I would love to see an answer to or discussion of this question. The premise of the OP that large companies would be better off if split into much much smaller companies is a shocking and bold claim. If conglomeration and growth of large firms were a purely Molochian and net-negative proposition, then the world would look different than it does.

Comment by moridinamael on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T16:05:58.610Z · LW · GW

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence" where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning" it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization", where "utility" is an abstracted, logical, System-2-valid rationalization). This is almost guaranteed to make you miserable, unless you're lucky enough that your System-1 valence calculation happens to match your System-2 logical deduction of the correct utilitarian course.

Possible courses of action include:

1. Brute forcing it, just doing what System-2 calculates is correct. This will involve a lot of suffering, since your System-1 will be screaming bloody murder the whole time, and I think most people will simply fail to achieve this. They will break.

2. Retraining your System-1 to find different things intrinsically meaningful. This can also be painful because System-1 generally doesn't enjoy being trained. Doing it slowly, and leveraging your social sphere to help warp reality for you, can help.

3. Giving up, basically. Determining that you'd rather just do things that don't make you miserable, even if you're being a bad utilitarian. This will cause ongoing low-level dissonance as you're aware that System-2 has evaluated your actions as being suboptimal or even evil, but at least you can get out of bed in the morning and hold down a job.

There are probably other options. I think I basically tried option 1, collapsed into option 3, and then eventually found my people and stabilized into the slow glide of option 2.

The fact that utilitarianism is not only impossible for humans to execute but actually a potential cause of great internal suffering to even know about is probably not talked about enough.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T18:22:44.928Z · LW · GW

For the record, I view the fact that I commented in the first place, and that I now feel compelled to defend my comment, as being Exhibit A of the thing that I'm whining about. We chimps feel compelled to get in on the action when the fabric of the tribe is threatened. Making the banning of a badguy the subject of a discussion rather than being an act of unremarked moderator fiat basically sucks everybody nearby into a vortex of social wagon-circling, signaling, and reading a bunch of links to figure out which chimps are on the good guy team and which chimps are on the bad guy team. It's a significant cognitive burden to impose on people, a bit like an @everyone in a Discord channel, in that it draws attention and energy in vastly disproportionate scope relative to the value it provides.

If we were talking about something socio-emotionally neutral like changing the color scheme or something, cool, great, ask the community. I have no opinion on the color scheme, and I'm allowed to have no opinion on the color scheme. But if you ask me what my opinion is on Prominent Community Abuser, I can't beg off. That's not an allowed social move. Better not to ask, or if you're going to ask, be aware of what you're asking.

Sure, you can pull the "but we're supposed to be Rationalists(tm)" card, as you do in your last paragraph, but the Rationalist community has pretty consistently failed to show any evidence of actually being superior, or even very good, at negotiating social blow-ups.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T17:27:12.568Z · LW · GW

I wasn’t really intending to criticize the status quo. Social consensus has its place. I’m not sure moderation decisions like this one require social consensus.

Comment by moridinamael on ialdabaoth is banned · 2019-12-13T16:21:19.900Z · LW · GW

If you're looking for feedback ...

On one level I appreciate this post as it provides delicious juicy social drama that my monkey brain craves and enjoys on a base, voyeuristic level. (I recognize this as being a moderately disgusting admission, considering the specific subject matter; but I'm also pretty confident that most people feel the same, deep down.) I also think there is a degree of value to understanding the thought processes behind community moderation, but I also think that value is mixed.

On another level, I would rather not know about this. I am fine with Less Wrong being moderated by a shadowy cabal. If the shadowy cabal starts making terrible moderation decisions, for example banning everyone who is insufficiently ideologically pure, or just going crazy in some general way, it's not like there's anything I can do about it anyway. The good/sane/reasonable moderator subjects their decisions to scrutiny, and thus stands to be perpetually criticized. The bad/evil moderator does whatever they want, doesn't even try to open up a dialogue, and usually gets away with it.

Fundamentally you stand to gain little and lose much by making posts like this, and now I've spent my morning indulging myself reading up on drama that has not improved my life in any way.

Comment by moridinamael on Mental Mountains · 2019-11-27T20:03:20.857Z · LW · GW

Maybe, but I don't think that we developed our tendency to lock in emotional beliefs as a kind of self-protective adaptation. I think that all animals with brains lock in emotional learning by default because brains lock in practically all learning by default. The weird and new thing humans do is to also learn concepts that are complex, provisional, dynamic and fast-changing. But this new capability is built on the old hardware that was intended to make sure we stayed away from scary animals.

Most things we encounter are not as ambiguous, complex and resistant to empirical falsification as the examples in the Epistemic Learned Helplessness essay. The areas where both right and wrong positions have convincing arguments usually involve distant, abstract things.