Posts

Democracy Is in Danger, but Not for the Reasons You Think 2022-11-06T21:15:43.962Z
The Village and the River Monsters... Or: Less Fighting, More Brainstorming 2022-10-03T23:01:18.973Z
A Critique of AI Alignment Pessimism 2022-07-19T02:28:14.340Z
Where Utopias Go Wrong, or: The Four Little Planets 2022-05-27T01:24:18.729Z
Basic Mindsets 2020-06-06T00:44:58.188Z
Order and Chaos 2019-11-28T21:27:22.979Z
The Foundational Toolbox for Life: Introduction 2019-06-22T06:11:59.497Z
Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn! 2018-11-17T23:00:05.983Z
AI Goal Alignment Entry: How to Teach a Computer to Love 2018-01-01T03:37:23.626Z

Comments

Comment by ExCeph on What's Not Our Problem · 2023-03-14T05:03:40.985Z · LW · GW

I agree that high-rung thinkers would benefit from putting forth a more collaborative and organized effort to resolve the golem problem, and not limiting themselves to the individual habit-building that Tim refers to in the Answers section of the Changing Course chapter.  

There are ways that Idea Labs can reclaim territory that has been ceded to the Power Games--ways to dissolve golems.  To bring down a golem, it is not necessary to seek power over policies or institutions.  Instead of a top-down approach, I prefer to start by deconstructing a golem's narrative.  

The deconstruction method starts with a values conversation: establishing understanding of what people actually want, rather than trying to establish a shared picture of the status quo.  After identifying the values at stake for people and demonstrating understanding of and respect for those values, the next step is exploring the effects of people’s current methods.  This is where people start to see how they might be harming others, and even themselves.  The last step is to present alternative approaches for accomplishing their goals, ones they can recognize as preferable.  It’s up to them to decide what to do with what they now understand.  

Deconstruction takes skill and practice to use reliably, but for quick reference I abbreviate the process as follows: 

  1. Make them comfortable
  2. Make them think
  3. Make them choose

The deconstruction approach is unlikely to persuade the entire population of the golem.  However, it can persuade enough people that the golem crumbles away as the people within it see the effectiveness of high-rung thinking at solving their problems.  

The genie doesn’t have to defeat the golem by beating it at the Power Games.  The genie can defeat the golem on the memetic level, by understanding the non-toxic values motivating the people in the golem, and addressing those values constructively, better than the golem itself can.  In other words, the genie can show people it knows what they want and can deliver.  

To make it faster and easier to identify people’s values, I’ve boiled down people’s motivations and the obstacles they face into some keywords.  Expressing what matters most as simply as possible has the added benefit of preventing people from latching onto particular methods of accomplishing their goals.  It allows people to recognize satisfactory solutions even if those solutions differ from what the people originally had in mind.  

Helping people get on the same page about the criteria for a solution is an essential first step towards building more effective genies, which is my area of specialty.  

For more of the tools that a genie would use to do the work of democracy and thereby outcompete golems, here is one of my more recent articles on the subject: https://ginnungagapfoundation.wordpress.com/2022/11/06/democracy-is-in-danger-but-not-for-the-reasons-you-think/.  

Thanks for starting this conversation!  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-12-11T06:19:07.945Z · LW · GW

I ended up writing a satirical poem about politicians exaggerating and perpetuating divisions in order to profit from conflict.  What do you think?  https://ginnungagapfoundation.wordpress.com/2022/12/11/your-party-is-not-your-friend-or-the-new-library-and-the-old-baseball-diamond/

P.S. Granted, the poem doesn't describe the process by which people are inspired to negotiate with each other and actually solve their problem.  In real life I expect that process can be made easier by... having people read the poem.  We'll see if the satire is effective.  

Comment by ExCeph on Democracy Is in Danger, but Not for the Reasons You Think · 2022-11-16T19:15:23.639Z · LW · GW

That's a fair point.  I should elaborate on the concept of stagnation, to avoid giving people the wrong impression about it.  

Stagnation is the fundamental liability defined by predictable limitations on people's motivations.  

Like the other liabilities, stagnation is also an intrinsic aspect of conscious existence as we know it.  Predictable motivations are what allow us to have identity, as individuals and as groups.  Identity and stagnation are two sides of the same coin--stagnation is just what we call it when it interferes with what we otherwise want.  

Our identities should not become prisons, not only because that prevents us from dealing with other liabilities but also because part of being conscious is not knowing everything about ourselves.  Choice is another aspect of consciousness, the flip side of conflict, defined by what we don't already know about our motivations.  Part of our existence is not always being able to predict which goal will triumph over other goals, either within a person or between different people.  

In short, it seems to me that we should make sure we never lose the ability to surprise ourselves.  When we know everything about what we will want in the future, then we lose an important part of what makes us conscious beings.  Does that make more sense?  

Comment by ExCeph on Democracy Is in Danger, but Not for the Reasons You Think · 2022-11-07T03:30:52.598Z · LW · GW

I appreciate your questions and will do my best to clarify.  

The values from the section you quoted pertain to civilization as a whole.  You are correct that individual motivations/desires/ambitions require other concepts to describe them (see below).  I apologize for not making that clear.  The "universal values" are instrumental values in a sense, because they describe a civilization in which individuals are more able to pursue their own personal motivations (the terminal values, more or less) without getting stuck.  

In other words, the "universal values of civilization" just mean the opposites of the fundamental liabilities.  We could put a rationalist taboo on the word "values" and simply say, "all civilizations want scarcity, disaster, stagnation, and conflict to not obstruct people's goals."  They just lose sight of that big-picture vision when they layer a bunch of lower-level instrumental values on top of it.  (And to be fair, those layers of values are usually more concrete and immediately practical than "make the liabilities stop interfering with what we want".  It's just that losing sight of the big picture prevents humanity from making serious efforts to solve big-picture problems.)  

The concepts describing the individual motivations are enumerated in this comment, which for brevity's sake I will link rather than copy: https://www.lesswrong.com/posts/BLddiDeE6e9ePJEEu/the-village-and-the-river-monsters-or-less-fighting-more?commentId=T7SF6wboFdKBeuoZz.  (As a heads up, my use of the word "values" lumps different classes of concept together: motivations, opposites of liabilities, tradeoffs, and constructive principles.  I apologize if that lumping makes things unclear; I can clarify if need be.)  

Valuing being treated with dignity would typically go under the motivation of idealization, while valuing social status over others could fall under idealization, acquisition, or control.  (It's possible for different people to want the same thing for different reasons.  Knowing their motivations helps us predict what other things they will probably also want.)  

As for what we can do when people have different priorities, I attempted to explain that in the part describing ethics, and included an example (the neighbors and the trombone).  Was there some aspect of that explanation that was unclear or otherwise unsatisfactory?  It might be necessary for me to clarify that even though my example was on the level of individuals, the principles of ethics also pertain to conflict on the policy level.  I chose an individual example because I wanted to illustrate pure ethics, and most policy conflicts involve other liabilities, which I predicted would confuse people.  Does that make more sense?  

(Your utopia isn't here because it's only easy in hindsight.)  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-09T01:55:57.262Z · LW · GW

Ah, that's where the anti-zombie shibboleths come in handy.  People who are afraid of zombies "know" that zombies can't understand the values of regular, living people.  (The zombies being a metaphor for a distorted view of one's ideological opponents.)  

All I have to do is describe why being alive is good and being a zombie is bad, and that proves I'm not a zombie.  That calms people down, to the point where we can explore some possible advantages of zombiehood and disadvantages of having vital functions, and what we can do about that without losing what we value about breathing, having a heartbeat, et cetera.  

Any expert on conflict resolution can tell you that one of the first things to do is to paraphrase and validate someone's concerns.  I can tell you that if you dig deep enough existentially into someone's values, there's usually something to understand, and even agree with on some level, even if you don't agree with the methods they use to pursue those values.  

As for the politicians spreading panic, they aren't literally standing around screaming at people all the time.  There is plenty of opportunity to help people feel safe enough to think.  The main problem that I occasionally run into is when a person just gets into a loop of regurgitating information, like they're a one-person echo chamber.  Those people tend to be on the older side, and I don't think they're prevalent enough or capable enough to try and shut down intelligent discussion.  

Does that all make sense?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-09T01:38:00.690Z · LW · GW

That's a valid way to look at it.  I used to use three axes for them: increase versus decrease, experience versus influence, and average versus variance (or "quantity versus quality").  

I typically just go with the eight desires described above, which I call "motivations".  It's partially for thematic reasons, but also to emphasize that they are not mutually exclusive, even within the same context.  

It is perfectly possible to be both boldness-responsive and control-responsive: seeking to accomplish unprecedented things and expecting to achieve them without interference or difficulty.  That's simultaneously breaking and imposing limits through one's influence.  

Likewise, it's possible to be both acquisition-responsive and relaxation-responsive: seeking power over a larger dominion without wanting to constantly work to maintain that power.  

They're not scalars, either--curiosity about one topic does not always carry over to other topics.  There's a lot of nuance in motivation, but having concepts that form a basis for motivation-space helps.  

These motivations are not goals in and of themselves, but they help us describe what sorts of goals people are likely to adopt.  You could call them meta-goals.  It's a vocabulary for talking about what people care about and what they want out of life.  I suppose it's part of the basis for my understanding of Fun Theory.  

What do you think?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-07T23:09:30.313Z · LW · GW

That's where the deconstruction method comes in: 

  1. Make them comfortable.
  2. Make them think.
  3. Make them choose.

The first step is most important.  You don't have to start by convincing someone there are no zombies.  You just have to show them that you're not going to let any zombies get them.  Sometimes that means making small concessions by agreeing to contingencies against hypothetical zombies.  

You can tell someone that there's nothing in the dark basement, but to get them to make it five feet in to the light switch, sometimes it's most effective to just hand them a crowbar for defense.  

People need to feel safe before they can think.  I consider this technique an Asymmetric Weapon version of empathy mindset: making people feel safe helps them feel comfortable suspending their assumptions and reevaluating them.  

How does that sound?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-07T16:40:05.549Z · LW · GW

I count eight fundamental desires, but they can take countless forms based on context.  For example, celebration might lead one person to seek out a certain type of food, while leading another person to regularly go jogging.  It's the same motivation, manifesting in response to two different stimuli.  

Here are the eight fundamental desires: 

  • Celebration, the desire to bring more of something into one's experience
  • Acquisition, the desire to bring more of something into one's influence
  • Insulation, the desire to push something out of one's experience
  • Relaxation, the desire to push something out of one's influence
  • Curiosity, the desire for unpredictable experience
  • Boldness, the desire for unpredictable influence
  • Idealization, the desire for more predictable experience
  • Control, the desire for more predictable influence
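
For readers who prefer code, here is a rough sketch in Python (purely illustrative; the class names are just my own labels for the taxonomy above, not anything standard) of how the eight desires factor into two dimensions: whether they concern one's experience or one's influence, and whether they push toward more of something, less of something, less predictability, or more predictability.

    from enum import Enum
    from dataclasses import dataclass

    class Domain(Enum):
        EXPERIENCE = "experience"  # what one perceives and undergoes
        INFLUENCE = "influence"    # what one shapes and maintains

    class Mode(Enum):
        MORE = "bring more in"            # expand
        LESS = "push out"                 # contract
        UNPREDICTABLE = "unpredictable"   # seek surprise
        PREDICTABLE = "predictable"       # seek reliability

    @dataclass(frozen=True)
    class Desire:
        name: str
        mode: Mode
        domain: Domain

    DESIRES = [
        Desire("celebration", Mode.MORE, Domain.EXPERIENCE),
        Desire("acquisition", Mode.MORE, Domain.INFLUENCE),
        Desire("insulation", Mode.LESS, Domain.EXPERIENCE),
        Desire("relaxation", Mode.LESS, Domain.INFLUENCE),
        Desire("curiosity", Mode.UNPREDICTABLE, Domain.EXPERIENCE),
        Desire("boldness", Mode.UNPREDICTABLE, Domain.INFLUENCE),
        Desire("idealization", Mode.PREDICTABLE, Domain.EXPERIENCE),
        Desire("control", Mode.PREDICTABLE, Domain.INFLUENCE),
    ]

    # The two dimensions generate exactly the eight desires listed above.
    assert len(DESIRES) == len(Mode) * len(Domain)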

The four fundamental liabilities can impede us from fulfilling our desires, so people often respond by developing instrumental values, which make it easier to fulfill desires.  Some of these values are tradeoffs, but others are more constructive.  Values inform a society's public policy.  

  • For the liability of scarcity, the tradeoffs are wastefulness and austerity, and the constructive value is investment.
  • For the liability of disaster, the tradeoffs are negligence and susceptibility, and the constructive value is preparation.
  • For the liability of stagnation, the tradeoffs are decadence and dogma, and the constructive value is transcension.
  • For the liability of conflict, the tradeoffs are turmoil and corruption, and the constructive value is ethics.

Identical desires would not automatically lead to harmony; people who want the same thing can end up fighting over it.  Identical values might help, if it means people support the same policies for society.  

Using ethics to reconcile conflict is not a trivial undertaking, but it makes it much more possible for people to establish mutual trust and cooperation even if they can't all get everything they want.  By working together, they will likely find they can get something just as satisfactory as what they originally had in mind.  That's a society that people can feel good about living in.  

Does that all make sense?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-07T04:06:20.577Z · LW · GW

As you say, the ability to coordinate large-scale action by decree requires a high place in a hierarchy.  With the internet, though, it doesn't take authority just to spread an idea, as long as it's one that people find valuable or otherwise really like.  I'm not sure why adjacency has to be "proper"; I'm just talking about social networks, where people can be part of multiple groups and transmit ideas and opinions between them.  

Regarding value divergence: Yes, there is conflict because of how people prioritize desires and values differently.  However, it would be a huge step forward to get people to see that it is merely their priorities that are different, rather than their fundamental desires and values.  It would be a further huge step forward for them to realize that if they work together and let go of some highly specific expectations of how those desires and values are to be fulfilled (which they will at least sometimes be willing to do), they can accomplish enormous mutual benefit.  This approach is not going to be perfect, but it will be much better than what we have now because it will keep things moving forward instead of getting stuck.  

Your suggestions are indeed ways to make the world a better place.  They're just not quite fast enough or high-impact enough for my standards.  Being unimpressed with human philosophy, I figured that there could easily be some good answers that humans hadn't found because they were too wrapped up in the ones they already had.  Therefore, I decided to seek something faster and more effective, and over the years I've found some very useful approaches.  

When I say a field is "low-hanging fruit", it's because I think that there are clear principles that humans can apply to make large improvements in that field, and that the only reason they haven't done so is they are too confused and distracted (for various reasons) to see the simplicity of those principles underneath all the miscellaneous gimmicks and complex literature.  

The approach I took was to construct a vocabulary of foundational building-block concepts, so that people can keep a focus on the critical aspects of a problem and, to borrow from Einstein, make everything as simple as possible, but no simpler.  

There's tremendous untapped potential in human society as a whole, and the reason it is untapped is that humans don't know how to communicate with each other about what matters.  All they need is a vocabulary for describing goals, the problems they face in reaching those goals, and the skills they need to overcome those problems.  I'm not knowledgeable enough or skilled enough to solve all of humanity's problems--but humanity is, once individual humans can work together effectively.  My plan is simply to enable them to do that.  

I understand that most people assume it's not possible because they've never seen it done and are used to writing off humans (individually and collectively) as hopeless.  Perhaps I should dig through the World Optimization topics to see if there's anyone in this community who recognizes the potential of facilitating communication.  

In any case, I appreciate your engagement on this topic, and I'm glad you enjoyed the story enough to comment.  If you do decide to explore new options for communication, I'll be around.  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-05T23:07:35.976Z · LW · GW

Not all human politics is low-hanging fruit, to be sure.  I was thinking of issues like the economy, healthcare, education, and the environment.  It seems like there are some obvious win-win improvements we can make in those contexts if we just shift the discussion in a constructive direction.  We can show people there are more ideas for solutions than just the ones they've been arguing about.  

It is true that the process shown in this story is not sufficient to dismantle religion.  Such an undertaking requires a constructive meta-culture with which to replace religion.  As it happens, I've got a basis for one of those now, but humans will have to fill in the specifics to suit their own needs and styles.  (A constructive meta-culture must address the fundamental liabilities of scarcity, disaster, stagnation, and conflict using the four constructive principles of investment, preparation, transcension, and ethics.  How it does that depends on the physical and social context and on the choices of the society.)  

The trick to effective communication is to start out by identifying what people care about.  This step is easy enough with a toolbox of basic concepts for describing value.  The next step is to find the early adopters, the first ones who are willing to listen.  They can influence the people ideologically adjacent to them, who can influence the people adjacent to them, et cetera.  

By contrast, if we don't reach out to people and try to communicate with them, there are limitations on how much society can improve, especially if you are averse to conquest.  

For this reason, I conclude that facilitating communication about values and solutions seems to be the single best use of my time.  Whatever low-hanging fruit exists in other fields, it will all run into a limiting factor based on human stagnation and conflict.  I don't know if you have an extraordinary effort, but this one is mine.  I make it so that the effort doesn't have to be nearly so extraordinary for other people.  

So far as I can tell, the tools I've accumulated for this endeavor appear to be helping the people around me a great deal.  The more I learn about connecting with people across different paradigms, the easier it gets.  It starts with expressing as simply as possible what matters most.  It turns out there is a finite number of concepts that describe what people care about.  

There's a lot more I've been up to than what you see here; I just haven't spent much time posting on LessWrong because most people here don't seem to consider it important or feasible to introduce other people to new paradigms for solving problems.  

Is there another approach to making the world a better place without changing how humans think, that I'm unaware of?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-05T22:29:25.566Z · LW · GW

Good questions!  

Most pro-choice people I have discussed the issue with are already on the same page about how personhood does not start at conception, and for similar reasons.  I don't usually run the thought experiments by them to see if our reasoning processes are the same; I should do that.  I do know that some pro-choice people do think that a zygote is a "person" but that its rights do not supersede its parent's bodily autonomy, at least in the early stages.  

When pro-life people brush the thought experiments and intuition pumps aside, I usually invite them to reflect on why we ascribe unique rights to the human species in the first place, compared to other life forms.  The United States Declaration of Independence notwithstanding, I do not hold that rights are "self-evident", but rather that society derives them from principles that result in a society that people actually want to live in, even if they don't have a rigorous understanding of what they're doing.  This doesn't work much better.  

I think that the issue with abortion is not so much a lack of answers to be had, but rather that most people have mental blocks around the questions.  I think most humans are afraid of asking the tough questions because they're afraid they won't like the answers, but I find the answers tend to be quite reassuring.  Most other political issues are unlikely to run into the same problem, because they tend not to involve existential questions on the nature of consciousness.  I welcome any insights or suggestions you have to offer, though.  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-05T19:38:25.331Z · LW · GW

You raise a good point.  This story does not contain politicians who profit from playing factions against one another and maintaining polarization.  It might make the story a bit more applicable to our world if there were villagers who gained social influence from being the champions of each side while never engaging in negotiation or brainstorming, and who subsequently lost that power once the villagers learned how to do those things for themselves.  I may go back and add that in; thanks for the suggestion!  

As for our own world, I predict that as people see just how possible it is to find common ground and build on it, they will lose their susceptibility to the polarization efforts of politicians.  By learning how to establish mutual understanding and trust instead of fearing each other, they will become more willing to vote their own faction's politicians out of office and will therefore be able to hold their politicians accountable.  The politicians will be forced to do their jobs effectively in order to keep their positions.  

Does that make sense?  

Comment by ExCeph on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-04T02:59:05.154Z · LW · GW

Do you think there would be a problem with attempting to reconcile people's values on abortion?  

You jest, but abortion is actually on my list of future Midmorning Zone articles.  The Midmorning Zone series follows discussions between two characters representing different sides of various issues.  In doing so, it demonstrates how they can use the reconciliation method to figure out constructive approaches they can collaborate on.  

Part of what makes it difficult for humans to discuss abortion is the need to detangle the cultural baggage around sex (which boils down to a false dichotomy between decadence and dogma) from the ethical questions about personhood.  

Regarding the latter, people need to confront the question of why we ascribe rights to living humans, because that informs what criteria we want to use to decide what rights a living human organism has at what stages in its development.  I strongly suspect that what makes the personhood question difficult for people to acknowledge is that they fear that a definition of "person" that isn't "living human" will allow people with evil intentions to warp the definition of "person" to exclude people they don't like.  That's why they resist considering scenarios where being a human is neither necessary nor sufficient for possessing a sapient mind, which is what I would consider a "person".  ("Sapient mind" is a rather slippery concept for most humans, because they don't learn the concepts for defining one in school.  They are understandably apprehensive about trying to define it in policy.)  

The only reason I haven't finished the abortion Midmorning Zone article yet is because I haven't had much success with getting pro-life people to acknowledge the personhood question, even amongst pro-life secular intellectuals, and it would be intellectually dishonest of me to portray a pro-life character as accepting a paradigm when I haven't yet seen that happen in real discussions.  

So yes, I am applying the reconciliation method on the hardest problems, with gradual progress.  No, I haven't influenced humans to reconcile over every ideological conflict involving difficult fundamental abstract ethical questions.  The scientific method hasn't answered all of the questions of the physical universe yet, either, but that doesn't mean it's not worth practicing.  

If we can help people with more concrete political disputes and change how they approach conflict, we can work our way up from there.  I always figured Effective Altruism was more comfortable with incremental improvement than I myself am, anyway.  

With that in mind, do you have any suggestions for issues to start using this approach with, or any concerns about potential negative consequences of trying?  

Comment by ExCeph on A Critique of AI Alignment Pessimism · 2022-07-20T03:53:02.961Z · LW · GW

(Made a few cosmetic tweaks to make some sentences less awkward.)  

Comment by ExCeph on Ruling Out Everything Else · 2022-05-27T22:50:47.532Z · LW · GW

This seems like a good analysis of how a person can use what I call the mindsets of reputation and clarification.  

Reputation mindset combines the mindsets of strategy and empathy, and it deals with fortifying impressions.  It can help one to be aware of emotional associations that others may have for things one may be planning to say or do.  That way one can avoid unintended associations where possible, or preemptively build up positive emotional associations to counteract negative ones that can't be avoided, such as by demonstrating one understands and supports someone's values before criticizing the methods they use to pursue those values.  

Clarification mindset combines strategy mindset and semantics mindset, and it deals with explicit information.  It helps people provide context and key details to circumvent unintended interpretations of labels and rules, or at least the most likely misinterpretations in a particular context.  

(Reputation and clarification make up the strategy side of presentation mindset.  Presentation deals with ambiguity in general, and the strategy side handles robust communication.)  

These are powerful tools, and it's helpful to have characterizations of them and examples of use cases.  Nicely done!  

Comment by ExCeph on Secular Solstice Online (Americas) · 2020-12-16T06:59:12.048Z · LW · GW

Logistics page here for those like me who didn't check the main page: https://www.lesswrong.com/posts/TH5qNLtuEi5MRwnq6/logistics-for-the-2020-online-secular-solstice-celebration

Comment by ExCeph on Basic Mindsets · 2020-06-08T04:06:17.498Z · LW · GW

1. Ah, now I see. Yes, removing assumptions is one good way to direct one's use of synthesis mindset. It helps with exploring the possibilities.

2. Organization can gather information efficiently, but integrating it all and catching contradictions is normally a job more suited for analysis. It's still possible to combine the two. That can end up forming strategy or something similar, or it could be viewed as using the mindsets separately to support each other.

Does that make sense?

Comment by ExCeph on Basic Mindsets · 2020-06-06T23:04:03.851Z · LW · GW

Thanks for the input!

1. You mean we can fiddle with the explicit assumptions we use with synthesis mindset? That can help, but to get the full benefit of synthesis I find it's often better to let go of explicit assumptions, and then apply other mindsets with those explicit assumptions to the results yielded by synthesis.

Otherwise our explicit assumptions may cause synthesis to miss hypotheses that ultimately point us in a helpful direction, even though those hypotheses themselves violate the explicit assumptions. Sometimes the issue is that we make too many assumptions and need to remove some of them, and practicing synthesis is a good way to do that. Does that address your point?

2. I'm not sure what you mean by replacing the goal of 'utility' with information. Can you please elaborate on that?

3. Fixed, thanks. Not sure how the goats got in there, but I'll check the latch on the gate.

4. That's encouraging. I'll stand by for more feedback. Glad you liked it!

Comment by ExCeph on Order and Chaos · 2019-11-29T19:58:13.766Z · LW · GW

I confess, your comment surprised me by calling for a different epistemic standard than I figured this article required. I had to unpack and address several issues, listed below.

  1. I can make a bibliography from the links I’ve already included, if it would help.
  2. Are there any specific assertions in this article that you think call for more evidence to support them over the alternatives?
  3. This article is meant to build the foundation for explaining the concepts that we'll be working with in the next article. After that article, we'll mostly be using those concepts instead. Those will be supported by your own observations of how people learn different skills with varying degrees of difficulty.
  4. I didn't know how much of the theory I was building on would be taken as a given in this community, so I decided to just post and see what wasn't already part of the general LW paradigm. I’d like to hear from more people before I make any judgment calls.
  5. These ideas at this point in the sequence are not intended to make new predictions that would require the introduction of new evidence. They are intended to help the reader more clearly and efficiently conceptualize the information they already have. This article asserts that some ideas are conceptually distinct from each other and others aren’t, which is not an empirical issue. The technical terms I introduce in the article are a condensation and consolidation of existing ideas, so that people can more easily process and apply new information. I predict that as I continue to explain the paradigms I’ve developed, they will be consistent with each other and with empirical evidence, and that the reader will develop a more elegant perspective which will allow them to apply their knowledge more effectively. It may be that I need to make that more clear in future articles.
  6. In order to think effectively, there are many concepts we can and must learn and apply without relying on the scientific establishment to do experiments for us.

Does that all make sense? I'll work on framing future articles so that it's clear when they are making empirical predictions from evidence and when they are presenting a concept as being better than other concepts at carving reality at its joints.

Comment by ExCeph on The Foundational Toolbox for Life: Introduction · 2019-06-23T07:23:22.129Z · LW · GW

Practice with different example problems is indeed important for helping people internalize the principles behind the skills they're learning. However, just being exposed to these problems doesn't always mean a person figures out what those principles are. Lack of understanding of the principles usually means a person finds it difficult to learn the skill and even more difficult to branch out to similar skills.

However, if we can explicitly articulate those principles in a way people can understand, such as illustrating them with analogies or stories, then people have the foundation to actually get the benefits from the practice problems.

For example, let's say you see numbers being sorted into Category A or Category B. Even with a large set of data, if you have no mathematical education, you could spend a great deal of effort without figuring out what rule is being used to determine which category a number belongs in. You wouldn't be able to predict the category of a given number. To succeed, you would have to derive concepts like square numbers or prime numbers from scratch, which would take most people longer than they're willing or able to spend. However, if you're already educated on such concepts, you have tools to help you form hypotheses and mental models much more easily.
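
To make that concrete, here is a toy sketch in Python (purely illustrative; the data and the hidden rule are made up for this example). Suppose Category A happens to be the prime numbers: someone who already has the concept "prime" can state and test that hypothesis in a few lines, while someone without the concept has nothing to test and is stuck guessing rules from scratch.

    def is_prime(n: int) -> bool:
        """Trial-division primality check: the kind of ready-made concept
        a mathematical education hands you."""
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    # Observed data: numbers already sorted into Category A or Category B.
    observations = {2: "A", 3: "A", 4: "B", 5: "A", 6: "B", 9: "B", 11: "A"}

    # Hypothesis: "Category A is the primes."  Having the concept lets us
    # check the rule directly against the observations.
    hypothesis_fits = all(
        ("A" if is_prime(n) else "B") == category
        for n, category in observations.items()
    )
    print(hypothesis_fits)  # True for this made-up data set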

The objective here is to provide a basic conceptual framework for at least being aware of all aspects of all types of problems, not just easily quantifiable ones like math problems. If you can put bounds on them, you are better equipped to develop more advanced and specific skills to investigate and address them.

And yes, experiments on the method's effectiveness may be very difficult to design and run. I tend to measure effectiveness by whether people can grasp concepts they couldn't before, and whether they can apply those concepts with practice to solve problems they couldn't before. That's proof of concept enough for me to work on scaling it up.

Does that answer your question?

Comment by ExCeph on Should rationality be a movement? · 2019-06-22T06:32:53.086Z · LW · GW

With finesse, it's possible to combine the techniques of truth-seeking with friendliness and empathy so that the techniques work even when the person you're talking to doesn't know them. That's a good way to demonstrate the effectiveness of truth-seeking techniques.

It's easiest to use such finesse on the individual level, but if you can identify general concepts which help you understand and create emotional safety for larger groups of people, you can scale it up. Values conversations require at least one of the parties involved to have an understanding of value-space, so they can recognize and show respect for how other people prioritize different values even as they introduce alternative priority orderings. Building a vocabulary for understanding value-space to enable productive values conversations on the global scale is one of my latest projects.

Comment by ExCeph on Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn! · 2018-11-19T04:14:33.205Z · LW · GW

Yes, that's exactly what I meant, and that's a great clarification. I do prefer looking at the long-term expected utility of a decision, as a sort of Epicurean ideal. (I'm still working on being able to resist the motivation of relaxation, though.)

Comment by ExCeph on Ia! Ia! Extradimensional Cephalopod Nafl'fhtagn! · 2018-11-18T04:33:08.156Z · LW · GW

The specific attributes I was referring to in that sentence are three out of what I call the four primary attributes:

  • Initiative (describes how much one relies on environmental conditions to prompt one to start pursuing a goal)
  • Resilience (describes how much one relies on environmental conditions to allow one to continue pursuing a goal)
  • Mobility (describes how rapidly one can effectively change the parameters of one's efforts)
  • Intensity (describes how far one can continue pushing the effects of one's efforts)

I had only been using intensity since I didn't know about the others and didn't develop them naturally. Since they are stronger combined than the sum of their separate effects, I was stuck at less than 25% of my theoretical maximum effectiveness.

The deep differences in worldview that you refer to are something that I've noticed as well. The different mindsets people use inform what aspects of the world they are aware of, but when those awarenesses don't overlap enough, conflict seems almost inevitable.

I agree that knowing our utility functions is also important. For one thing, it helps with planning. For another, it lets us resist being controlled by our motivations, which can happen if we get too attached to them, or if we are only responsive to one or two of them. (That may have been what you meant by "exercising agency"?) "Eschatology" is an interesting way of phrasing that. It puts me in mind of the fundamental liabilities that threaten all goals. I wish we taught people growing up how to both accept and manage those liabilities.

I'll be writing a sequence elaborating on all of these concepts, which I've been applying in order to become more capable.

Comment by ExCeph on On Doing the Improbable · 2018-10-30T03:36:59.946Z · LW · GW

You raise a good point about the multiple factors that go into motivation and why it's important to address as many of them as possible.

I'm having trouble interpreting your second paragraph, though. Do you mean that humanity has a coordination problem because there is a great deal of useful work that people are not incentivized to do? Or are you using "coordination problem" in another sense?

I'm skeptical of the idea that a solution is unlikely just because people haven't found it yet. There are thousands of problems that were only solved in the past few decades when the necessary tools were developed. Even now, most of humanity doesn't have an understanding of whatever psychological or sociological knowledge may help with implementing a solution to this type of problem. Those who might have such an understanding aren't yet in a position to implement it. It may just be that no one has succeeded in Doing the Impossible yet.

However, communities and community projects of varying types exist, and some have done so for millennia. That seems to me to serve as proof of concept on a smaller scale. Therefore, for some definitions of "coordinating mankind" I suspect the problem isn't quite as insurmountable as it may look at first. It seems worth some quality time to me.

Comment by ExCeph on On Doing the Improbable · 2018-10-28T23:37:14.219Z · LW · GW

I'm painfully familiar with the issue of lack of group participation, since I can't even get people to show up to a meetup.

Because of that, I've been doing research on identifying the factors contributing towards this issue and how to possibly mitigate them. I'm not sure if any of this will be new to you, but it might spark more discussion.
These are the first ideas that come to mind:

1. For people to be intrinsically motivated to do something, the process of working on it has to be fun or fulfilling.

2. Extrinsic motivation, as you say, requires either money or a reason to believe the effort will accomplish more than other uses of one's time would. If it's a long-term project, the problem of hyperbolic discounting may lead people to watch TV or [insert procrastination technique here] instead, even if they think the project is likely to succeed.

3. If people already have a habit of performing an activity, then it takes less effort for them to participate in similar activities and they demand less benefit from doing so. Identifying habits that are useful for the task you have in mind can be tricky if it's a complex issue, but successfully doing so can keep productivity consistent and reduce its mental cost.

4. Building a habit requires either intense and consistent motivation, or very small steps that build confidence. Again, though, identifying very small steps that still make for good productivity early on may be tricky.

5. If you have trouble getting people to start joining, it may be good to seek out early adopters to provide social proof. However, the social proof may only work for people who are familiar with those specific early adopters and who take cues from them. In that case, you may need to find some regular early adopters and then identify trendsetters in society (habitual early adopters from whom many people take their cues) you could get on board, after which their followers will consider participating. (Then the danger becomes making sure that the participants understand the project, but at least you have more people to choose from.)

6. It may help to remind people from time to time what they're working towards, even though everyone already knows. Being able to celebrate successes and take some time to review the vision can go quite a ways in relieving stress when people start to feel like their work isn't rewarding.

From item 1, if people think they can get a benefit from working on a project even if the project fails, they might be willing to participate. Socializing with project members and forming personal relationships with them may help in this respect, since they'll enjoy working with people. Alternatively, you could emphasize the skills they'll pick up along the way.

From item 4, I've been working on "mini-habits" (a concept I got from Stephen Guise) to lower my own mental costs for doing things, and it seems to be working fairly well. Then the trick becomes getting enough buy-in (per item 5) so you can get other people started on those mini-habits.

There are probably some other factors I'm overlooking at the moment. Since I haven't been able to get results yet, I can't say for sure what will work, but I hope this provides a helpful starting point for framing the problem.

Comment by ExCeph on Is Rhetoric Worth Learning? · 2018-04-16T01:56:39.244Z · LW · GW

Currently, Difficult Conversations is the only book I recommend to literally all people, because it establishes the principles and practices of effective collaborative truth-seeking. If you want a good chance of persuading someone of something they are already opposed to, you have to demonstrate that you understand their point of view and value their well-being. (On a similar note, I read Ender's Game in middle school and took to heart the idea of understanding your adversaries so well that you love them.)

Can the art of influencing emotions be used for destructive purposes? Yes. It's certainly possible to play off of many humans' biases to get them to adopt positions that are arbitrarily chosen by an outside source, by presenting different perspectives of situations and associating different emotions with them. However, it is also possible to explore as many relevant aspects of a situation as possible, validate people's concerns, and have your concerns listened to in turn. Like any other tool it can be used to constructively get people to feel better about seeking the truth. Rhetoric allows you to reframe a situation and get people to go along with it. Some try to reframe a situation for selfish purposes, but you can still frame a situation as accurately as possible, and persuade people to accept and contribute to this reframing.

Here's a twist, though: rhetoric would still be important even if people were rational truth-seekers by default. You can't accurately and efficiently convey the relevant aspects of a situation or idea without rhetoric. Without it, the people listening to you will have to spend more energy than necessary to understand your meaning, because you won't know how to arrange your message in a logical order with clear language.

You'd also be missing a quick method for getting people to start appreciating others' emotions and different cultural frames of reference. Even putting them through a simulation wouldn't work as well; their own frame of reference (the Curse of Knowledge) would likely prevent or delay them from having an epiphany about the other person's paradigm. Sometimes you just need to spell things out, and for that, you need rhetoric and other communication skills.

Just because rhetoric isn't sufficient to seek truth doesn't mean it's not necessary. If we tossed out everything that can be used for destruction as well as for construction, we'd be facing the world naked.

Comment by ExCeph on AI Goal Alignment Entry: How to Teach a Computer to Love · 2018-01-03T01:56:47.072Z · LW · GW

How to actually construct the AI was not part of the scope of the essay request, as I understood it. My intention was to describe some conceptual building blocks that are necessary to adequately frame the problem. For example, I address how utility functions are generated in sapient beings, including both humans and AI. Additionally, that explanation works whether or not huge paradigm shifts occur.
No amount of technical understanding is going to substitute for an understanding of why we have utility functions in the first place, and what shapes they take. Rather than the tip of the iceberg, these ideas are supposed to be the foundation of the pyramid. I didn't write about my approach to the problems of external reference and model specification because they were not the subject of the call for ideas, but I can do so if you are interested.

Furthermore, at no point do I describe "programming" the AI to do anything--quite the opposite, actually. I address that when I rule out the concept of the 3 Laws. The idea is effectively to "raise" an AI in such a way as to instill the values we want it to have. Many concepts specific to humans don't apply to AIs, but many concepts specific to people do, and those are ones we'll need to be aware of. Apparently I was not clear enough on that point.

Comment by ExCeph on Announcing the AI Alignment Prize · 2018-01-01T03:41:35.749Z · LW · GW

Submitting this entry for your consideration: https://www.lesserwrong.com/posts/bkoeQLTBbodpqHePd/ai-goal-alignment-entry-how-to-teach-a-computer-to-love. I'll email it as well. Your commitment to this call for ideas is much appreciated!

Comment by ExCeph on [deleted post] 2017-11-08T04:28:21.602Z

Based on my understanding of the wide variety of human thought, there are several basic mindsets which people use to address situations and deal with problems. Many people only use the handful that come naturally to them, and the mindsets dealing with abstract reasoning are some of the least common. Abstract reasoning requires differentiating and evaluating concepts, which are not skills most people feel the need to learn, since in most cases concepts are prepackaged for their consumption. Whether these packages represent reality in any useful way is another story...

To use your examples, planning one's day takes an awareness of resources, requirements, and opportunities; an ability to prioritize them; and the generation and comparison of various options. Some people find it difficult, but usually not because they don't already have all the concepts they need. It is certainly conscious thought, but it does not deal with the abstract. This is organization mindset.

Reacting to what one's friends say and do in social situations usually draws on one of two related mindsets: dealing with people similar to oneself takes intuition, and usually does not call for much imagination. Feeling out the paradigms and emotions of a less similar person requires a blend of both intuition and imagination. That leads to an appreciation for differences, but doesn't help with hard rules.

Thinking about the future doesn't require abstract reasoning, if it's just extrapolation based on past experiences, or wishful thinking blended from experiences and desires. Serious predictions, though, should have an understanding of causality, and for that, abstract thinking is necessary.

Mostly, pattern-matchers make decisions based on what they think is supposed to happen in a situation, based in turn on past experiences or what they've heard, or seen on TV. They accept that things won't always work out for them, but they sometimes don't know how to learn from their failures, or they learn an unbalanced lesson.

From a pattern-matcher's perspective, things just sort of happen. Sometimes they have very simple rules, although people disagree on what those rules are and mostly base their own opinion on personal experience and bias (but those who disagree are usually either obviously wrong or "just as right in their own way"). Other times things have complex and arcane rules, like magic. A person with a high "intelligence" (which is implicitly assumed to be a scalar) can make use of these rules to achieve impressive things, like Hollywood Hacking. With ill-defined limits and capabilities, such a person would be defeated either by simply taking out their hardware or by a rival hacker who is "better". The rules wouldn't mean much to the audience anyway, so they're glossed over or blurred beyond recognition.

Does that help with visualizing non-abstract thought?

Comment by ExCeph on [deleted post] 2017-11-07T04:42:31.942Z

Just to add some more examples, I frequently pick up on some of the following things in casual social situations:

  • Use of textbook biases and logical fallacies
  • Reliance on "common sense" or "obviousness"
  • Failure to recognize nuanced situations (false dichotomies)
  • Failing at other minds
  • Failure to recognize diminishing marginal returns
  • Failure to draw a distinction between the following concepts:
    • Correlation and causation
    • Description and norm (is and ought)
    • Fact and interpretation
    • Necessary and sufficient
    • Entertaining an idea and accepting it

What distinguishes someone who has not learned how to think abstractly isn't just that they make these mistakes, but that when you call them on it and explain the principle to them, they still don't know what their mistake means or how it could weaken their position in any way. A good counterexample or parable usually helps them see what they're overlooking, though.

Comment by ExCeph on [deleted post] 2017-11-05T17:07:25.242Z

I've been afraid that most people lack abstract reasoning for quite some time. Thank you for describing the phenomenon so clearly. However, I also fear that you may be underestimating its biggest consequence in your life.

I strongly suspect that the biggest consequence of people lacking abstract reasoning isn't that different methods are required to explain concepts to pattern-matching people, but rather that most of the systems and institutions around you have been designed by people who have or had poor abstract reasoning skills, and that this will continue to be the case unless something is done about it.

The further consequence is that these structures are only equipped to deal with situations that the designers could conceptualize, which is limited to their immediate experiences. Unprecedented situations, long-term or large-scale effects, or immediate effects that they simply have not yet learned to notice are all ignored for the purposes of design, and this results in problems that might have been avoided, maybe even easily, had abstract reasoning been applied towards the project. These sorts of problems are the bane of my existence.

Following from this, I advocate for teaching abstract reasoning, if possible, from an early age. (Ensuring that most people possess such thinking skills is my central life purpose for the foreseeable future.) I believe it is likely possible, but have not yet compiled evidence or a formal argument for its feasibility. At the very least, I believe it is worth a try, and have been working on a project to address the situation for some years now. For elaboration on why I believe it is important, I refer to my response to this post: https://www.lesserwrong.com/posts/dPLLnAJbas97GGsvQ/leave-beliefs-that-don-t-constrain-experience-alone

Comment by ExCeph on Yet another failed utopia · 2017-11-03T02:01:11.309Z · LW · GW

I strongly suspect that you cannot, with a feedback loop as you describe. If you measure discontent based on social media, suffering that does not get posted to social media effectively does not exist. The AI would need a way of somehow recognizing that the social media are only its window to the discontent that exists in the world beyond, which is what it is intended to minimize. Proverbially, it would need to be able to look at the moon rather than the finger pointing to it.

Comment by ExCeph on Leaders of Men · 2017-11-03T01:50:17.972Z · LW · GW

I would argue that for larger, more complex projects, it seems crucial to have basic proficiency in supporting skills as well as the core skills. It is not uncommon for a person with extreme skill in one area to fail or experience diminishing marginal returns on their skill, because it is necessary but not sufficient to succeed in their goal.

Between a person with core skills for a project and one with supporting skills, the person with core skills will get better results. However, between a person with core skills and one with core and supporting skills or subordinates with supporting skills, I predict the latter has a good chance of doing better on a complex project even if their core skills are not as strong.

In the baseball example, the core skill seems to be eliciting effort from the team, and the supporting skill would be optimizing the allocation of that effort. They may not be inherently "core" and "supporting", though: it may just be that eliciting effort seems sufficient for victory (and therefore "core") because few other coaches have it. (I don't follow baseball, so I don't know how true that is.) Once the environment changes and standards for effort are raised, the Red Queen's race begins again, and optimizing the allocation of effort yields huge returns since everyone's team is putting forth close to peak effort.

If supporting skills seem to interfere with the core skills due to conflicting priorities or methods, to me that just means that those who can balance the two and make them work together will see even better results.

Thoughts?

Comment by ExCeph on Leave beliefs that don't constrain experience alone · 2017-11-03T01:37:53.127Z · LW · GW

Apologies in advance for the long response. Hopefully this will be worth the read.

I greatly appreciate your post because it challenges some of my own beliefs and made me reassess them. I agree that a person can get by in this world with bad epistemological hygiene. However, humans are animals that evolved to adapt behaviors for many environments. Getting by is cheap. The problem with poor epistemological hygiene (EH) isn't that a person can't get by. As I see it, there are three issues:

  1. The worse your EH is, the more your success is based on luck. If you're right, it's by accident, or you learn the hard way.
  2. Bad EH means bad predictions and therefore bad future-proofing. The world changes, and people are often clumsy to adapt, if they adapt at all.
  3. Though individual humans can survive, poor EH across all humanity leads to poor decisions that are made collectively but not deliberately (e.g. the tragedy of the commons, or cycles of political revolution), which hurt us all in ways that are hard to measure, either because they are gradual or because they require comparing to a counterfactual state of the world.

Most animal populations can survive with trial and error, natural selection, and no ability to destroy the world. I would prefer the standards for humanity to be higher. Thoughts?

Anecdote follows:

Coincidentally, I had a conversation at work today that culminated in the concept you describe as "less-examined beliefs that don't have to pay rent for you to happily contribute in the way you like". The people with whom I was speaking were successful members of society, so they fell into the uncanny valley for me when they started pushing the idea that everyone has their own truth. I'm not sure if it's better or worse that they didn't quite literally believe that, but didn't know how to better articulate what they actually believed.

Ultimately what I got them to agree to (I think) is that although everyone has irrefutable experiences, what they infer about the structure of the world from those experiences may be testably wrong. Even so, I personally will have no strong beliefs about the truth value of their hypothesis if I have too much conflicting evidence, and I won't want to put much effort into testing the hypothesis unless my plans depend on it being true or false. It's murkier with normative beliefs, because when those become relevant, it's because they conflict with each other in an irreconcilable way, and it's much more difficult if not impossible to provide evidence that leads people to change their basic normative beliefs.

That said, I suspect that if we're not making plans that falsify each other's beliefs and conflict with each other's sense of right and wrong, we're probably stagnating as a civilization. That ties in with your idea of beliefs not being examined because they don't have to be. The great problem is that people aren't putting their beliefs in situations where they will either succeed or fail. To me, that's the true spirit of science.

For example, my objection to people believing in poltergeists (which is how the conversation started) isn't that they believe it. It's that they don't see the vast implications of a) transhumanism via ghost transformation, b) undetectable spies, c) remote projection of physical force, or d) possibly unlimited energy. They live as if none of those possibilities exist, which to me is a worse indictment of their beliefs than a lack of evidence, and an indictment of their education even if they're right about the ghosts. If people traced the implications of their beliefs, they could act more consistently on them and more easily falsify them. I strongly suspect that cultivating this habit would yield benefits on the individual and population level.