Posts

Have you lost your purpose? 2019-05-30T22:35:38.295Z · score: 29 (15 votes)
What is a good moment to start writing? 2019-05-29T21:47:50.454Z · score: 24 (10 votes)
What features of people do you know of that might predict academic success? 2019-05-10T18:16:59.922Z · score: 16 (2 votes)
Experimental Open Thread April 2019: Socratic method 2019-04-01T01:29:00.664Z · score: 31 (11 votes)
Open Thread April 2019 2019-04-01T01:14:08.567Z · score: 30 (5 votes)
RAISE is launching their MVP 2019-02-26T11:45:53.647Z · score: 85 (28 votes)
What makes a good culture? 2019-02-05T13:31:57.792Z · score: 30 (10 votes)
The housekeeper 2018-12-03T20:01:57.618Z · score: 26 (16 votes)
We can all be high status 2018-10-10T16:54:19.047Z · score: 61 (28 votes)
Osmosis learning: a crucial consideration for the craft 2018-07-10T15:40:12.193Z · score: 27 (8 votes)
Open Thread July 2018 2018-07-10T14:51:12.351Z · score: 11 (4 votes)
RAISE is looking for full-time content developers 2018-07-09T17:01:38.401Z · score: 25 (7 votes)
A friendly reminder of the mission 2018-06-05T00:36:38.869Z · score: 16 (5 votes)
The league of Rationalists 2018-05-23T11:55:14.248Z · score: 26 (17 votes)
Fundamentals of Formalisation level 2: Basic Set Theory 2018-05-18T17:21:30.969Z · score: 24 (7 votes)
The reverse job 2018-05-13T13:55:35.573Z · score: 49 (16 votes)
Fundamentals of Formalisation level 1: Basic Logic 2018-05-04T13:01:50.998Z · score: 15 (7 votes)
Soon: a weekly AI Safety prerequisites module on LessWrong 2018-04-30T13:23:15.136Z · score: 83 (25 votes)
Give praise 2018-04-29T21:00:42.003Z · score: 143 (48 votes)
Raising funds to establish a new AI Safety charity 2018-03-17T00:09:30.843Z · score: 126 (42 votes)
Welcome to LW Netherlands [Edit With Your Details] 2018-03-16T10:13:08.360Z · score: 3 (1 votes)
Updates from Amsterdam 2017-12-16T22:14:48.767Z · score: 16 (5 votes)
Project proposal: Rationality Cookbook 2017-11-21T14:34:01.537Z · score: 55 (20 votes)
In defense of common-sense tribalism 2017-11-02T08:43:11.715Z · score: 17 (9 votes)
We need a better theory of happiness and suffering 2017-07-04T20:14:15.539Z · score: 1 (1 votes)
Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) 2017-06-15T18:55:06.306Z · score: 18 (13 votes)
Meetup : Meetup 17 - Comfort Zone Expansion (CoZE) 2017-05-10T09:59:48.318Z · score: 0 (1 votes)
Meetup : Meetup 15 (for real this time) - Trigger Action Planning 2017-04-12T12:52:40.546Z · score: 0 (1 votes)
Meetup : Meetup #15 - Trigger-Action Patterns 2017-03-29T00:55:10.507Z · score: 0 (1 votes)
Meetup : #13 - Focusing 2017-02-28T20:52:57.897Z · score: 0 (1 votes)
Meetup : Amsterdam - Meetup #12 - Friendly AI 2017-02-15T15:08:00.708Z · score: 0 (1 votes)
Meetup : #10: Making a difference 2017-01-14T17:29:32.945Z · score: 0 (1 votes)
Meetup : Meetup #9 - 2017 is prime 2017-01-01T16:06:08.061Z · score: 0 (1 votes)
Meetup : Meetup #8 - Reversed stupidity is not intelligence 2016-12-05T19:59:45.201Z · score: 0 (1 votes)
Meetup : Meetup #7 - Becoming Less Wrong 2016-11-21T16:50:21.143Z · score: 0 (1 votes)
Meetup : Meetup #6 - Still Amsterdam! 2016-11-08T18:02:59.597Z · score: 0 (1 votes)
Meetup : Meetup #5 - Amsterdam edition! 2016-10-28T17:55:15.286Z · score: 0 (1 votes)

Comments

Comment by toonalfrink on "Cheat to Win": Engineering Positive Social Feedback · 2019-08-05T15:37:14.494Z · score: 2 (1 votes) · LW · GW

I think the context of "haters don't matter" is one where you already decided to ignore them.

Comment by toonalfrink on The Tails Coming Apart As Metaphor For Life · 2019-08-05T13:48:26.984Z · score: 2 (1 votes) · LW · GW

To me this looks like a knockdown argument to any non-solipsistic morality. I really do just care about my qualia.

In some sense it's the same mistake the deontologists make, on a deeper level. A lot of their proposed rules strike me as heavily correlated with happiness. How were these rules ever generated? Whatever process generated them must have been a consequentialist process.

If deontology is just applied consequentialism, then maybe "happiness" is just applied "0x7fff5694dc58".

Your post still leaves the possibility that "quality of life", "positive emotions" or "meaningfulness" are objectively existing variables, and people differ only in their weighting. But I think the problem might be worse than that.

I think this makes the problem less bad, because if you get people to go up their chain of justification, they will all end up at the same point. I think that point is just predictions of the valence of their qualia.

Comment by toonalfrink on What is a good moment to start writing? · 2019-06-10T12:40:06.184Z · score: 2 (1 votes) · LW · GW

Because some people might already be at this level, and I worry that I'm just adding noise to their signal.

Maybe my question is this: given that, every year, I unexpectedly learn important considerations that discredit my old beliefs, how can I tell that my models are further along this process than those written by others?

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-05T16:30:36.403Z · score: 10 (4 votes) · LW · GW

Thank you. We're reflecting on this and will reach out to have a conversation soon.

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-05T15:50:16.654Z · score: 22 (6 votes) · LW · GW

Apologies, that was a knee-jerk reply. I take it back: we did disagree about something.

We're going to take some time to let all of this criticism sink in.

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-03T23:14:56.955Z · score: -7 (5 votes) · LW · GW

Looks like I didn't entirely succeed in explaining our plan.

My recommendation was very concretely "try to build an internal model of what really needs to happen for AI-risk to go well" and very much not "try to tell other people what really needs to happen for AI-risk", which is almost the exact opposite.

And that's also what we meant. The goal isn't to just give advice. The goal is to give useful and true advice, and this necessarily requires a model of what really needs to happen for AI risk.

We're not just going to spin up some interesting ideas. That's not the mindset. The mindset is to generate a robust model and take it from there, if we ever get that far.

We might be talking to people in the process, but as long as we are in the dark the emphasis will be on asking questions.

EDIT: this wasn't a thoughtful reply. I take it back. See Ruby's comments below.

Comment by toonalfrink on Can movement from Conflict to Mistake theorist be facilitated effectively? · 2019-06-03T17:47:21.880Z · score: 9 (6 votes) · LW · GW

To what extent are these just features of low-trust and high-trust environments?

Assuming that these dimensions are the same, here's my incomplete list of things that modulate trust levels:

  • Group size (smaller is easier to trust)
  • Average emotional intelligence
  • The quality of group memes that relate to emotions
  • Scarcity mindsets
  • The level of similarity of group members
  • Group identity

Some of these might screen off others. This model suggests that groups with healthy discourse tend to be small, affluent, emotionally mature and aligned.

Apart from social effects I get the impression that there are also psychological factors that modulate the tendency to trust, including:

  • Independence (of those that disagree)
  • Ambiguity tolerance
  • Agreeableness

_______

Different answer: one thing that I've seen work is to meet someone offline. People tend to be a lot more considerate after that.


Comment by toonalfrink on Egoism In Disguise · 2019-05-31T16:22:45.270Z · score: 3 (2 votes) · LW · GW

Still, I think this line of thinking is extremely important, because it means that people won't agree with any proposal for a morality that isn't useful for them, and keeping this in mind makes it a lot easier to propose moralities that will actually be adopted.

Comment by toonalfrink on Egoism In Disguise · 2019-05-31T16:20:42.090Z · score: 4 (2 votes) · LW · GW

I agree with your conclusion, but I feel like there's some nuance lacking, in three ways.

1.

It seems that indeed a lot of our moral reasoning is confused because we fall for some kind of moral essentialism, some idea that there is an objective morality that is more than just a cultural contract that was invented and refined by humans over the course of time.

But then you reinstall this essentialism into our "preferences", which you hold to be grounded in your feelings:

Human flourishing is good because the idea of human flourishing makes me smile. Kicking puppies is bad because it upsets me.

We recursively justify our values, and this recursion doesn't end at the boundary between consciousness and subconsciousness. Your feelings might appear to be your basic units of value, but they're not. This is obvious if you consider that our observations about the world often change our feelings.

Where does this chain of justifications end? I don't know, but I'm reasonably sure about two things:

1) The bedrock of our values is probably the same for any human being, and any difference between conscious values is due either to having seen different data or, more likely, to different people situationally benefitting more under different moralities. For example, a strong person will have "values" that are more accepting of competition, but that will change once they become weaker.

2) While a confused ethicist is wrong to be looking for a "true" (normative) morality, this is still better than not searching at all because you hold your conscious values to be basic. The best of both worlds is an ethicist who doesn't believe in normative morality, but still knows there is something to be learned about the source of our values.

2.

Considering our evolutionary origins, it seems very unlikely to me that we are completely selfish. It seems a lot more likely to me that the source of our values is some proxy of the survival and spread of our genes.

You're not the only one who carries your genes, and so your "selfish" preferences might not be completely selfish after all.

3.

We're a mashup of various subagents that want different things. I'd be surprised if they all had the same moral systems. Part of you might be reflective, aware of the valence of your experience, and actively (and selfishly) trying to increase it. Part of you will reflect your preferences for things that are very not-selfish. Other parts of you will just be naive deontologists.

Comment by toonalfrink on Have you lost your purpose? · 2019-05-31T15:26:09.036Z · score: 3 (2 votes) · LW · GW

Does that still lead to good outcomes though? I found that being motivated by my social role makes me a lot less effective because signalling and the actual thing come apart considerably. At least for the short term.

Comment by toonalfrink on Have you lost your purpose? · 2019-05-31T15:19:20.832Z · score: 4 (2 votes) · LW · GW

Sure.

It starts with the sense that, if something doesn't feel viscerally obvious, there is something left to be explained.

It's a bottom up process. I don't determine that images will convince me, then think of some images and play them in front of me so that they will hopefully convince my s1.

Instead I "become" my s1, take on a skeptical attitude, and ask myself what the fuss is all about.

Warning: the following might give you nightmares, if you're imaginative enough.

In this case, what happened was something like "okay, well I guess at some point we're going to have pretty strong optimizers. Fine. So what? Ah, I guess that's gonna mean we're going to have some machines that carry out commands for us. Like what? Like *picture of my living room magically tidying itself up*. Really? Well yeah I can see that happening. And I suppose this magical power can also be pretty surprising. Like *blurry picture/sense of surprising outcome*. Is this possible? Yeah like *memory of this kind of surprise*. What if this surprise was like 1000x stronger? Oh fuck..."

I guess the point is that convincing a person, or a subagent, is best explained as an internal decision to be convinced, not as an outside force of convincingness. So if you want to convince a part of you that feels like something outside of you, first you have to become it. You do this by sincerely endorsing whatever it has to say. Then, once that part feels like you, you (formerly it) decide to re-evaluate the thing that the other subagent (formerly you) disagreed with.

A bit like internal double crux, but instead of going back and forth you just do one round. Guess you could call it internal ITT.

Comment by toonalfrink on What is a good moment to start writing? · 2019-05-30T12:30:06.146Z · score: 2 (1 votes) · LW · GW
which confuses me because it seems like worrying about being embarrassed is worrying about impressions?

What I meant to say is that I can tell that my work isn't going to be very good by next year's standards, which are better standards because they're more informed.

Comment by toonalfrink on What features of people do you know of that might predict academic success? · 2019-05-10T18:58:31.576Z · score: 5 (2 votes) · LW · GW

I expect that there are metrics that screen off gender, so we can have better predictions and also circumvent the politics of doing anything related to gender.

Comment by toonalfrink on Literature Review: Distributed Teams · 2019-05-02T12:36:54.490Z · score: 11 (4 votes) · LW · GW

My impression is that money can only lower prestige if the amount is low relative to an anchor.

For example a $3000 prize would be high prestige if it's interpreted as an award, but low prestige if it's interpreted as a salary.

Comment by toonalfrink on Subagents, akrasia, and coherence in humans · 2019-04-24T17:15:24.200Z · score: 2 (1 votes) · LW · GW

Besides the subsystems making their own predictions, there might also be a meta-learning system keeping track of which other subsystems tend to make the most accurate predictions in each situation, giving extra weight to the bids of the subsystem which has tended to perform the best in that situation.

This is why I eat junk food, sans guilt. I don't want my central planning subagent to lose influence over unimportant details. Spend your weirdness points wisely.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-02T12:22:48.819Z · score: 3 (2 votes) · LW · GW

Yeah, otherwise you're not narrowing down one person's beliefs, but possibly going back and forth.

Comment by toonalfrink on Open Thread April 2019 · 2019-04-02T01:07:51.285Z · score: 3 (3 votes) · LW · GW

I'd expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-02T00:30:28.427Z · score: 5 (3 votes) · LW · GW

We have dorms that are purely dedicated to short term paying guests. This allows us to honestly tell people that they're always welcome. I think that's great.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-01T14:20:45.108Z · score: 10 (4 votes) · LW · GW

The funniest part is that I have a friend who really talks like this. We often listen intently to what he has to say, trying to parse any meaning from it, but it seems like he's either too far beyond us or just mad (I'd say both). Guess I have a new nickname for him.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:31:54.917Z · score: 2 (1 votes) · LW · GW

Well I don't really have a justification for it (ha), but I've noticed that explicit deductive thought rarely leads me to insights that turn out to be useful. Instead, I find that simply waiting for ideas to pop into my head makes the right ideas pop into my head.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:29:45.317Z · score: 2 (1 votes) · LW · GW

You saw that correctly. What I mean is too often, not always.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T01:35:08.217Z · score: 19 (8 votes) · LW · GW

Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.

Comment by toonalfrink on Unconscious Economies · 2019-03-26T14:31:19.851Z · score: 2 (3 votes) · LW · GW

I have a different hypothesis for the "people aren't like that!" response. It's about signalling high status in order to be given high status. If I claim that "people aren't bad where I come from", it signals that I'm somehow not used to being treated badly, which is evidence that I'm not treated badly, which is evidence that mechanisms for preventing bad behavior are already in place.

This isn't just a random idea, this is introspectively the reason that I keep insisting that people really aren't bad. It's a sermon. An invitation to good people and a threat to bad ones.

The one who gets bullied is the one that openly behaves like they're already being bullied.

Comment by toonalfrink on The reverse job · 2019-03-11T21:11:50.685Z · score: 3 (2 votes) · LW · GW

Did not, despite the quite large social reward this offer carries. Seems like people aren't interested.

Comment by toonalfrink on Ideas for an action coordination website · 2019-03-08T17:31:57.068Z · score: 3 (2 votes) · LW · GW

Going forward, I think there's a "revert to draft" feature! Or at least I noticed that option on the EA Forum.

Comment by toonalfrink on The Steering Problem · 2019-02-07T17:00:52.866Z · score: 1 (1 votes) · LW · GW

This part feels underdefined:

A program P is more useful than Hugh for X if, for every project using H to accomplish X, we can efficiently transform it into a new project which uses P to accomplish X. The new project shouldn’t be much more expensive---it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.

Why quantify over projects? Why is it not sufficient to say that P is as useful as H if it can also accomplish X?

Seems like you want to say that P can achieve X in more ways, but I fail to see why that is obviously relevant. What is even a project?

Or is this some kind of built in measure to prevent side effects, by making P achieve X in a humanlike way? Still doesn't feel obvious enough.

Comment by toonalfrink on We can all be high status · 2018-10-11T21:45:12.062Z · score: 3 (4 votes) · LW · GW

Honestly, I'm hardly solving this for myself. Just trying to shape the community in such a way that others are doing a bit better. I'd expect a lot of good to come from that. So let's not get into the frame of emotionally supporting me. That's not the outcome I'm looking for.

  • What does influence on the social environment look like to you?

It's fuzzy, but it means not being left in the dark when you're in need in some way. People maybe checking in if you've been feeling bad. People paying attention to your opinion when you think there's something that needs to change, and actually changing their behavior accordingly if they find themselves agreeing.

I think a key concept is leverage.

I suspect major progress would be made if someone managed to define this better. I think it's the Hamming problem of this issue.

  • I notice you don't talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?

That's a bit of a broad question. Not sure what you're looking for. The project in question is this one. It's moving forward, but quite a bit slower than anticipated.

  • Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?

Except for organisational overhead, they're relatively robust. Been running for a few months now, and this one guy has kept showing up, so that's kept it going.

Comment by toonalfrink on We can all be high status · 2018-10-11T21:32:54.166Z · score: 4 (4 votes) · LW · GW

There's a possibility for corruption here, as I briefly mentioned, if people get so deprived that they will sacrifice their other needs or values for the sake of status alone.

I considered that to be obvious in writing this. I'm not necessarily talking about the problem of getting status regardless of everything else. I'm also not talking about how to get status as an individual. I'm rather talking about getting the whole community a sense of status while keeping our other values intact.

"Focus on creating value" might be a great individual solution if you're talented enough. People recognize you're not goodharting as much and they're promoting you accordingly. But it doesn't help everyone. It doesn't scale. If it works for you that just means you've been able to win these competitions so far. Good for you.

As for the collective version: judging from the fact that we've made some meaningful progress with this at LW Netherlands, there's clearly more traction to be gained.

Comment by toonalfrink on We can all be high status · 2018-10-10T18:07:54.100Z · score: 9 (5 votes) · LW · GW

Yes, yes. All of this.

On the other hand, it also means that there's another sense in which "we can all be high-status": within our respective local communities. I'm curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.

This is what we've built with LessWrong Netherlands. We call it the Home Bayes and it's a group of 15ish people with tight bonds and formal membership. It works like a charm.

On a broader level, one actionable idea I've been thinking about is to talk less about existential risk being "talent constrained", so that people who can't get full-time jobs in the field don't feel like they're not talented. A more accurate term in my eyes is "field-building constrained".

I'm glad someone else had this idea.

Coming from my own startup with plenty of talent around but so far not a lot of funding, I think the problem isn't initiative. It's getting the funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest-impact careers: the money is there, but given the CEA assumption that a random cause has 0 expected value, they have to single out the good ones, and that's happening so slowly that a lot of ideas are stranded before they even get "whitelisted".

Comment by toonalfrink on Things I Learned From Working With A Marketing Advisor · 2018-10-10T18:06:22.727Z · score: 5 (1 votes) · LW · GW

Offtopic:

I suspect there may actually be a function to that

Yep. Let's be wary of hubris. Let's not dismiss things we don't fully understand.

Well done.

Comment by toonalfrink on We can all be high status · 2018-10-10T17:45:58.871Z · score: -1 (4 votes) · LW · GW

What do you mean by ego?

Comment by toonalfrink on We can all be high status · 2018-10-10T17:40:14.557Z · score: 2 (3 votes) · LW · GW

The definition is debated, but most people in EA agree it's about utilitarianism, which is essentially just counting up the happiness of everyone together, including yourself. There are different versions of it, but as far as I know none of them ignore your own happiness.

So buying yourself an ice cream may not be "altruistic" in the common sense, but it is utilitarian.

For a community, organising itself as a hierarchy might be utilitarian when, despite the suffering it may cause, it resolves more suffering outside of the community than it causes. This is probably true to some extent because hierarchies might cause a community to get more done, with the smartest people making the decisions.

Comment by toonalfrink on The Tails Coming Apart As Metaphor For Life · 2018-09-29T18:29:21.358Z · score: 24 (7 votes) · LW · GW

Lately when I'm confronted with extreme thought experiments that are repugnant on both sides, my answer has been "mu". No I can't give a good answer, and I'm skeptical that anyone can.

Balboa Park to West Oakland is our established world. We have been carefully leaning into its edge, slowly crafting extensions of our established moral code, adding bits to it and refactoring old parts to make it consistent with the new stuff.

It's been a mythical effort. People above our level have spent their 1000-year-long lifetimes mulling over their humble little additions to the gigantic established machine that is our morality.

And this machine has created Mediocristan. A predictable world, with some predictable features, within which there is always a moral choice available. Without these features our moral programming would be completely useless. We can behave morally precisely because the cases in which there is no moral answer don't happen so much.

So please, stop asking me whether I'd kill myself to save 1000 babies from 1000 years of torture. Both outcomes are repugnant and the only good answer I have is "get out of Extremistan".

The real morality is to steer the world towards a place where we don't need morality. Extend the borders of Mediocristan to cover a wider set of situations. Bolster it internally so that the intelligence required for a moral choice becomes lower - allowing more people to make it.

No morality is world-independent. If you think you have a good answer to morality, you have to provide it with a description of the worlds in which it works, and a way to make sure we stay within those bounds.

Comment by toonalfrink on Letting Go III: Unilateral or GTFO · 2018-07-10T18:07:59.687Z · score: 3 (2 votes) · LW · GW

In our WEIRD culture, unilateral is probably better. But it also reinforces that culture, and I have my qualms with it. I think we're choosing rabbit in a game of stag. You're essentially advocating for rabbit (which may or may not be a good thing).

In a highly individualistic environment you can't work things out *as a community* because there aren't any proper coherent communities, and people aren't going to sync their highly asynchronous lives with yours.

In a highly collectivist environment you can work things out alone, but it's not as effective as moving in a coordinated fashion because you actually do have that strictly superior option available to you.

I believe the latter has more upside potential, was the default in our ancestral environment, and has the ability to resolve equilibria of defection. The former is more robust because it's resistant to entropic decay, scales beyond Dunbar's number, and doesn't rely on good coordinators.

So I would say "unilateral or GTFO" is a bit too cynical. I'd say "be aware of which options (unilateral or coordinated) are available to you". In a low-trust corporate environment it's certainly unilateral. In a high-trust community it is probably coordinated, and let's keep it that way.

Comment by toonalfrink on Context Windows: A Model of Unproductive Disagreement · 2018-07-10T17:36:17.356Z · score: 9 (3 votes) · LW · GW

IMO this is a disagreement of topic, not a disagreement of style. Klein is answering the question "what social truth is convenient?" and Harris is answering the question "what natural truth is accurate?". Seems like simply another failure of proper operationalisation.

Comment by toonalfrink on Fundamentals of Formalisation level 1: Basic Logic · 2018-07-03T12:39:55.126Z · score: 3 (3 votes) · LW · GW

Thank you for your criticism. We need more of that.

I am not aiming to get a formal diploma here, and I don't think you plan on awarding me any.

A pipeline has 2 purposes: training people and identifying good students. We want to do the latter as much as the former. Not just for the sake of the institutions we ultimately wish to recommend candidates to, but also for the sake of the candidates that want to know whether they are up to the task. We recently did a poll on Facebook asking "what seems to be your biggest bottleneck to becoming a researcher" and "I'm not sure I'm talented enough" was the most popular option by far (doubling the next one).

I agree that it looks silly right now because we're a tiny startup that uploaded 2 videos and a few guides to some textbooks, and it will probably be this small for at least a year to come. You got me to consider using something more humble in the meantime. I'll bring it up in our next meeting.

Comment by toonalfrink on Describing LessWrong in one paragraph · 2018-06-10T14:20:48.771Z · score: 5 (1 votes) · LW · GW

LessWrong is a movement that seriously tries to better the world by a significant margin, not shying away from the most unconventional strategies. Most notably, we believe in the prime importance of securing AI Safety, and we subscribe to the values of transhumanism. Knowing that nature is not a fair enemy, we put in a great effort to grow as individuals and as a community, hoping to gather enough strength to live up to the task. We do this in various ways, applying epistemic standards at least as rigorous as those of science, thinking hard about recent advances in philosophy and how to put their lessons into practice, while keeping an open mind to the benefits of subjective wisdom like spirituality and our intuitions.

Comment by toonalfrink on [deleted post] 2018-06-05T00:22:14.557Z

Would you share your model? My intuition is that there are no topics or opinions that should be shunned, because if tolerating a topic leads to bad outcomes, then you just have bad epistemics. I.e. it's a band-aid solution for your average conflict-theorist internet community, one that I think the thoroughly mistake-theorist LW doesn't need.

There would be honor in it if we could handle this.

Comment by toonalfrink on The lesswrong slack - an introduction to our regulars · 2018-06-05T00:08:46.152Z · score: 6 (2 votes) · LW · GW

Now I feel bad for going quiet. Still love you guys!

Comment by toonalfrink on [deleted post] 2018-05-28T13:13:03.135Z

Appreciate your attempt to address a touchy subject. Do keep in mind that epistemic humility applies tenfold here. The subject is littered with blindspots and motivated reasoning, and I haven't come across anyone with a remotely satisfying answer yet.

And it’s never enough; their appetite is endless.

That's an assumption, and I think it's wrong. I think apple seekers are satisficers, like everyone else. I, for one, don't suffer from the brandishing. Got access to enough apples.

My model is that it's a problem of inequality. You see, apple holders get a large part of their status from which apple eater they associate with. Now when it comes to status, one naturally wants to be in the upper regions:

Imagine a world where, every few years, 90% of its highest-status inhabitants are selected to replace the remaining 10%. If you’d want to remain in this world indefinitely, how much status would you need? Indeed, from the perspective of our genes, only the maximum is good enough.

Over the decades, inequality in apple eaters has greatly increased (another assumption). Compared to decades before, it's a lot harder to find an apple eater that is truly on top of their shit. And so, apple holders are more reluctant to share their apples with someone of comparative (sexual) status, especially in the lower regions.

But it could be something else entirely. In any case, brandishing doesn't have to be a problem for apple eaters.

Comment by toonalfrink on Sleeping Beauty Resolved? · 2018-05-28T10:25:06.759Z · score: 5 (1 votes) · LW · GW

As it stands now, I can't accept this solution, simply because it doesn't inform the right decision.

Imagine you were Beauty and q(y) was 1, and you were offered that bet. What odds would you take?

Our models exist to serve our actions. There is no such thing as a good model that informs the wrong action. Probability must add up to winning.

Or am I interpreting this wrong, and is there some practical reason why taking 1/2 odds actually does win in the q(y) = 1 case?

Comment by toonalfrink on Questions about the Usefulness of Self-Importance · 2018-05-27T22:15:59.347Z · score: 7 (3 votes) · LW · GW

Hi Leo,

/Why/ am I trying to achieve that goal. I struggled with this idea of a "root goal" the primary function of my life that would give order to all other subgoals and I eventually settled on "to be a good human being", as unsatisfactory as that is, because I found no meaningful or fulfilling progress in existential questions of this nature.

Your root goal is not something to learn, it is something to decide. If nothing seems satisfactory, consider the possibility that you're in a dependency mindset, i.e. you're evaluating your goals according to the impression they would make or the praise they would solicit, instead of according to what you want. The fact that you come here looking for guidance is evidence for that (not saying it's bad).

If everyone was dumber than you, if your knowledge was more advanced than anyone else's, what change would you strive to manifest? What kind of slightly better parallel universe do you yearn for? Make it so.

I alternate between setting up a cozy life that I'm certain I could thrive in, (example: returning to my home town to teach) or committing my life to bettering a portion of the world larger than what's just in front of me at the cost of my own comfortability, or at least my sense of security.

Always be at your edge. There is no such thing as a cozy life. The balance point between overwhelm and boredom is where you'll find yourself most fulfilled. Here are a few interesting data points that I've come across lately:

  • Dopamine encodes a mismatch signal between data and prediction
  • Extraversion follows a u-shaped curve with increasing dopamine levels
  • Extraversion is strongly correlated to happiness

I take this as neurological evidence for Jordan Peterson's (and other spiritual people's) idea that the optimal place to be is on the edge of order and chaos. Bonus evidence: flow experiences occur when you're challenged exactly enough, but not too much.

I obviously think that I would do less harm if I found myself in a great position, but I suspect equally that incompetence could cause harm and I am not yet certain my competence is sufficient.

Not competence, integrity. There are different reasons to aspire to power. Mao strikes me as a person who was motivated by the wrong needs. He wanted power to placate his ego. There are other reasons, like love and beauty. But those are screened off by lesser needs like safety, so first make sure you have your needs met, then aspire to influence. Only then will you use it for good.

I fear that if I commit to a life of trying to obtain a great position I may cause myself unnecessary grief and ultimately do less good than if I merely did what I could with what's in front of me.

Power should come as an entirely unanticipated consequence of trying to attain something more pure. It's all about the incentive. So dig deep in your psyche and try to figure it out: why power? If you think you're probably well-intentioned, think again. Since I don't know you, and you strike me as dependent on approval, I give it a 1% chance that you're truly altruistic.

Please don't take that as an insult. I have the same prior for everyone else.

Also, Hello everyone. My name's Leo and I'm new here.

You're most welcome!

Comment by toonalfrink on The league of Rationalists · 2018-05-25T19:41:56.205Z · score: 5 (1 votes) · LW · GW
Do you believe that felt lack of status is completely uncorrelated with others' willingness to cooperate?

I think it's strongly correlated, and causally bidirectional: higher status leads to better performance (for mental health reasons) leads to higher status.

The way I see it, high status is the baseline condition, and lack of status is a malfunction that makes one function below their capacity, in the same way that having to go to the toilet does.

would it be easier to just literally wirehead? Electricity to the part of the brain that seeks status?

If we could, yes. How many years until it's commercially available?

Comment by toonalfrink on Of Gender and Rationality · 2018-05-25T18:49:48.226Z · score: 9 (2 votes) · LW · GW

This is what we're doing at LW Netherlands. The "partner" community we've chosen is the spirituality community, which strikes me as remarkably complementary to LW in multiple ways. We're going to weekly ecstatic dance parties, some of us are signing up for zen retreats (which is a bit more masculine), and there's the potential that some of us will try tantra at some point.

And it's really gold for learning rationality, because when it comes to lines of attack on becoming smarter, spirituality couldn't be more different from, yet as potent as, our strategy.

Bonus is that their gender ratio is pretty much the inverse of ours.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:44:47.198Z · score: 5 (1 votes) · LW · GW

Surely if you go down to the nuts and bolts of it, you get a graph with a "willingness to help" function from People x People -> R. And then you could break this down even further adding "Time" and "Modality" to the domain, and all that...

But what I'm interested in is increasing the feeling of status, or to be more precise, minimizing the felt lack of status. I do expect those variables to be scalars. How reality maps to these scalars is an interesting question.

status is a side-effect (or maybe a cognitive summary) of much more complicated interpersonal feelings and habits.

I think it mostly boils down to a few simple acts that are all proxies of this "willingness to help" thing.

As a general principle, when altering the perception of Thing, I believe it's best to just alter Thing. In our case that's altering the actual willingness to help each other.

I don't think this is all that much more likely for any individual to choose than your recommendation, but it has the advantage that it's unilateral and doesn't require anyone else to cooperate.

This looks like editing your utility function instead of satisfying it, which I think is a lot harder. Surely there is some low-hanging fruit in interpreting things differently to make yourself feel happier, but afaict we all learn this as kids and then we get stuck in the failure mode of assuming that it's always about reinterpretation. That's what happened to me, anyway.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:24:55.070Z · score: 5 (1 votes) · LW · GW
Otherwise you always have additional avenues along which to gain status since every person is making a choice about how much status they consider a person to have even if it is heavily influenced by information they get from other people about how much status they think people should have.

True. But you still seem to implicitly assume people are maximizers, i.e. that they will capitalize on these opportunities.

But okay, let's grant that there will be differences. What if we ensured a minimum? Would that be enough?

Here's one data point: I no longer feel a strong longing for status, implying that there is indeed a threshold beyond which people are mostly fine. This contradicts my assumption that people want the maximum. Maybe they just want to reach an absolute threshold of social capital.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:08:41.884Z · score: 5 (1 votes) · LW · GW

That's a horribly depraved thing to do. I'm not even accounting for environments that are that low-trust. Those just can't work. It's a non-starter. If this is really the kind of thing you're dealing with, and I am the exception as opposed to you, we should think about increasing trust in other ways.

Or (excuse me) you should move out of the US.

Comment by toonalfrink on A Self-Respect Feedback Loop · 2018-05-20T20:12:52.934Z · score: 11 (2 votes) · LW · GW

Uh, well I don’t know you, but it seems unlikely that anyone would deny an argument just because its conclusion (vaguely) implies that you should be regarded with respect.

Comment by toonalfrink on Decoupling vs Contextualising Norms · 2018-05-16T10:48:28.388Z · score: 10 (2 votes) · LW · GW

I think this article is a considerable step forward, but it could benefit from some examples. I think I have a pretty good idea what this is about (and share the horror of being called out by a low-decoupler for some kind of -ism), but still.

Comment by toonalfrink on Affordance Widths · 2018-05-14T18:18:19.021Z · score: 15 (4 votes) · LW · GW

It seems to make sense to make this graph 2-dimensional, with axes {A} and {B}, and plot Adam through Edgar as points on it. Clearly this isn't about {B}, and avoiding {X} and {Y} through adjusting {B} is hopeless. Clearly this is about {A}, and the right course of action is to figure out what {A} is.