Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-02T12:22:48.819Z · score: 3 (2 votes) · LW · GW

Yeah, otherwise you're not narrowing down one person's beliefs, but possibly going back and forth.

Comment by toonalfrink on Open Thread April 2019 · 2019-04-02T01:07:51.285Z · score: 3 (3 votes) · LW · GW

I'd expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-02T00:30:28.427Z · score: 5 (3 votes) · LW · GW

We have dorms that are purely dedicated to short term paying guests. This allows us to honestly tell people that they're always welcome. I think that's great.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-01T14:20:45.108Z · score: 10 (4 votes) · LW · GW

The funniest part is that I have a friend who really talks like this. We often listen intently to what he has to say, trying to parse any meaning from it, but it seems like he's either too far beyond us or just mad (I'd say both). Guess I have a new nickname for him.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:31:54.917Z · score: 2 (1 votes) · LW · GW

Well, I don't really have a justification for it (ha), but I've noticed that explicit deductive thought rarely leads me to insights that turn out to be useful. Instead, I find that simply waiting for ideas to pop into my head makes the right ideas pop into my head.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:29:45.317Z · score: 2 (1 votes) · LW · GW

You saw that correctly. What I mean is too often, not always.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T01:35:08.217Z · score: 19 (8 votes) · LW · GW

Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.

Experimental Open Thread April 2019: Socratic method

2019-04-01T01:29:00.664Z · score: 31 (11 votes)

Open Thread April 2019

2019-04-01T01:14:08.567Z · score: 14 (5 votes)
Comment by toonalfrink on Unconscious Economies · 2019-03-26T14:31:19.851Z · score: 2 (3 votes) · LW · GW

I have a different hypothesis for the "people aren't like that!" response. It's about signalling high status in order to be given high status. If I claim that "people aren't bad where I come from", it signals that I'm somehow not used to being treated badly, which is evidence that I'm not treated badly, which is evidence that mechanisms for preventing bad behavior are already in place.

This isn't just a random idea, this is introspectively the reason that I keep insisting that people really aren't bad. It's a sermon. An invitation to good people and a threat to bad ones.

The one who gets bullied is the one who openly behaves like they're already being bullied.

Comment by toonalfrink on The reverse job · 2019-03-11T21:11:50.685Z · score: 2 (1 votes) · LW · GW

Did not, despite this offer, which carries quite a large social reward. Seems like people aren't interested.

Comment by toonalfrink on Ideas for an action coordination website · 2019-03-08T17:31:57.068Z · score: 3 (2 votes) · LW · GW

Going forward, I think there's a "revert to draft" feature! Or at least I noticed that option on the EA forum.

RAISE is launching their MVP

2019-02-26T11:45:53.647Z · score: 85 (28 votes)
Comment by toonalfrink on The Steering Problem · 2019-02-07T17:00:52.866Z · score: 1 (1 votes) · LW · GW

This part feels underdefined:

A program P is more useful than Hugh for X if, for every project using H to accomplish X, we can efficiently transform it into a new project which uses P to accomplish X. The new project shouldn’t be much more expensive---it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.

Why quantify over projects? Why is it not sufficient to say that P is as useful as H if it can also accomplish X?

Seems like you want to say that P can achieve X in more ways, but I fail to see why that is obviously relevant. What is even a project?

Or is this some kind of built in measure to prevent side effects, by making P achieve X in a humanlike way? Still doesn't feel obvious enough.

What makes a good culture?

2019-02-05T13:31:57.792Z · score: 30 (10 votes)

The housekeeper

2018-12-03T20:01:57.618Z · score: 26 (16 votes)
Comment by toonalfrink on We can all be high status · 2018-10-11T21:45:12.062Z · score: 0 (3 votes) · LW · GW

Honestly, I'm hardly solving this for myself. Just trying to shape the community in such a way that others are doing a bit better. I'd expect a lot of good to come from that. So let's not get into the frame of emotionally supporting me. That's not the outcome I'm looking for.

  • What does influence on the social environment look like to you?

It's fuzzy, but it means not being left in the dark when you're in need in some way. People maybe checking in when you've been feeling bad. People paying attention to your opinion when you think there's something that needs to change, and actually changing their behavior accordingly if they find themselves agreeing.

I think a key concept is leverage.

I suspect major progress would be made if someone managed to define this better. I think it's the Hamming problem of this issue.

  • I notice you don't talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?

That's a bit of a broad question. Not sure what you're looking for. The project in question is this one. It's moving forward, but quite a bit slower than anticipated.

  • Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?

Except for organisational overhead, they're relatively robust. Been running for a few months now, and this one guy has kept showing up, so that's kept it going.

Comment by toonalfrink on We can all be high status · 2018-10-11T21:32:54.166Z · score: 1 (3 votes) · LW · GW

There's a possibility for corruption here, as I briefly mentioned, if people get so deprived that they will sacrifice their other needs or values for the sake of status alone.

I considered that to be obvious in writing this. I'm not necessarily talking about the problem of getting status regardless of everything else. I'm also not talking about how to get status as an individual. I'm rather talking about getting the whole community a sense of status while keeping our other values intact.

"Focus on creating value" might be a great individual solution if you're talented enough. People recognize you're not Goodharting as much, and they promote you accordingly. But it doesn't help everyone. It doesn't scale. If it works for you, that just means you've been able to win these competitions so far. Good for you.

As for the collective version: judging from the fact that we've made some meaningful progress with this at LW Netherlands, there's clearly more traction to be gained.

Comment by toonalfrink on We can all be high status · 2018-10-10T18:07:54.100Z · score: 6 (4 votes) · LW · GW

Yes, yes. All of this.

On the other hand, it also means that there's another sense in which "we can all be high-status": within our respective local communities. I'm curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.

This is what we've built with LessWrong Netherlands. We call it the Home Bayes and it's a group of 15ish people with tight bonds and formal membership. It works like a charm.

On a broader level, one actionable idea I've been thinking about is to talk less about existential risk being "talent constrained", so that people who can't get full-time jobs in the field don't feel like they're not talented. A more accurate term in my eyes is "field-building constrained".

I'm glad someone else had this idea.

Coming from my own startup, with plenty of talent around but so far not a lot of funding, I think the problem isn't initiative. It's getting the funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest-impact careers: the money is there, but given the CEA assumption that a random cause has zero expected value, they have to single out the good ones, and that's happening so slowly that a lot of ideas run aground before they even get "whitelisted".

Comment by toonalfrink on Things I Learned From Working With A Marketing Advisor · 2018-10-10T18:06:22.727Z · score: 5 (1 votes) · LW · GW


I suspect there may actually be a function to that

Yep. Let's be wary of hubris. Let's not dismiss things we don't fully understand.

Well done.

Comment by toonalfrink on We can all be high status · 2018-10-10T17:45:58.871Z · score: -1 (4 votes) · LW · GW

What do you mean by ego?

Comment by toonalfrink on We can all be high status · 2018-10-10T17:40:14.557Z · score: 2 (3 votes) · LW · GW

The definition is debated, but most people in EA agree it's about utilitarianism, which is essentially just counting up the happiness of everyone together, including yourself. There are different versions of it, but as far as I know none of them ignore your own happiness.

So buying yourself an ice cream may not be "altruistic" in the common sense, but it is utilitarian.

As a community, organising yourself as a hierarchy might be utilitarian if, despite the suffering it causes internally, it resolves more suffering outside the community. This is probably true to some extent, because hierarchies might help a community get more done, with the smartest people making the decisions.

We can all be high status

2018-10-10T16:54:19.047Z · score: 61 (28 votes)
Comment by toonalfrink on The Tails Coming Apart As Metaphor For Life · 2018-09-29T18:29:21.358Z · score: 23 (6 votes) · LW · GW

Lately when I'm confronted with extreme thought experiments that are repugnant on both sides, my answer has been "mu". No I can't give a good answer, and I'm skeptical that anyone can.

Balboa Park to West Oakland is our established world. We have been carefully leaning into its edge, slowly crafting extensions of our established moral code, adding bits to it and refactoring old parts to make it consistent with the new stuff.

It's been a mythical effort. People above our level have spent their 1000-year-long lifetimes mulling over their humble little additions to the gigantic established machine that is our morality.

And this machine has created Mediocristan: a predictable world, with some predictable features, within which there is always a moral choice available. Without these features our moral programming would be completely useless. We can behave morally precisely because the cases in which there is no moral answer don't happen much.

So please, stop asking me whether I'd kill myself to save 1000 babies from 1000 years of torture. Both outcomes are repugnant and the only good answer I have is "get out of Extremistan".

The real morality is to steer the world towards a place where we don't need morality. Extend the borders of Mediocristan to cover a wider set of situations. Bolster it internally so that the intelligence required for a moral choice becomes lower, allowing more people to make it.

No morality is world-independent. If you think you have a good answer to morality, you have to provide it with a description of the worlds in which it works, and a way to make sure we stay within those bounds.

Comment by toonalfrink on Letting Go III: Unilateral or GTFO · 2018-07-10T18:07:59.687Z · score: 3 (2 votes) · LW · GW

In our WEIRD culture, unilateral is probably better. But it also reinforces that culture, and I have my qualms with it. I think we're choosing rabbit in a game of stag. You're essentially advocating for rabbit (which may or may not be a good thing).

In a highly individualistic environment you can't work things out *as a community* because there aren't any proper coherent communities, and people aren't going to sync their highly asynchronous lives with yours.

In a highly collectivist environment you can work things out alone, but it's not as effective as moving in a coordinated fashion because you actually do have that strictly superior option available to you.

I believe the latter has more upside potential, was the default in our ancestral environment, and has the ability to resolve equilibria of defection. The former is more robust because it's resistant to entropic decay, scales beyond Dunbar's number, and doesn't rely on good coordinators.

So I would say "unilateral or GTFO" is a bit too cynical. I'd say "be aware of which options (unilateral or coordinated) are available to you". In a low-trust corporate environment it's certainly unilateral. In a high-trust community it is probably coordinated, and let's keep it that way.

Comment by toonalfrink on Context Windows: A Model of Unproductive Disagreement · 2018-07-10T17:36:17.356Z · score: 9 (3 votes) · LW · GW

IMO this is a disagreement of topic, not a disagreement of style. Klein is answering the question "what social truth is convenient?" and Harris is answering the question "what natural truth is accurate?". Seems like simply another failure of proper operationalisation.

Osmosis learning: a crucial consideration for the craft

2018-07-10T15:40:12.193Z · score: 27 (8 votes)

Open Thread July 2018

2018-07-10T14:51:12.351Z · score: 11 (4 votes)

RAISE is looking for full-time content developers

2018-07-09T17:01:38.401Z · score: 25 (7 votes)
Comment by toonalfrink on Fundamentals of Formalisation level 1: Basic Logic · 2018-07-03T12:39:55.126Z · score: 3 (3 votes) · LW · GW

Thank you for your criticism. We need more of that.

I am not aiming to get a formal diploma here, and I don't think you plan on awarding me any.

A pipeline has 2 purposes: training people and identifying good students. We want to do the latter as much as the former. Not just for the sake of the institutions we ultimately wish to recommend candidates to, but also for the sake of the candidates that want to know whether they are up to the task. We recently did a poll on Facebook asking "what seems to be your biggest bottleneck to becoming a researcher" and "I'm not sure I'm talented enough" was the most popular option by far (doubling the next one).

I agree that it looks silly right now because we're a tiny startup that uploaded 2 videos and a few guides to some textbooks, and it will probably be this small for at least a year to come. You got me to consider using something more humble in the meantime. I'll bring it up in our next meeting.

Comment by toonalfrink on Describing LessWrong in one paragraph · 2018-06-10T14:20:48.771Z · score: 5 (1 votes) · LW · GW

LessWrong is a movement that seriously tries to better the world by a significant margin, not shying away from the most unconventional strategies. Most notably, we believe in the prime importance of securing AI Safety, and we subscribe to the values of transhumanism. Knowing that nature is not a fair enemy, we put in a great effort to grow as individuals and as a community, hoping to gather enough strength to live up to the task. We do this in various ways: applying epistemic standards at least as rigorous as those of science, thinking hard about recent advances in philosophy and how to put their lessons into practice, while keeping an open mind to the benefits of subjective wisdom like spirituality and our intuitions.

A friendly reminder of the mission

2018-06-05T00:36:38.869Z · score: 16 (5 votes)
Comment by toonalfrink on [deleted post] 2018-06-05T00:22:14.557Z

Would you share your model? My intuition is that there are no topics or opinions that should be shunned, because if tolerating a topic leads to bad outcomes, then you just have bad epistemics. That is, shunning is a band-aid solution for your average conflict-theorist internet community, one that I think the thoroughly mistake-theorist LW doesn't need.

There would be honor in it if we could handle this.

Comment by toonalfrink on The lesswrong slack - an introduction to our regulars · 2018-06-05T00:08:46.152Z · score: 6 (2 votes) · LW · GW

Now I feel bad for going quiet. Still love you guys!

Comment by toonalfrink on [deleted post] 2018-05-28T13:13:03.135Z

Appreciate your attempt to address a touchy subject. Do keep in mind that epistemic humility applies tenfold here. The subject is littered with blindspots and motivated reasoning, and I haven't come across anyone with a remotely satisfying answer yet.

And it’s never enough; their appetite is endless.

That's an assumption, and I think it's wrong. I think apple seekers are satisficers, like everyone else. I, for one, don't suffer from the brandishing. Got access to enough apples.

My model is that it's a problem of inequality. You see, apple holders get a large part of their status from which apple eater they associate with. Now when it comes to status, one naturally wants to be in the upper regions:

Imagine a world where, every few years, 90% of its highest-status inhabitants are selected to replace the remaining 10%. If you wanted to remain in this world indefinitely, how much status would you need? Indeed, from the perspective of our genes, only the maximum is good enough.

Over the decades, inequality in apple eaters has greatly increased (another assumption). Compared to decades before, it's a lot harder to find an apple eater that is truly on top of their shit. And so, apple holders are more reluctant to share their apples with someone of comparable (sexual) status, especially in the lower regions.

But it could be something else entirely. In any case, brandishing doesn't have to be a problem for apple eaters.

Comment by toonalfrink on Sleeping Beauty Resolved? · 2018-05-28T10:25:06.759Z · score: 5 (1 votes) · LW · GW

As it stands now, I can't accept this solution, simply because it doesn't inform the right decision.

Imagine you were Beauty and q(y) was 1, and you were offered that bet. What odds would you take?

Our models exist to serve our actions. There is no such thing as a good model that informs the wrong action. Probability must add up to winning.

Or am I interpreting this wrong, and is there some practical reason why taking 1/2 odds actually does win in the q(y) = 1 case?

Comment by toonalfrink on Questions about the Usefulness of Self-Importance · 2018-05-27T22:15:59.347Z · score: 7 (3 votes) · LW · GW

Hi Leo,

/Why/ am I trying to achieve that goal. I struggled with this idea of a "root goal" the primary function of my life that would give order to all other subgoals and I eventually settled on "to be a good human being", as unsatisfactory as that is, because I found no meaningful or fulfilling progress in existential questions of this nature.

Your root goal is not something to learn, it is something to decide. If nothing seems satisfactory, consider the possibility that you're in a dependency mindset, i.e. you're evaluating your goals according to the impression they would make or the praise they would solicit, instead of according to what you want. The fact that you come here looking for guidance is evidence for that (not saying it's bad).

If everyone were dumber than you, if your knowledge were more advanced than anyone else's, what change would you strive to manifest? What kind of slightly better parallel universe do you yearn for? Make it so.

I alternate between setting up a cozy life that I'm certain I could thrive in, (example: returning to my home town to teach) or committing my life to bettering a portion of the world larger than what's just in front of me at the cost of my own comfortability, or at least my sense of security.

Always be at your edge. There is no such thing as a cozy life. Finding a balance between overwhelm and boredom is where you'll find yourself most fulfilled. Here are a few interesting data points that I've come across lately:

  • Dopamine encodes a mismatch signal between data and prediction
  • Extraversion follows a u-shaped curve with increasing dopamine levels
  • Extraversion is strongly correlated to happiness

I take this as neurological evidence of Jordan Peterson's (and other spiritual people's) idea that the optimal place to be is on the edge of order and chaos. Bonus evidence: flow experiences occur when you're challenged exactly enough, but not too much.

I obviously think that I would do less harm if I found myself in a great position, but I suspect equally that incompetence could cause harm and I am not yet certain my competence is sufficient.

Not competence, integrity. There are different reasons to aspire to power. Mao strikes me as a person who was motivated by the wrong needs. He wanted power to placate his ego. There are other reasons, like love and beauty. But those are screened off by lesser needs like safety, so first make sure you have your needs met, then aspire to influence. Only then will you use it for good.

I fear that if I commit to a life of trying to obtain a great position I may cause myself unnecessary grief and ultimately do less good than if I merely did what I could with what's in front of me.

Power should come as an entirely unanticipated consequence of trying to attain something more pure. It's all about the incentive. So dig deep into your psyche and try to figure it out: why power? If you think you're probably well-intentioned, think again. Since I don't know you, and you strike me as dependent on approval, I give it a 1% chance that you're truly altruistic.

Please don't take that as an insult. I have the same prior for everyone else.

Also, Hello everyone. My name's Leo and I'm new here.

You're most welcome!

Comment by toonalfrink on The league of Rationalists · 2018-05-25T19:41:56.205Z · score: 5 (1 votes) · LW · GW
Do you believe that felt lack of status is completely uncorrelated with others' willingness to cooperate?

I think it's strongly correlated, and causally bidirectional: higher status leads to better performance (for mental health reasons) leads to higher status.

The way I see it, high status is the baseline condition, and lack of status is a malfunction that makes one function below their capacity, in the same way that having to go to the toilet does.

would it be easier to just literally wirehead? Electricity to the part of the brain that seeks status?

If we could, yes. How many years until it's commercially available?

Comment by toonalfrink on Of Gender and Rationality · 2018-05-25T18:49:48.226Z · score: 9 (2 votes) · LW · GW

This is what we're doing at LW Netherlands. The "partner" community we've chosen is the spirituality community, which strikes me as remarkably complementary to LW in multiple ways. We're going to weekly ecstatic dance parties, some of us are signing up for zen retreats (which is a bit more masculine), and there's the potential that some of us will try tantra at some point.

And it's really gold for learning rationality, because when it comes to lines of attack on becoming smarter, spirituality couldn't be more different from, yet as potent as, our strategy.

Bonus is that their gender ratio is pretty much the inverse of ours.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:44:47.198Z · score: 5 (1 votes) · LW · GW

Surely if you go down to the nuts and bolts of it, you get a graph with a "willingness to help" function from People x People -> R. And then you could break this down even further adding "Time" and "Modality" to the domain, and all that...

But what I'm interested in is increasing the feeling of status, or to be more precise, minimizing the felt lack of status. I do expect that variable to be a scalar. How reality maps to this scalar is an interesting question.

status is a side-effect (or maybe a cognitive summary) of much more complicated interpersonal feelings and habits.

I think it mostly boils down to a few simple acts that are all proxies of this "willingness to help" thing.

As a general principle, in altering the perception of Thing, I believe it's best to just alter Thing. In our case that's altering the actual willingness to help each other.

I don't think this is all that much more likely for any individual to choose than your recommendation, but it has the advantage that it's unilateral and doesn't require anyone else to cooperate.

This looks like editing your utility function instead of satisfying it, which I think is a lot harder. Surely there is some low-hanging fruit in interpreting things differently to make yourself feel happier, but afaict we all learn this as kids and then we get stuck in the failure mode of assuming that it's always about reinterpretation. That's what happened to me, anyway.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:24:55.070Z · score: 5 (1 votes) · LW · GW
Otherwise you always have additional avenues along which to gain status since every person is making a choice about how much status they consider a person to have even if it is heavily influenced by information they get from other people about how much status they think people should have.

True. But you still seem to implicitly assume people are maximizers, i.e. that they will capitalize on these opportunities.

But okay, let's grant that there will be differences. What if we ensured a minimum? Would that be enough?

Here's one data point: I no longer feel a strong longing for status, implying that there is indeed a threshold beyond which people are mostly fine. This contradicts my assumption that people want the maximum. Maybe they just want to reach an absolute threshold of social capital.

Comment by toonalfrink on The league of Rationalists · 2018-05-25T16:08:41.884Z · score: 5 (1 votes) · LW · GW

That's a horribly depraved thing to do. I'm not even accounting for environments that are that low-trust. Those just can't work. It's a non-starter. If this is really the kind of thing you're dealing with, and I am the exception as opposed to you, we should think about increasing trust in other ways.

Or (excuse me) you should move out of the US.

The league of Rationalists

2018-05-23T11:55:14.248Z · score: 26 (17 votes)
Comment by toonalfrink on A Self-Respect Feedback Loop · 2018-05-20T20:12:52.934Z · score: 11 (2 votes) · LW · GW

Uh, well, I don't know you, but it seems unlikely that anyone would deny an argument just because its conclusion (vaguely) implies that you should be regarded with respect.

Fundamentals of Formalisation level 2: Basic Set Theory

2018-05-18T17:21:30.969Z · score: 24 (7 votes)
Comment by toonalfrink on Decoupling vs Contextualising Norms · 2018-05-16T10:48:28.388Z · score: 10 (2 votes) · LW · GW

I think this article is a considerable step forward, but it could benefit from some examples. I think I have a pretty good idea what this is about (and share the horror of being called out by a low-decoupler for being some kind of ism), but still.

Comment by toonalfrink on Affordance Widths · 2018-05-14T18:18:19.021Z · score: 15 (4 votes) · LW · GW

It seems to make sense to make this graph 2-dimensional, with axes {A} and {B}, and plot Adam-Edgar as points on it. Clearly this isn't about {B}, and avoiding {X} and {Y} through adjusting {B} is hopeless. Clearly this is about {A}, and the right course of action is to figure out what {A} is.

Comment by toonalfrink on The reverse job · 2018-05-14T12:15:55.694Z · score: 5 (1 votes) · LW · GW
For most donors, you really do NOT want to give an accounting of the direct value you've provided, and instead want to focus on the indirect/long-term/global values that you're providing.

You're afraid that this will lead to perverse incentives, correct? Signaling instead of focusing on real impact?

Inviting them to working sessions, though, is confusing. Either they contribute value in these sessions, and you'd want them there even without the donation, or they detract and you're wasting your time and their money by having them there.

This looks like one of those cases where ignoring desiderata that are hard to measure leads to a skewed decision. The downsides are salient, the upsides are something fuzzy tribal romantic something. I wouldn't be surprised if the latter is sufficiently motivating for the counterfactual to be no donor rather than an uninvolved donor. Some people want inclusion. I want to include all these excellent people, but resources/logistics don't allow it. Their donating fixes that problem.

Probably the source of our disagreement is what an organisation is supposed to be. Here's a helpful distinction: the tribe and the hunting party. The hunting party is lean, maximizing, exclusive and goal-oriented. The tribe is broad, satisficing, inclusive and process-oriented.

It's easy to find a hunting party, but hard to find a tribe. It strikes me that I'm leveraging this need for a tribe to bolster my hunting party. You may have a point that this is a bad idea. On the other hand, there is evidence that hunting parties with tribal characteristics are more effective. This distinction might not be as useful as our culture suggests.

So I still think it's worth a try, though I appreciate your warning, and I'll keep it in mind.

Comment by toonalfrink on The reverse job · 2018-05-14T11:52:08.971Z · score: 8 (2 votes) · LW · GW

This is great data. Thanks!

Comment by toonalfrink on A Self-Respect Feedback Loop · 2018-05-13T23:11:06.793Z · score: 4 (1 votes) · LW · GW

What do you mean?

Comment by toonalfrink on A Self-Respect Feedback Loop · 2018-05-13T16:02:36.073Z · score: 4 (1 votes) · LW · GW

You shouldn’t do this for everyone

I'm not sure. In my experience, people tend to respond to exceptional treatment with exceptional treatment. That I so readily put trust in people is often perceived as a sign of high social capital, as if my prior were that they will respond in kind, which I'd only have if I were used to that kind of thing.

And the funny thing is, people actually do respond in kind, so it becomes a self-fulfilling prophecy.

One important requirement is that you apply the same kind of respect to yourself. The proper mindset seems to be “we are excellent and both of us deserve the best”, not “you are excellent” or “I am excellent”. Think win-win.

The reverse job

2018-05-13T13:55:35.573Z · score: 49 (16 votes)
Comment by toonalfrink on Looking for AI Safety Experts to Provide High Level Guidance for RAISE · 2018-05-07T13:05:29.648Z · score: 4 (1 votes) · LW · GW

You represented it well. We're currently doing 2 things at once. The prerequisites track was too good to pass up.

Fundamentals of Formalisation level 1: Basic Logic

2018-05-04T13:01:50.998Z · score: 15 (7 votes)
Comment by toonalfrink on Give praise · 2018-05-04T10:21:46.024Z · score: 10 (2 votes) · LW · GW

I thought generating examples would be trivial.

Someone cooks for another, instead of not doing that. Net social capital increased. Right?

Comment by toonalfrink on Give praise · 2018-05-02T22:37:38.637Z · score: 4 (1 votes) · LW · GW

Just an example though.

Comment by toonalfrink on Give praise · 2018-05-02T22:36:00.463Z · score: 4 (1 votes) · LW · GW


I don't think the former is free from Goodharting either. My sense of a good community is one where we get the character judgment out of the way from the start. So indeed "people evaluating one another on fuzzy personal criteria", in the sense of "hey, we like you for the things about you that you can't change even if you tried". So personal value is secured, meaning that the person can actually start to pursue the things they value truly for their own sake.

As I said in other places: if I got all of my needs out of the way, I would still work on AI Safety (which I value for its own sake), and I'd have a lot more cognitive bandwidth to allocate to it too. All of that bandwidth is now going to securing my worth, which is essentially Goodharting, since I'm incentivized to skew everything I do toward things that can easily be used for signaling.

An unsatisfied satisficer is a maximizer. I'm maximizing my status, and the useful work I'm doing is only a side effect. That doesn't seem like a good thing. Especially with a security mindset.

Comment by toonalfrink on Give praise · 2018-05-02T22:23:26.425Z · score: 4 (3 votes) · LW · GW

You seem to be coming from the premise that there is plenty of praise out there, just not in the right places. But the point of the post is that there just isn't enough praise out there. Gut-level appreciation, the thing I want people to have for me, isn't zero sum. They can have it for both building things and shiny blogs.

You also seem to assume that we should be using praise as an incentive. I'm on the fence about that. Maybe praise (or let's call it respect or personhood or appreciation here) should be the bottom level, so that people can actually do things for their own sake.

I, for one, actually want things to be built regardless of social incentives, and I imagine being socially "satiated" would give me a lot more resources to actually allocate to building things (especially things that are hard to signal with).

Reminds me of Project Hufflepuff. That's about getting people to do things that are good but hard to signal with, which is impossible if those people have a status deficit.

Comment by toonalfrink on Give praise · 2018-05-01T15:12:54.256Z · score: 4 (1 votes) · LW · GW

Huh. Okay. So it's not about recognizing your worth, but about recognizing the worth of your work, and associating that work with you. Does that capture it better?

But that's not far from recognizing that good work can be expected from you, which is the thing you actually game-theoretically want, right? That's what I mean by "good character", in the sense that the community is incentivized to support you and invest in you.

Comment by toonalfrink on Open Thread May 2018 · 2018-05-01T11:13:52.570Z · score: 8 (2 votes) · LW · GW

Maybe that’s the Berkeley optimum, but is it the optimum from an international perspective? My intuition is that three Dunbar-sized communities in different cities would still be better.

Comment by toonalfrink on Open Thread May 2018 · 2018-05-01T10:08:14.731Z · score: 8 (2 votes) · LW · GW

Quick thought: since the Berkeley community currently has about Dunbar's number of people, shouldn't we want this number to remain stable? Of course new rationalists want in on all the fun and all the impact, and currently it looks like Berkeley is their only option. Should we be focusing on a second hub that aims to rival Berkeley in its size and awesomeness?

What places should we be recommending? I'm thinking London or Berlin. Any other options?

Comment by toonalfrink on Give praise · 2018-05-01T02:05:18.686Z · score: 8 (2 votes) · LW · GW

Yes. This. Very much this. I get a sense that recognition, as opposed to positive judgment, is more durable. It’s not just that you did a good thing, it’s that the thing you did is a reflection of your good character, and we expect you to do more good things, and we want to keep you around and support you.

Comment by toonalfrink on Give praise · 2018-05-01T01:48:13.298Z · score: 6 (2 votes) · LW · GW

Wait, no. I don’t think social capital is zero-sum. People can spend more resources on other people. I can set aside 10 minutes to give someone advice, time I could have spent playing games instead (random example). There, net social capital increased.

Comment by toonalfrink on Give praise · 2018-04-30T17:37:40.921Z · score: 8 (2 votes) · LW · GW

Ah yes. So here we might have the connection to the first model I mentioned: status as the amount of resources you can expect to leverage if you need them. This is still different from relative influence in an important way, because it's about absolute influence, which is positive-sum, and plausibly the actual thing we want.

I experienced something similar a few years ago in my freshman year of uni. It was a time when I felt very worthy, but when I had a burnout nonetheless, none of that status amounted to any help. It made me a lot more suspicious and a lot more needy. I haven't recovered since.

So this whole thing seems to connect to the idea of Hufflepuff virtue, right? I hadn't realized these people were ahead of me.

Soon: a weekly AI Safety prerequisites module on LessWrong

2018-04-30T13:23:15.136Z · score: 83 (25 votes)

Give praise

2018-04-29T21:00:42.003Z · score: 140 (45 votes)

Raising funds to establish a new AI Safety charity

2018-03-17T00:09:30.843Z · score: 126 (42 votes)

Welcome to LW Netherlands [Edit With Your Details]

2018-03-16T10:13:08.360Z · score: 3 (1 votes)

Updates from Amsterdam

2017-12-16T22:14:48.767Z · score: 16 (5 votes)

Project proposal: Rationality Cookbook

2017-11-21T14:34:01.537Z · score: 55 (20 votes)

In defense of common-sense tribalism

2017-11-02T08:43:11.715Z · score: 17 (9 votes)

We need a better theory of happiness and suffering

2017-07-04T20:14:15.539Z · score: 1 (1 votes)

Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere)

2017-06-15T18:55:06.306Z · score: 18 (13 votes)

Meetup : Meetup 17 - Comfort Zone Expansion (CoZE)

2017-05-10T09:59:48.318Z · score: 0 (1 votes)

Meetup : Meetup 15 (for real this time) - Trigger Action Planning

2017-04-12T12:52:40.546Z · score: 0 (1 votes)

Meetup : Meetup #15 - Trigger-Action Patterns

2017-03-29T00:55:10.507Z · score: 0 (1 votes)

Meetup : #13 - Focusing

2017-02-28T20:52:57.897Z · score: 0 (1 votes)

Meetup : Amsterdam - Meetup #12 - Friendly AI

2017-02-15T15:08:00.708Z · score: 0 (1 votes)

Meetup : #10: Making a difference

2017-01-14T17:29:32.945Z · score: 0 (1 votes)

Meetup : Meetup #9 - 2017 is prime

2017-01-01T16:06:08.061Z · score: 0 (1 votes)

Meetup : Meetup #8 - Reversed stupidity is not intelligence

2016-12-05T19:59:45.201Z · score: 0 (1 votes)

Meetup : Meetup #7 - Becoming Less Wrong

2016-11-21T16:50:21.143Z · score: 0 (1 votes)

Meetup : Meetup #6 - Still Amsterdam!

2016-11-08T18:02:59.597Z · score: 0 (1 votes)

Meetup : Meetup #5 - Amsterdam edition!

2016-10-28T17:55:15.286Z · score: 0 (1 votes)