Meetup #42 - Ideological Turing Test 2019-12-09T07:36:06.469Z · score: 6 (1 votes)
RAISE post-mortem 2019-11-24T16:19:05.163Z · score: 140 (54 votes)
Meetup #41 - Double Crux 2019-11-24T16:09:53.722Z · score: 6 (1 votes)
Steelmanning social justice 2019-11-17T11:52:43.771Z · score: -3 (20 votes)
Meetup #40 - Street epistemology 2019-11-08T16:37:27.873Z · score: 6 (1 votes)
Toon Alfrink's sketchpad 2019-10-31T14:56:10.205Z · score: 6 (1 votes)
Meetup #39 - Rejection game 2019-10-28T20:36:54.704Z · score: 6 (1 votes)
The first step of rationality 2019-09-29T12:01:39.932Z · score: -2 (13 votes)
Is competition good? 2019-09-10T14:01:56.297Z · score: 8 (7 votes)
Have you lost your purpose? 2019-05-30T22:35:38.295Z · score: 29 (15 votes)
What is a good moment to start writing? 2019-05-29T21:47:50.454Z · score: 24 (10 votes)
What features of people do you know of that might predict academic success? 2019-05-10T18:16:59.922Z · score: 16 (2 votes)
Experimental Open Thread April 2019: Socratic method 2019-04-01T01:29:00.664Z · score: 31 (11 votes)
Open Thread April 2019 2019-04-01T01:14:08.567Z · score: 30 (5 votes)
RAISE is launching their MVP 2019-02-26T11:45:53.647Z · score: 85 (28 votes)
What makes a good culture? 2019-02-05T13:31:57.792Z · score: 30 (10 votes)
The housekeeper 2018-12-03T20:01:57.618Z · score: 26 (16 votes)
We can all be high status 2018-10-10T16:54:19.047Z · score: 61 (28 votes)
Osmosis learning: a crucial consideration for the craft 2018-07-10T15:40:12.193Z · score: 27 (8 votes)
Open Thread July 2018 2018-07-10T14:51:12.351Z · score: 11 (4 votes)
RAISE is looking for full-time content developers 2018-07-09T17:01:38.401Z · score: 25 (7 votes)
A friendly reminder of the mission 2018-06-05T00:36:38.869Z · score: 16 (5 votes)
The league of Rationalists 2018-05-23T11:55:14.248Z · score: 26 (17 votes)
Fundamentals of Formalisation level 2: Basic Set Theory 2018-05-18T17:21:30.969Z · score: 24 (7 votes)
The reverse job 2018-05-13T13:55:35.573Z · score: 49 (16 votes)
Fundamentals of Formalisation level 1: Basic Logic 2018-05-04T13:01:50.998Z · score: 15 (7 votes)
Soon: a weekly AI Safety prerequisites module on LessWrong 2018-04-30T13:23:15.136Z · score: 83 (25 votes)
Give praise 2018-04-29T21:00:42.003Z · score: 147 (52 votes)
Raising funds to establish a new AI Safety charity 2018-03-17T00:09:30.843Z · score: 126 (42 votes)
Welcome to LW Netherlands 2018-03-16T10:13:08.360Z · score: 3 (1 votes)
Updates from Amsterdam 2017-12-16T22:14:48.767Z · score: 16 (5 votes)
Project proposal: Rationality Cookbook 2017-11-21T14:34:01.537Z · score: 55 (20 votes)
In defense of common-sense tribalism 2017-11-02T08:43:11.715Z · score: 17 (9 votes)
We need a better theory of happiness and suffering 2017-07-04T20:14:15.539Z · score: 1 (1 votes)
Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere) 2017-06-15T18:55:06.306Z · score: 18 (13 votes)
Meetup : Meetup 17 - Comfort Zone Expansion (CoZE) 2017-05-10T09:59:48.318Z · score: 0 (1 votes)
Meetup : Meetup 15 (for real this time) - Trigger Action Planning 2017-04-12T12:52:40.546Z · score: 0 (1 votes)
Meetup : Meetup #15 - Trigger-Action Patterns 2017-03-29T00:55:10.507Z · score: 0 (1 votes)
Meetup : #13 - Focusing 2017-02-28T20:52:57.897Z · score: 0 (1 votes)
Meetup : Amsterdam - Meetup #12 - Friendly AI 2017-02-15T15:08:00.708Z · score: 0 (1 votes)
Meetup : #10: Making a difference 2017-01-14T17:29:32.945Z · score: 0 (1 votes)
Meetup : Meetup #9 - 2017 is prime 2017-01-01T16:06:08.061Z · score: 0 (1 votes)
Meetup : Meetup #8 - Reversed stupidity is not intelligence 2016-12-05T19:59:45.201Z · score: 0 (1 votes)
Meetup : Meetup #7 - Becoming Less Wrong 2016-11-21T16:50:21.143Z · score: 0 (1 votes)
Meetup : Meetup #6 - Still Amsterdam! 2016-11-08T18:02:59.597Z · score: 0 (1 votes)
Meetup : Meetup #5 - Amsterdam edition! 2016-10-28T17:55:15.286Z · score: 0 (1 votes)


Comment by toonalfrink on Raemon's Scratchpad · 2019-12-06T21:57:10.410Z · score: 6 (3 votes) · LW · GW

I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, the very least this will give us is legitimacy, and legitimacy can lead to many good things.

Comment by toonalfrink on Affordance Widths · 2019-11-30T19:15:37.824Z · score: 5 (2 votes) · LW · GW

To me, the most useful part of this post is that it introduces this idea that affordances are personal, i.e. some people are allowed to do X while others are not. I like to see this as part of the pervasive social machinery that is Omega.

I imagine people of a certain political background wanting to sneer at me: "why did it take someone in your in-group to tell you this?"

To which I admit that, indeed, I should have listened. But I suppose I didn't (enough), and now I did, so here we are with a post that made my worldview more empathetic. The bottom line is what matters.

Comment by toonalfrink on Transhumanism as Simplified Humanism · 2019-11-30T19:13:04.582Z · score: 4 (2 votes) · LW · GW

This post has been my go-to definition of Transhumanism ever since I first read it.

It's hard to put into words why I think it has so much merit. To me it just powerfully articulates something that I hold as self-evident, that I wish others would recognize as self-evident too.

Comment by toonalfrink on Tradition is Smarter Than You Are · 2019-11-30T19:11:08.706Z · score: 2 (1 votes) · LW · GW

To me, this is exactly what the LW community (and the broader progressive tribe surrounding it) needs to hear. This post, along with other developments of thought in the same direction, has caused a major shift in how I think about changing things.

The first quote is most important, and I find myself using it quite often if I'm met with a person my age (or even older) that dismisses a tradition as obviously dumb. Why do you think the tradition exists in the first place? If you don't know, how can you be so sure it doesn't serve some function?

If a piece of code you ran raised an error, it would be pretty dumb to "fix" your program by simply removing the statement that raises the error. The only reason Omega doesn't break down that easily is that naive people like us have kicked at it long enough to make it a little more robust. That doesn't mean kicking it makes it stronger.
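A minimal sketch of that anti-pattern (the function and its names are hypothetical, just for illustration): the guard clause plays the role of the "tradition", and deleting it silences the error while also destroying the invariant it protected.

```python
def withdraw(balance, amount):
    """Withdraw from a balance, with a guard whose purpose you may have forgotten."""
    if amount > balance:
        # The "tradition": this check exists for a reason,
        # even if nobody remembers what that reason is.
        raise ValueError("insufficient funds")
    return balance - amount

def withdraw_naive(balance, amount):
    # The naive "fix": delete the statement that raises the error.
    # The error is gone, and so is the invariant (no negative balances).
    return balance - amount
```

The naive version never raises, but `withdraw_naive(5, 10)` now happily returns `-5`.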

This direction of thinking caused a major shift in my career, from trying to hack away at the margins of society to becoming a part of the establishment. I'm writing this from my office at a governmental bank. This post is the reason that it's not some indie startup instead, and I don't regret it.

Comment by toonalfrink on The Tails Coming Apart As Metaphor For Life · 2019-11-30T19:07:28.546Z · score: 2 (1 votes) · LW · GW

I still find myself using this metaphor a lot in conversations. That's a good benchmark for usefulness.

Comment by toonalfrink on The Costly Coordination Mechanism of Common Knowledge · 2019-11-30T19:05:16.626Z · score: 2 (1 votes) · LW · GW

I could write a paragraph to explain some concept underlying a decision I made. Or there could be a word for the concept, in which case I can just use the word. But I can't use that word if it's not commonly understood.

The set of things that are common knowledge in a group of people is the epistemic starting point. Imagine you had to explain your niche ideas about AI without using any concepts invented after 1900. You'd be speaking to toddlers.

I needed "common knowledge" to be common knowledge. It is part of our skill of upgrading skills. It's at the core of group rationality.

Comment by toonalfrink on A voting theory primer for rationalists · 2019-11-30T18:59:01.412Z · score: 2 (1 votes) · LW · GW

This post introduced me to a whole new way of thinking about institutional/agency design. Its greatest merit, for me, was pointing out that this field exists. The subject is close to one of the core subjects of my thoughts: how to design institutions that align selfish behavior with altruistic outcomes at different hierarchical levels, from the international down through the cultural, national, communal, and relational, as far as the subagent level.

Comment by toonalfrink on Is daily caffeine consumption beneficial to productivity? · 2019-11-26T20:16:30.591Z · score: 4 (2 votes) · LW · GW

I don't think this is the right question to ask. Even if the net alertness gain from a cup of coffee is 0, it is still worth consuming in those moments when alertness is worth more, and abstaining in those moments when relaxation is worth more. Net alertness is not net EV.
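A toy calculation, with entirely made-up numbers, illustrates the point: a stimulant that merely redistributes alertness (zero net gain) can still have positive value if the boost lands in the hours that matter most.

```python
# Hypothetical numbers: coffee boosts alertness now and crashes it later
# (net change: 0), while morning work hours are worth more than evening ones.
alertness_delta = [+2, +1, -1, -2]  # change in alertness per hour, sums to 0
worth = [3, 3, 1, 1]                # value of one unit of alertness per hour

net_alertness = sum(alertness_delta)
net_value = sum(a * w for a, w in zip(alertness_delta, worth))

print(net_alertness)  # 0
print(net_value)      # 6: positive, despite zero net alertness
```

The weighted sum is what matters, which is why "does coffee raise net alertness?" is the wrong question.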

Comment by toonalfrink on RAISE post-mortem · 2019-11-24T20:40:46.317Z · score: 3 (2 votes) · LW · GW

Good catch, fixed it.

100x is obviously a figure of speech. I'd love to see someone do some research into this and publish the actual numbers.

Comment by toonalfrink on RAISE post-mortem · 2019-11-24T18:17:33.080Z · score: 24 (17 votes) · LW · GW

I suppose I was naive about the amount of work that goes into creating an online course. I had been a student assistant, and my professor would meet with me and the other assistants to plan the entirety of the course the day before it started. Of course this was different, because there was already a syllabus and the topic was well understood and well demarcated.

Also, I had visited Berkeley around that time, and word was out about a new prediction that the singularity was only 15 years ahead. I felt like I had no choice but to try and do something. Start moving mountains right there and then. Looking back, I suppose I was a little bit too impressed by the fad of the day.

A third reason is that, when we started out, the project was supposed to be relatively simple and limited in scope, not a full-blown charity, and every step toward making the thing bigger and drawing in more resources felt logical at the time.

But to be honest I'm not very good at knowing my true motivations.

Comment by toonalfrink on Steelmanning social justice · 2019-11-23T21:24:24.085Z · score: 2 (1 votes) · LW · GW

That's fair

Comment by toonalfrink on Meetup #40 - Street epistemology · 2019-11-16T11:27:35.981Z · score: 6 (1 votes) · LW · GW

Take note! We moved the location from the VU (in the south of Amsterdam) to the OBA (near Central Station), so plan your trip accordingly.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-09T20:00:47.037Z · score: 5 (2 votes) · LW · GW

Moved to a new country twice, they broke once.

But the real cause is that I didn't regard these items as part of my standard inventory, which I would have done if I had more of a preservation mindset.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-09T12:43:44.065Z · score: 10 (3 votes) · LW · GW

Here's a faulty psychological pattern that I recently resolved for myself. It's a big one.

I want to grow, so I seek out novelty. Try new things. For example, I might buy high-lumen light bulbs to improve my mood. So I buy them, feel somewhat better, celebrate the win, and move on.

Problem is, I've bought high-lumen bulbs three times in my life now, yet here I sit without any. So this pattern might happen all over again: I feel like upgrading my life, get the nice idea of buying light bulbs, buy them, celebrate my win, and move on.

So that's four life-upgrades, but did I grow four times? Obviously I only grew once: from not having high-lumen light bulbs to having them.

My instinct towards growth seems to think this:

growth = novelty

But in reality, it seems to be more like this:

growth = novelty - decay

which, since preservation is just the absence of decay, is equal to

growth = novelty + preservation

The tap I installed to put this preservation mindset into practice seems to be very helpful. It's as follows: when I wonder what to do, instead of starting over ("what seems like the best upgrade to add to my life?"), I first check whether I'm on track with the implementation of past good ideas ("what did my past self intend to do with this moment again?").

Funnily enough, so far the feeling I get from this mindset seems pretty similar to the feeling I get from meditation. And meditation can be seen as training yourself to put your attention on your past intentions too.

I think this one goes a lot deeper than what I've written here. I'll be revisiting this idea.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-07T13:21:08.382Z · score: 2 (1 votes) · LW · GW

He does influence my thinking

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-07T10:12:00.983Z · score: 10 (4 votes) · LW · GW

You may have heard of the poverty trap, where you have so little money that you're not able to spend any money on the things you need to make more. Being broke is an attractor state.

You may have heard of the loneliness trap. You haven't had much social interaction lately, which makes you feel bad and anxious. This anxiety makes it harder to engage in social interaction. Being lonely is an attractor state.

I think the latter is a close cousin of something that I'd like to call the irrelevance trap:

  • Lemma 1: having responsibilities is psychologically empowering. When others depend on your decisions, it is so much easier to make the right decision.
  • Lemma 2: being psychologically empowered makes it more likely for you to take on responsibility, and for others to give you responsibility, because you're more able to handle it.

I speculate that some forms of depression (the dopaminergic type) are best understood as irrelevance traps. I'm pretty sure that that was the case for me.

How do you escape such a trap? Well, you escape a loneliness trap by going against your intuition and showing up at a party. You escape an irrelevance trap by going against your intuition and taking on more responsibility than you feel you can handle.

Comment by toonalfrink on steve2152's Shortform · 2019-11-01T16:38:13.557Z · score: 4 (2 votes) · LW · GW

I'm skeptical that anyone with that level of responsibility and acumen has that kind of juvenile destructive mindset. Can you think of other explanations?

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-01T14:47:19.455Z · score: 12 (4 votes) · LW · GW

Today I had some insight in what social justice really seems to be trying to do. I'll use neurodiversity as an example because it's less likely to lead to bad-faith arguments.

Let's say you're in the (archetypical) position of a king. You're programming the rules that a group of people will live by, optimizing for the well-being of the group itself.

You're going to shape environments for people. For example, you might run a supermarket and decide what music it plays. Let's imagine you're trying to create the optimal environment.

The problem is, since more than one person is affected by your decision, and these people are not exactly the same, you will not be able to make the decision that is optimal for each of them. If just two of your customers have different favourite songs, you cannot play both. In some sense, making a decision over multiple people is inherently "aggressive".

But what you can do is reduce the amount of damage. My understanding is that this is usually done by splitting up the people as finely as possible. You might split up your audience into stereotypes: "men", "women", "youngsters", "elders", "autistic people", "neurotypicals", etc. You can then make a decision that would be okay for each of these stereotypes, giving your model a lower error rate.

The problem with this is that stereotypes are leaky generalizations. Some people might not conform to it. Your stereotypes might be mistaken. Alternatively, there might be some stereotypes that you're not aware of.

Take these two models. Model A knows that some people are highly sensitive to sound. Model B is not aware of this. If your model of people is B, you will play much louder music in the supermarket. As a result, people who are highly sensitive to sound will be unable to shop there. This is what social justice means by "oppression". You're not actively pushing anyone down, but you are doing so passively, because you haven't resolved your "ignorance".

So the social justice project, as I understand it, is to enrich our models of humans to make sure that as many of them as possible are taken into consideration. It is a project of group epistemics, above all.

That means that good social justice requires good epistemics. How do you collaboratively figure out the truth? The same laws apply as for any truthseeking: assume good faith, give some probability to being wrong, seek to understand the other's model first, don't discard your own doubts, and be proud and grateful when you change your mind.

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-01T14:17:57.704Z · score: 2 (1 votes) · LW · GW

Sure. In this case, "what it really is" means "what does it optimize for, and why did people invent it?"

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-11-01T14:14:23.083Z · score: 2 (1 votes) · LW · GW

Well, yes and no. The point is that these other people are merely means; optimally distributing your assets over time is an end that, in a sense, screens off the other people. Assuming people are really just optimizing for their own value, they might trade for it, but in the end their goal is their own allocation.

Comment by toonalfrink on bgaesop's Shortform · 2019-10-31T15:02:10.472Z · score: 2 (1 votes) · LW · GW

Meta-point: I noticed myself almost getting defensive, trying to refute your sentiment without even reading the paper.

I don't hold you responsible for that, but a less polemic tone would probably get you a better response at least from me.

Comment by toonalfrink on steve2152's Shortform · 2019-10-31T14:57:08.599Z · score: 3 (2 votes) · LW · GW

Why not?

Comment by toonalfrink on Toon Alfrink's sketchpad · 2019-10-31T14:56:13.507Z · score: 4 (4 votes) · LW · GW

I've been trying to figure out what finance really is.

It's not resource allocation between different people, because the intention is that these resources are paid back at some point.

It's rather resource re-allocation between different moments in one person's life.

Finance takes money from a time-slice of you that has it, and gives it to a time-slice of you that can best spend it.

Optimal finance means optimal allocation of money across your life, regardless of when you earn it.

Comment by toonalfrink on Give praise · 2019-09-29T12:13:24.733Z · score: 2 (1 votes) · LW · GW

To me this pattern-matches to something else. The thing we need isn't just interaction, but "authentic" interaction. Let me unpack that:

An interaction is authentic when there is no inhibition involved. You're not hiding your true feelings and/or thoughts. You're not playing a role, or putting on a mask. You're just allowing your system 1 to do the interaction all by itself.

Hardly any interaction is 100% authentic. Even if you don't feel like you're inhibiting yourself, you most likely are. Still, there's a very important difference between 90% and 10% inhibition. An interaction is only as valuable as its authenticity.

(On a side note, this is why I'm worried about today's tendency to forbid some forms of speech.)

Comment by toonalfrink on Is competition good? · 2019-09-12T18:01:31.815Z · score: 7 (3 votes) · LW · GW

Replace AMF with any organisation for which this statement becomes obviously true. If no such organisation exists, I'm curious.

Comment by toonalfrink on Is competition good? · 2019-09-12T17:55:55.645Z · score: 2 (1 votes) · LW · GW

Most likely that's where this intuition can be traced back to.

Comment by toonalfrink on Is competition good? · 2019-09-10T14:41:34.794Z · score: 3 (4 votes) · LW · GW

Right. You can make up a lot of just-so stories, but the one you came up with falls neatly into the categories I'm trying to explain.

In this case, being altruistic doesn't satisfy any need at all. There's no pressure, because you're not penalized in any way for running a shitty restaurant. That's why I make an exception for respect, in the sense that I claim respect can be a driving force behind altruism even when other pressures (like reduced income from being outperformed) are lacking.

I suppose any need ought to be considered when building incentive structures. Just using income will not always lead to the best outcome.

Comment by toonalfrink on "Cheat to Win": Engineering Positive Social Feedback · 2019-08-05T15:37:14.494Z · score: 2 (1 votes) · LW · GW

I think the context of "haters don't matter" is one where you already decided to ignore them.

Comment by toonalfrink on The Tails Coming Apart As Metaphor For Life · 2019-08-05T13:48:26.984Z · score: 2 (1 votes) · LW · GW

To me this looks like a knockdown argument to any non-solipsistic morality. I really do just care about my qualia.

In some sense it's the same mistake the deontologists make, on a deeper level. A lot of their proposed rules strike me as heavily correlated with happiness. How were these rules ever generated? Whatever process generated them must have been a consequentialist one.

If deontology is just applied consequentialism, then maybe "happiness" is just applied "0x7fff5694dc58".

Your post still leaves the possibility that "quality of life", "positive emotions" or "meaningfulness" are objectively existing variables, and people differ only in their weighting. But I think the problem might be worse than that.

I think this makes the problem less bad, because if you get people to go up their chain of justification, they will all end up at the same point. I think that point is just predictions of the valence of their qualia.

Comment by toonalfrink on What is a good moment to start writing? · 2019-06-10T12:40:06.184Z · score: 2 (1 votes) · LW · GW

Because some people might already be at this level, and I worry that I'm just adding noise to their signal.

Maybe my question is this: given that, every year, I unexpectedly learn important considerations that discredit my old beliefs, how can I tell that my models are further along this process than those written by others?

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-05T16:30:36.403Z · score: 10 (4 votes) · LW · GW

Thank you. We're reflecting on this and will reach out to have a conversation soon.

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-05T15:50:16.654Z · score: 22 (6 votes) · LW · GW

Apologies, that was a knee-jerk reply. I take it back: we did disagree about something.

We're going to take some time to let all of this criticism sink in.

Comment by toonalfrink on Our plan for 2019-2020: consulting for AI Safety education · 2019-06-03T23:14:56.955Z · score: -8 (6 votes) · LW · GW

Looks like I didn't entirely succeed in explaining our plan.

My recommendation was very concretely "try to build an internal model of what really needs to happen for AI-risk to go well" and very much not "try to tell other people what really needs to happen for AI-risk", which is almost the exact opposite.

And that's also what we meant. The goal isn't to just give advice. The goal is to give useful and true advice, and this necessarily requires a model of what really needs to happen for AI risk.

We're not just going to spin up some interesting ideas. That's not the mindset. The mindset is to generate a robust model and take it from there, if we ever get that far.

We might be talking to people in the process, but as long as we are in the dark the emphasis will be on asking questions.

EDIT: this wasn't a thoughtful reply. I take it back. See Ruby's comments below

Comment by toonalfrink on Can movement from Conflict to Mistake theorist be facilitated effectively? · 2019-06-03T17:47:21.880Z · score: 9 (6 votes) · LW · GW

To what extent are these just features of low-trust and high-trust environments?

Assuming that these dimensions are the same, here's my incomplete list of things that modulate trust levels:

  • Group size (smaller is easier to trust)
  • Average emotional intelligence
  • The quality of group memes that relate to emotions
  • Scarcity mindsets
  • The level of similarity of group members
  • Group identity

Some of these might screen off others. This model suggests that groups with healthy discourse tend to be small, affluent, emotionally mature and aligned.

Apart from social effects I get the impression that there are also psychological factors that modulate the tendency to trust, including:

  • Independence (of those that disagree)
  • Ambiguity tolerance
  • Agreeableness


Different answer: one thing that I've seen work is to meet someone offline. People tend to be a lot more considerate after that.

Comment by toonalfrink on Egoism In Disguise · 2019-05-31T16:22:45.270Z · score: 3 (2 votes) · LW · GW

Still, I think this line of thinking is extremely important, because it means that people won't agree with any proposed morality that isn't useful to them, and keeping this in mind makes it a lot easier to propose moralities that will actually be adopted.

Comment by toonalfrink on Egoism In Disguise · 2019-05-31T16:20:42.090Z · score: 4 (2 votes) · LW · GW

I agree with your conclusion, but feel like some nuance is lacking, in three ways.


It seems that indeed a lot of our moral reasoning is confused because we fall for some kind of moral essentialism, some idea that there is an objective morality that is more than just a cultural contract that was invented and refined by humans over the course of time.

But then you reinstall this essentialism into our "preferences", which you hold to be grounded in your feelings:

Human flourishing is good because the idea of human flourishing makes me smile. Kicking puppies is bad because it upsets me.

We recursively justify our values, and this recursion doesn't end at the boundary between consciousness and subconsciousness. Your feelings might appear to be your basic units of value, but they're not. This is obvious if you consider that our observations about the world often change our feelings.

Where does this chain of justifications end? I don't know, but I'm reasonably sure about two things:

1) The bedrock of our values is probably the same for any human being, and any difference between conscious values is due either to having seen different data or, more likely, to different people situationally benefitting from different moralities. For example, a strong person will have "values" that are more accepting of competition, but that will change once they become weaker.

2) While a confused ethicist is wrong to look for a "true" (normative) morality, this is still better than not searching at all because you hold your conscious values to be basic. The best of both worlds is an ethicist who doesn't believe in normative morality, but still knows there is something to be learned about the source of our values.


Considering our evolutionary origins, it seems very unlikely to me that we are completely selfish. It seems a lot more likely to me that the source of our values is some proxy of the survival and spread of our genes.

You're not the only one who carries your genes, and so your "selfish" preferences might not be completely selfish after all.


We're a mashup of various subagents that want different things. I'd be surprised if they all had the same moral systems. Part of you might be reflective, aware of the valence of your experience, and actively (and selfishly) trying to increase it. Part of you will reflect your preferences for things that are very not-selfish. Other parts of you will just be naive deontologists.

Comment by toonalfrink on Have you lost your purpose? · 2019-05-31T15:26:09.036Z · score: 3 (2 votes) · LW · GW

Does that still lead to good outcomes, though? I found that being motivated by my social role makes me a lot less effective, because signalling and the actual thing come apart considerably, at least in the short term.

Comment by toonalfrink on Have you lost your purpose? · 2019-05-31T15:19:20.832Z · score: 4 (2 votes) · LW · GW


It starts with the sense that, if something doesn't feel viscerally obvious, there is something left to be explained.

It's a bottom-up process. I don't decide that images will convince me, then think of some images and play them in front of me in the hope that they'll convince my s1.

Instead I "become" my s1, take on a skeptical attitude, and ask myself what the fuss is all about.

Warning: the following might give you nightmares, if you're imaginative enough.

In this case, what happened was something like "okay, well I guess at some point we're going to have pretty strong optimizers. Fine. So what? Ah, I guess that's gonna mean we're going to have some machines that carry out commands for us. Like what? Like *picture of my living room magically tidying itself up*. Really? Well yeah I can see that happening. And I suppose this magical power can also be pretty surprising. Like *blurry picture/sense of surprising outcome*. Is this possible? Yeah like *memory of this kind of surprise*. What if this surprise was like 1000x stronger? Oh fuck..."

I guess the point is that convincing a person, or a subagent, can be best explained as an internal decision to be convinced, and not as an outside force of convincingness. So if you want to convince a part of you that feels like something outside of you, then first you have to become it. You do this by sincerely endorsing whatever it has to say. Then if the part of you feels like you, you (formerly it) decide to re-evaluate the thing that the other subagent (formerly you) disagreed with.

A bit like internal double crux, but instead of going back and forth you just do one round. Guess you could call it internal ITT.

Comment by toonalfrink on What is a good moment to start writing? · 2019-05-30T12:30:06.146Z · score: 2 (1 votes) · LW · GW

which confuses me because it seems like worrying about being embarrassed is worrying about impressions?

What I meant to say is that I can tell that my work isn't going to be very good from next year's standards, which are better standards because they're more informed

Comment by toonalfrink on What features of people do you know of that might predict academic success? · 2019-05-10T18:58:31.576Z · score: 5 (2 votes) · LW · GW

I expect that there are metrics that screen off gender, so we can have better predictions and also circumvent the politics of doing anything related to gender.

Comment by toonalfrink on Literature Review: Distributed Teams · 2019-05-02T12:36:54.490Z · score: 11 (4 votes) · LW · GW

My impression is that money can only lower prestige if the amount is low relative to an anchor.

For example a $3000 prize would be high prestige if it's interpreted as an award, but low prestige if it's interpreted as a salary.

Comment by toonalfrink on Subagents, akrasia, and coherence in humans · 2019-04-24T17:15:24.200Z · score: 2 (1 votes) · LW · GW

Besides the subsystems making their own predictions, there might also be a meta-learning system keeping track of which other subsystems tend to make the most accurate predictions in each situation, giving extra weight to the bids of the subsystem which has tended to perform the best in that situation.

This is why I eat junk food, sans guilt. I don't want my central planning subagent to lose influence over unimportant details. Spend your weirdness points wisely.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-02T12:22:48.819Z · score: 3 (2 votes) · LW · GW

Yeah, otherwise you're not narrowing down one person's beliefs, but possibly going back and forth.

Comment by toonalfrink on Open Thread April 2019 · 2019-04-02T01:07:51.285Z · score: 3 (3 votes) · LW · GW

I'd expect that option to be bad overall. I might just be justifying an alief here, but it seems to me that closing a set of people off entirely will entrench you in your beliefs.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-02T00:30:28.427Z · score: 5 (3 votes) · LW · GW

We have dorms that are purely dedicated to short term paying guests. This allows us to honestly tell people that they're always welcome. I think that's great.

Comment by toonalfrink on The Case for The EA Hotel · 2019-04-01T14:20:45.108Z · score: 10 (4 votes) · LW · GW

The funniest part is that I have a friend who really talks like this. We often listen intently to what he has to say, trying to parse any meaning from it, but it seems he's either too far beyond us or just mad (I'd say both). Guess I have a new nickname for him.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:31:54.917Z · score: 2 (1 votes) · LW · GW

Well, I don't really have a justification for it (ha), but I've noticed that explicit deductive thought rarely leads me to insights that turn out to be useful. Instead, I find that simply waiting for ideas to pop into my head makes the right ideas pop into my head.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T02:29:45.317Z · score: 2 (1 votes) · LW · GW

You saw that correctly. What I mean is too often, not always.

Comment by toonalfrink on Experimental Open Thread April 2019: Socratic method · 2019-04-01T01:35:08.217Z · score: 19 (8 votes) · LW · GW

Claim: a typical rationalist is likely to be relying too much on legibility, and would benefit from sometimes not requiring an immediate explicit justification for their beliefs.

Comment by toonalfrink on Unconscious Economics · 2019-03-26T14:31:19.851Z · score: 2 (3 votes) · LW · GW

I have a different hypothesis for the "people aren't like that!" response. It's about signalling high status in order to be given high status. If I claim that "people aren't bad where I come from", it signals that I'm somehow not used to being treated badly, which is evidence that I'm not treated badly, which is evidence that mechanisms for preventing bad behavior are already in place.

This isn't just a random idea, this is introspectively the reason that I keep insisting that people really aren't bad. It's a sermon. An invitation to good people and a threat to bad ones.

The one who gets bullied is the one that openly behaves like they're already being bullied.